{"prompt": "Explain this text code:\n\npod-security.kubernetes.io/enforce-policy: restricted:latest", "completion": "This text code Audit Annotations | Value must be privileged:, baseline:, restricted: which correspond to Pod Security Standard levels accompanied by a version which must be latest or a valid Kubernetes version", "language": "text", "type": "code-example_explanation", "context": "Audit Annotations | Value must be privileged:, baseline:, restricted: which correspond to Pod Security Standard levels accompanied by a version which must be latest or a valid Kubernetes version", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/", "id": "k8s_00000_code_explain_0"} {"prompt": "Explain this text code:\n\npod-security.kubernetes.io/audit-violations: would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"example\" must set securityContext.allowPrivilegeEscalation=false), ...", "completion": "This text code Audit Annotations | Value details an audit policy violation; it contains the Pod Security Standard level that was transgressed as well as the specific policies on the fields that were violated from the PodSecurity enforcement", "language": "text", "type": "code-example_explanation", "context": "Audit Annotations | Value details an audit policy violation; it contains the Pod Security Standard level that was transgressed as well as the specific policies on the fields that were violated from the PodSecurity enforcement", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/", "id": "k8s_00000_code_explain_1"} {"prompt": "Explain this text code:\n\napiserver.latency.k8s.io/decode-response-object: \"450.6649ns\"", "completion": "This text code Audit Annotations | This annotation records the time taken to decode the response received from the storage layer (etcd)", "language": "text", "type": "code-example_explanation", "context": "Audit Annotations | This
annotation records the time taken to decode the response received from the storage layer (etcd)", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/", "id": "k8s_00000_code_explain_2"} {"prompt": "Explain this text code:\n\nauthorization.k8s.io/reason: \"Human-readable reason for the decision\"", "completion": "This text code Audit Annotations | This annotation gives the reason for the decision in Kubernetes audit logs.", "language": "text", "type": "code-example_explanation", "context": "Audit Annotations | This annotation gives the reason for the decision in Kubernetes audit logs.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/", "id": "k8s_00000_code_explain_3"} {"prompt": "Explain this text code:\n\nmissing-san.invalid-cert.kubernetes.io/example-svc.example-namespace.svc: \"relies on a legacy Common Name field instead of the SAN extension for subject validation\"", "completion": "This text code Audit Annotations | Used by Kubernetes version v1.24 and later", "language": "text", "type": "code-example_explanation", "context": "Audit Annotations | Used by Kubernetes version v1.24 and later", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/", "id": "k8s_00000_code_explain_4"} {"prompt": "Explain this text code:\n\ninsecure-sha1.invalid-cert.kubernetes.io/example-svc.example-namespace.svc: \"uses an insecure SHA-1 signature\"", "completion": "This text code Audit Annotations | Used by Kubernetes version v1.24 and later", "language": "text", "type": "code-example_explanation", "context": "Audit Annotations | Used by Kubernetes version v1.24 and later", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/", "id": "k8s_00000_code_explain_5"} {"prompt": "Explain this text code:\n\nvalidation.policy.admission.k8s.io/validation_failure: '[{\"message\": \"Invalid value\", \"policy\": \"policy.example.com\", \"binding\": \"policybinding.example.com\", \"expressionIndex\": \"1\", \"validationActions\": [\"Audit\"]}]'", "completion": "This text code Audit Annotations | Used by Kubernetes version v1.27 and later.", "language": "text", "type": "code-example_explanation", "context": "Audit Annotations | Used by Kubernetes version v1.27 and later.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/audit-annotations/", "id": "k8s_00000_code_explain_6"} {"prompt": "Complete this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s", "completion": "pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "language": "text", "type": "code-example", "context": "Change the Reclaim Policy of a PersistentVolume | This list also includes the name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_completion_7"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s\npvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "completion": "This text code Change the Reclaim Policy of a PersistentVolume | This list also includes the name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes.", "language": "text", "type": "code-example_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | This list also includes the
name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_8"} {"prompt": "Explain this bash code:\n\nkubectl patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "completion": "This bash code Change the Reclaim Policy of a PersistentVolume | where <your-pv-name> is the name of your chosen PersistentVolume.", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | where <your-pv-name> is the name of your chosen PersistentVolume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_9"} {"prompt": "Explain this bash code:\n\nkubectl patch pv <your-pv-name> -p \"{\\\"spec\\\":{\\\"persistentVolumeReclaimPolicy\\\":\\\"Retain\\\"}}\"", "completion": "This bash code Change the Reclaim Policy of a PersistentVolume | Verify that your chosen PersistentVolume has the right policy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | Verify that your chosen PersistentVolume has the right policy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_10"} {"prompt": "Complete this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s", "completion": "pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s", "language": "text", "type": "code-example", "context": "Change the Reclaim Policy of a PersistentVolume | In the preceding output, you can see that the volume bound to claim default/claim3 has reclaim policy
Retain. It will not be automatically deleted when a user deletes claim default/claim3.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_completion_11"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s\npvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s", "completion": "This text code Change the Reclaim Policy of a PersistentVolume | In the preceding output, you can see that the volume bound to claim default/claim3 has reclaim policy Retain. It will not be automatically deleted when a user deletes claim default/claim3.", "language": "text", "type": "code-example_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | In the preceding output, you can see that the volume bound to claim default/claim3 has reclaim policy Retain. 
It will not be automatically deleted when a user deletes claim default/claim3.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_12"} {"prompt": "Complete this none code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s", "completion": "pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "language": "none", "type": "code-example", "context": "Change the Reclaim Policy of a PersistentVolume | This list also includes the name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_completion_13"} {"prompt": "Explain this none code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s\npvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "completion": "This none code Change the Reclaim Policy of a PersistentVolume | This list also includes the name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes.", "language": "none", "type": "code-example_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | This list also includes the name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_14"} {"prompt": "Explain this 
shell code:\n\nkubectl patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "completion": "This shell code Change the Reclaim Policy of a PersistentVolume | where <your-pv-name> is the name of your chosen PersistentVolume.", "language": "shell", "type": "code-example_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | where <your-pv-name> is the name of your chosen PersistentVolume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_15"} {"prompt": "Explain this cmd code:\n\nkubectl patch pv <your-pv-name> -p \"{\\\"spec\\\":{\\\"persistentVolumeReclaimPolicy\\\":\\\"Retain\\\"}}\"", "completion": "This cmd code Change the Reclaim Policy of a PersistentVolume | Verify that your chosen PersistentVolume has the right policy:", "language": "cmd", "type": "code-example_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | Verify that your chosen PersistentVolume has the right policy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_16"} {"prompt": "Complete this none code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s", "completion": "pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s", "language": "none", "type": "code-example", "context": "Change the Reclaim Policy of a PersistentVolume | In the preceding output, you can see that the volume bound to claim default/claim3 has reclaim policy Retain.
It will not be automatically deleted when a user deletes claim default/claim3.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_completion_17"} {"prompt": "Explain this none code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s\npvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s\npvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s", "completion": "This none code Change the Reclaim Policy of a PersistentVolume | In the preceding output, you can see that the volume bound to claim default/claim3 has reclaim policy Retain. It will not be automatically deleted when a user deletes claim default/claim3.", "language": "none", "type": "code-example_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | In the preceding output, you can see that the volume bound to claim default/claim3 has reclaim policy Retain. 
It will not be automatically deleted when a user deletes claim default/claim3.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_18"} {"prompt": "Explain this bash code:\n\nkubectl patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "completion": "This bash code Change the Reclaim Policy of a PersistentVolume | where <your-pv-name> is the name of your chosen PersistentVolume.", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | where <your-pv-name> is the name of your chosen PersistentVolume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_19"} {"prompt": "Explain this bash code:\n\nkubectl patch pv <your-pv-name> -p \"{\\\"spec\\\":{\\\"persistentVolumeReclaimPolicy\\\":\\\"Retain\\\"}}\"", "completion": "This bash code Change the Reclaim Policy of a PersistentVolume | Verify that your chosen PersistentVolume has the right policy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Reclaim Policy of a PersistentVolume | Verify that your chosen PersistentVolume has the right policy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/", "id": "k8s_00001_code_explain_20"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns\nspec:", "completion": " hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "language": "yaml", "type": "pod-definition", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_completion_21"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns\nspec:\n hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\",
\"infinity\"]\n image: debian", "completion": "This yaml code Use a User Namespace With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_explain_22"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/user-namespaces-stateless.yaml", "completion": "This bash code Use a User Namespace With a Pod | Exec into the pod and run readlink /proc/self/ns/user:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a User Namespace With a Pod | Exec into the pod and run readlink /proc/self/ns/user:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_explain_23"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns\nspec:", "completion": " hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "language": "yaml", "type": "pod-definition", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_completion_24"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns\nspec:\n hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "completion": "This yaml code Use a User Namespace With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_explain_25"} {"prompt": "Explain this shell code:\n\nkubectl apply -f 
https://k8s.io/examples/pods/user-namespaces-stateless.yaml", "completion": "This shell code Use a User Namespace With a Pod | Exec into the pod and run readlink /proc/self/ns/user:", "language": "shell", "type": "code-example_explanation", "context": "Use a User Namespace With a Pod | Exec into the pod and run readlink /proc/self/ns/user:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_explain_26"} {"prompt": "Complete this yaml code:\n\npods/user-namespaces-stateless.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns", "completion": "spec:\n hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "language": "yaml", "type": "pod-definition", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_completion_27"} {"prompt": "Explain this yaml code:\n\npods/user-namespaces-stateless.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns\nspec:\n hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "completion": "This yaml code Use a User Namespace With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_explain_28"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns\nspec:", "completion": " hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "language": "yaml", "type": "pod-definition", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_completion_29"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: userns\nspec:\n hostUsers: false\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "completion": "This yaml code Use a User Namespace With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use a User Namespace With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_explain_30"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/user-namespaces-stateless.yaml", "completion": "This bash code Use a User Namespace With a Pod | Exec into the pod and run readlink /proc/self/ns/user:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a User Namespace With a Pod | Exec into the pod and run readlink /proc/self/ns/user:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/user-namespaces/", "id": "k8s_00002_code_explain_31"} {"prompt": "Complete this text code:\n\npreflight Run pre-flight checks\ncerts Certificate generation\n /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components\n /apiserver Generate the certificate for serving the Kubernetes API\n /apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet\n /front-proxy-ca Generate the self-signed CA to provision identities for front proxy\n /front-proxy-client Generate the certificate for the front proxy client\n /etcd-ca Generate the self-signed CA to provision identities for etcd\n /etcd-server Generate the certificate for serving etcd\n /etcd-peer Generate the certificate for etcd nodes to communicate with each other\n /etcd-healthcheck-client Generate the certificate for 
liveness probes to healthcheck etcd\n /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd\n /sa Generate a private key for signing service account tokens along with its public key\nkubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file\n /admin Generate a kubeconfig file for the admin to use and for kubeadm itself\n /super-admin Generate a kubeconfig file for the super-admin\n /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes\n /controller-manager Generate a kubeconfig file for the controller manager to use\n /scheduler Generate a kubeconfig file for the scheduler to use", "completion": "etcd Generate static Pod manifest file for local etcd\n /local Generate the static Pod manifest file for a local, single-node local etcd instance\ncontrol-plane Generate all static Pod manifest files necessary to establish the control plane\n /apiserver Generates the kube-apiserver static Pod manifest\n /controller-manager Generates the kube-controller-manager static Pod manifest\n /scheduler Generates the kube-scheduler static Pod manifest\nkubelet-start Write kubelet settings and (re)start the kubelet\nwait-control-plane Wait for the control plane to start\nupload-config Upload the kubeadm and kubelet configuration to a ConfigMap\n /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap\n /kubelet Upload the kubelet component config to a ConfigMap\nupload-certs Upload certificates to kubeadm-certs\nmark-control-plane Mark a node as a control-plane\nbootstrap-token Generates bootstrap tokens used to join a node to a cluster\nkubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap\n /enable-client-cert-rotation Enable kubelet client certificate rotation\naddon Install required addons for passing conformance tests\n /coredns Install the CoreDNS addon to a Kubernetes cluster\n /kube-proxy Install the kube-proxy addon 
to a Kubernetes cluster\nshow-join-command Show the join command for control-plane and worker node", "language": "text", "type": "code-example", "context": "kubeadm init | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_completion_32"} {"prompt": "Explain this text code:\n\npreflight Run pre-flight checks\ncerts Certificate generation\n /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components\n /apiserver Generate the certificate for serving the Kubernetes API\n /apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet\n /front-proxy-ca Generate the self-signed CA to provision identities for front proxy\n /front-proxy-client Generate the certificate for the front proxy client\n /etcd-ca Generate the self-signed CA to provision identities for etcd\n /etcd-server Generate the certificate for serving etcd\n /etcd-peer Generate the certificate for etcd nodes to communicate with each other\n /etcd-healthcheck-client Generate the certificate for liveness probes to healthcheck etcd\n /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd\n /sa Generate a private key for signing service account tokens along with its public key\nkubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file\n /admin Generate a kubeconfig file for the admin to use and for kubeadm itself\n /super-admin Generate a kubeconfig file for the super-admin\n /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes\n /controller-manager Generate a kubeconfig file for the controller manager to use\n /scheduler Generate a kubeconfig file for the scheduler to use\netcd Generate static Pod manifest file for local etcd\n /local Generate 
the static Pod manifest file for a local, single-node local etcd instance\ncontrol-plane Generate all static Pod manifest files necessary to establish the control plane\n /apiserver Generates the kube-apiserver static Pod manifest\n /controller-manager Generates the kube-controller-manager static Pod manifest\n /scheduler Generates the kube-scheduler static Pod manifest\nkubelet-start Write kubelet settings and (re)start the kubelet\nwait-control-plane Wait for the control plane to start\nupload-config Upload the kubeadm and kubelet configuration to a ConfigMap\n /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap\n /kubelet Upload the kubelet component config to a ConfigMap\nupload-certs Upload certificates to kubeadm-certs\nmark-control-plane Mark a node as a control-plane\nbootstrap-token Generates bootstrap tokens used to join a node to a cluster\nkubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap\n /enable-client-cert-rotation Enable kubelet client certificate rotation\naddon Install required addons for passing conformance tests\n /coredns Install the CoreDNS addon to a Kubernetes cluster\n /kube-proxy Install the kube-proxy addon to a Kubernetes cluster\nshow-join-command Show the join command for control-plane and worker node", "completion": "This text code kubeadm init | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init | The IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_33"} {"prompt": "Explain this text code:\n\nsudo kubeadm init phase control-plane controller-manager --help", "completion": "This text code kubeadm init | You can also use --help to see the list of sub-phases for a certain parent phase:", "language": "text", "type": "code-example_explanation", "context": "kubeadm init | You can also use --help to see the list of sub-phases for a certain parent phase:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_34"} {"prompt": "Complete this text code:\n\nsudo kubeadm init phase control-plane all --config=configfile.yaml\nsudo kubeadm init phase etcd local --config=configfile.yaml", "completion": "# you can now modify the control plane and etcd manifest files\nsudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml", "language": "text", "type": "code-example", "context": "kubeadm init | What this example would do is write the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. This allows you to modify the files an", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_completion_35"} {"prompt": "Explain this text code:\n\nsudo kubeadm init phase control-plane all --config=configfile.yaml\nsudo kubeadm init phase etcd local --config=configfile.yaml\n# you can now modify the control plane and etcd manifest files\nsudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml", "completion": "This text code kubeadm init | What this example would do is write the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. 
This allows you to modify the files an", "language": "text", "type": "code-example_explanation", "context": "kubeadm init | What this example would do is write the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. This allows you to modify the files an", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_36"} {"prompt": "Explain this text code:\n\nkubeadm config images list\nkubeadm config images pull", "completion": "This text code kubeadm init | You can pass --config to the above commands with a kubeadm configuration file to control the kubernetesVersion and imageRepository fields.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init | You can pass --config to the above commands with a kubeadm configuration file to control the kubernetesVersion and imageRepository fields.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_37"} {"prompt": "Explain this text code:\n\nkubeadm init phase upload-certs --upload-certs --config=SOME_YAML_FILE", "completion": "This text code kubeadm init | If a predefined certificate key is not passed to kubeadm init and kubeadm init phase upload-certs a new key will be generated automatically.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init | If a predefined certificate key is not passed to kubeadm init and kubeadm init phase upload-certs a new key will be generated automatically.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_38"} {"prompt": "Complete this text code:\n\npreflight Run pre-flight checks\ncerts Certificate generation\n /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components\n /apiserver Generate the certificate for serving the Kubernetes API\n 
/apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet\n /front-proxy-ca Generate the self-signed CA to provision identities for front proxy\n /front-proxy-client Generate the certificate for the front proxy client\n /etcd-ca Generate the self-signed CA to provision identities for etcd\n /etcd-server Generate the certificate for serving etcd\n /etcd-peer Generate the certificate for etcd nodes to communicate with each other\n /etcd-healthcheck-client Generate the certificate for liveness probes to healthcheck etcd\n /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd\n /sa Generate a private key for signing service account tokens along with its public key\nkubeconfig Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file\n /admin Generate a kubeconfig file for the admin to use and for kubeadm itself\n /super-admin Generate a kubeconfig file for the super-admin\n /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes\n /controller-manager Generate a kubeconfig file for the controller manager to use\n /scheduler Generate a kubeconfig file for the scheduler to use", "completion": "etcd Generate static Pod manifest file for local etcd\n /local Generate the static Pod manifest file for a local, single-node local etcd instance\ncontrol-plane Generate all static Pod manifest files necessary to establish the control plane\n /apiserver Generates the kube-apiserver static Pod manifest\n /controller-manager Generates the kube-controller-manager static Pod manifest\n /scheduler Generates the kube-scheduler static Pod manifest\nkubelet-start Write kubelet settings and (re)start the kubelet\nwait-control-plane Wait for the control plane to start\nupload-config Upload the kubeadm and kubelet configuration to a ConfigMap\n /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap\n /kubelet Upload the kubelet component 
config to a ConfigMap\nupload-certs Upload certificates to kubeadm-certs\nmark-control-plane Mark a node as a control-plane\nbootstrap-token Generates bootstrap tokens used to join a node to a cluster\nkubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap\n /enable-client-cert-rotation Enable kubelet client certificate rotation\naddon Install required addons for passing conformance tests\n /coredns Install the CoreDNS addon to a Kubernetes cluster\n /kube-proxy Install the kube-proxy addon to a Kubernetes cluster\nshow-join-command Show the join command for control-plane and worker node", "language": "text", "type": "code-example", "context": "kubeadm init | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_completion_39"} {"prompt": "Explain this text code:\n\npreflight Run pre-flight checks\ncerts Certificate generation\n /ca Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components\n /apiserver Generate the certificate for serving the Kubernetes API\n /apiserver-kubelet-client Generate the certificate for the API server to connect to kubelet\n /front-proxy-ca Generate the self-signed CA to provision identities for front proxy\n /front-proxy-client Generate the certificate for the front proxy client\n /etcd-ca Generate the self-signed CA to provision identities for etcd\n /etcd-server Generate the certificate for serving etcd\n /etcd-peer Generate the certificate for etcd nodes to communicate with each other\n /etcd-healthcheck-client Generate the certificate for liveness probes to healthcheck etcd\n /apiserver-etcd-client Generate the certificate the apiserver uses to access etcd\n /sa Generate a private key for signing service account tokens along with its public key\nkubeconfig Generate all kubeconfig files necessary to 
establish the control plane and the admin kubeconfig file\n /admin Generate a kubeconfig file for the admin to use and for kubeadm itself\n /super-admin Generate a kubeconfig file for the super-admin\n /kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes\n /controller-manager Generate a kubeconfig file for the controller manager to use\n /scheduler Generate a kubeconfig file for the scheduler to use\netcd Generate static Pod manifest file for local etcd\n /local Generate the static Pod manifest file for a local, single-node local etcd instance\ncontrol-plane Generate all static Pod manifest files necessary to establish the control plane\n /apiserver Generates the kube-apiserver static Pod manifest\n /controller-manager Generates the kube-controller-manager static Pod manifest\n /scheduler Generates the kube-scheduler static Pod manifest\nkubelet-start Write kubelet settings and (re)start the kubelet\nwait-control-plane Wait for the control plane to start\nupload-config Upload the kubeadm and kubelet configuration to a ConfigMap\n /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap\n /kubelet Upload the kubelet component config to a ConfigMap\nupload-certs Upload certificates to kubeadm-certs\nmark-control-plane Mark a node as a control-plane\nbootstrap-token Generates bootstrap tokens used to join a node to a cluster\nkubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap\n /enable-client-cert-rotation Enable kubelet client certificate rotation\naddon Install required addons for passing conformance tests\n /coredns Install the CoreDNS addon to a Kubernetes cluster\n /kube-proxy Install the kube-proxy addon to a Kubernetes cluster\nshow-join-command Show the join command for control-plane and worker node", "completion": "This text code kubeadm init | The IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_40"} {"prompt": "Explain this shell code:\n\nsudo kubeadm init phase control-plane controller-manager --help", "completion": "This shell code kubeadm init | You can also use --help to see the list of sub-phases for a certain parent phase:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm init | You can also use --help to see the list of sub-phases for a certain parent phase:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_41"} {"prompt": "Complete this shell code:\n\nsudo kubeadm init phase control-plane all --config=configfile.yaml\nsudo kubeadm init phase etcd local --config=configfile.yaml", "completion": "# you can now modify the control plane and etcd manifest files\nsudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml", "language": "shell", "type": "code-example", "context": "kubeadm init | What this example would do is write the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. 
This allows you to modify the files an", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_completion_42"} {"prompt": "Explain this shell code:\n\nsudo kubeadm init phase control-plane all --config=configfile.yaml\nsudo kubeadm init phase etcd local --config=configfile.yaml\n# you can now modify the control plane and etcd manifest files\nsudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml", "completion": "This shell code kubeadm init | What this example would do is write the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. This allows you to modify the files an", "language": "shell", "type": "code-example_explanation", "context": "kubeadm init | What this example would do is write the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. This allows you to modify the files an", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_43"} {"prompt": "Explain this shell code:\n\nkubeadm config images list\nkubeadm config images pull", "completion": "This shell code kubeadm init | You can pass --config to the above commands with a kubeadm configuration file to control the kubernetesVersion and imageRepository fields.", "language": "shell", "type": "code-example_explanation", "context": "kubeadm init | You can pass --config to the above commands with a kubeadm configuration file to control the kubernetesVersion and imageRepository fields.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_44"} {"prompt": "Explain this shell code:\n\nkubeadm init phase upload-certs --upload-certs --config=SOME_YAML_FILE", "completion": "This shell code kubeadm init | If a predefined certificate key is not passed to kubeadm init and kubeadm init 
phase upload-certs a new key will be generated automatically.", "language": "shell", "type": "code-example_explanation", "context": "kubeadm init | If a predefined certificate key is not passed to kubeadm init and kubeadm init phase upload-certs a new key will be generated automatically.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/", "id": "k8s_00003_code_explain_45"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.0\napiServer:", "completion": " extraArgs:\n - name: \"enable-admission-plugins\"\n value: \"AlwaysPullImages,DefaultStorageClass\"\n - name: \"audit-log-path\"\n value: \"/home/johndoe/audit.log\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing components with the kubeadm API | For details, see the reference documentation for kube-controller-manager.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_completion_51"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.0\napiServer:\n extraArgs:\n - name: \"enable-admission-plugins\"\n value: \"AlwaysPullImages,DefaultStorageClass\"\n - name: \"audit-log-path\"\n value: \"/home/johndoe/audit.log\"", "completion": "This yaml code Customizing components with the kubeadm API | For details, see the reference documentation for kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing components with the kubeadm API | For details, see the reference documentation for kube-controller-manager.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_explain_52"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: 
v1.16.0\ncontrollerManager:", "completion": " extraArgs:\n - name: \"cluster-signing-key-file\"\n value: \"/home/johndoe/keys/ca.key\"\n - name: \"deployment-controller-sync-period\"\n value: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing components with the kubeadm API | For details, see the reference documentation for kube-scheduler.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_completion_53"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.0\ncontrollerManager:\n extraArgs:\n - name: \"cluster-signing-key-file\"\n value: \"/home/johndoe/keys/ca.key\"\n - name: \"deployment-controller-sync-period\"\n value: \"50\"", "completion": "This yaml code Customizing components with the kubeadm API | For details, see the reference documentation for kube-scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing components with the kubeadm API | For details, see the reference documentation for kube-scheduler.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_explain_54"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.0\nscheduler:\n extraArgs:\n - name: \"config\"", "completion": " value: \"/etc/kubernetes/scheduler-config.yaml\"\n extraVolumes:\n - name: schedulerconfig\n hostPath: /home/johndoe/schedconfig.yaml\n mountPath: /etc/kubernetes/scheduler-config.yaml\n readOnly: true\n pathType: \"File\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing components with the kubeadm API | For details, see the etcd server documentation.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": 
"k8s_00004_code_completion_55"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.0\nscheduler:\n extraArgs:\n - name: \"config\"\n value: \"/etc/kubernetes/scheduler-config.yaml\"\n extraVolumes:\n - name: schedulerconfig\n hostPath: /home/johndoe/schedconfig.yaml\n mountPath: /etc/kubernetes/scheduler-config.yaml\n readOnly: true\n pathType: \"File\"", "completion": "This yaml code Customizing components with the kubeadm API | For details, see the etcd server documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing components with the kubeadm API | For details, see the etcd server documentation.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_explain_56"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\netcd:", "completion": " local:\n extraArgs:\n - name: \"election-timeout\"\n value: 1000", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing components with the kubeadm API | Kubeadm allows you to pass a directory with patch files to InitConfiguration and JoinConfiguration on individual nodes. These patches can be used as the last customization step before component config", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_completion_57"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\netcd:\n local:\n extraArgs:\n - name: \"election-timeout\"\n value: 1000", "completion": "This yaml code Customizing components with the kubeadm API | Kubeadm allows you to pass a directory with patch files to InitConfiguration and JoinConfiguration on individual nodes. 
These patches can be used as the last customization step before component config", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing components with the kubeadm API | Kubeadm allows you to pass a directory with patch files to InitConfiguration and JoinConfiguration on individual nodes. These patches can be used as the last customization step before component config", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_explain_58"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration", "completion": "patches:\n directory: /home/user/somedir", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing components with the kubeadm API | You can pass this file to kubeadm join with --config :", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_completion_59"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\npatches:\n directory: /home/user/somedir", "completion": "This yaml code Customizing components with the kubeadm API | You can pass this file to kubeadm join with --config :", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing components with the kubeadm API | You can pass this file to kubeadm join with --config :", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_explain_60"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration", "completion": "patches:\n directory: /home/user/somedir", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing components with the kubeadm API | The directory must contain files named target[suffix][+patchtype].extension. 
For example, kube-apiserver0+merge.yaml or just etcd.json.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_completion_61"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\npatches:\n directory: /home/user/somedir", "completion": "This yaml code Customizing components with the kubeadm API | The directory must contain files named target[suffix][+patchtype].extension. For example, kube-apiserver0+merge.yaml or just etcd.json.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing components with the kubeadm API | The directory must contain files named target[suffix][+patchtype].extension. For example, kube-apiserver0+merge.yaml or just etcd.json.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/", "id": "k8s_00004_code_explain_62"} {"prompt": "Complete this text code:\n\n# Print the supported API resources\n kubectl api-resources\n \n # Print the supported API resources with more information\n kubectl api-resources -o wide\n \n # Print the supported API resources sorted by a column\n kubectl api-resources --sort-by=name", "completion": " \n # Print the supported namespaced resources\n kubectl api-resources --namespaced=true\n \n # Print the supported non-namespaced resources\n kubectl api-resources --namespaced=false\n \n # Print the supported API resources with a specific APIGroup\n kubectl api-resources --api-group=rbac.authorization.k8s.io", "language": "text", "type": "code-example", "context": "kubectl api-resources | Limit to resources in the specified API group.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_api-resources/", "id": "k8s_00006_code_completion_87"} {"prompt": "Explain this text code:\n\n# Print the supported API resources\n kubectl api-resources\n \n # Print the supported API resources with more 
information\n kubectl api-resources -o wide\n \n # Print the supported API resources sorted by a column\n kubectl api-resources --sort-by=name\n \n # Print the supported namespaced resources\n kubectl api-resources --namespaced=true\n \n # Print the supported non-namespaced resources\n kubectl api-resources --namespaced=false\n \n # Print the supported API resources with a specific APIGroup\n kubectl api-resources --api-group=rbac.authorization.k8s.io", "completion": "This text code kubectl api-resources | Limit to resources in the specified API group.", "language": "text", "type": "code-example_explanation", "context": "kubectl api-resources | Limit to resources in the specified API group.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_api-resources/", "id": "k8s_00006_code_explain_88"} {"prompt": "Complete this text code:\n\n# Print the supported API resources\n kubectl api-resources\n \n # Print the supported API resources with more information\n kubectl api-resources -o wide\n \n # Print the supported API resources sorted by a column\n kubectl api-resources --sort-by=name", "completion": " \n # Print the supported namespaced resources\n kubectl api-resources --namespaced=true\n \n # Print the supported non-namespaced resources\n kubectl api-resources --namespaced=false\n \n # Print the supported API resources with a specific APIGroup\n kubectl api-resources --api-group=rbac.authorization.k8s.io", "language": "text", "type": "code-example", "context": "kubectl api-resources | Limit to resources in the specified API group.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_api-resources/", "id": "k8s_00006_code_completion_89"} {"prompt": "Explain this text code:\n\n# Print the supported API resources\n kubectl api-resources\n \n # Print the supported API resources with more information\n kubectl api-resources -o wide\n \n # Print the supported API resources sorted by a column\n kubectl api-resources 
--sort-by=name\n \n # Print the supported namespaced resources\n kubectl api-resources --namespaced=true\n \n # Print the supported non-namespaced resources\n kubectl api-resources --namespaced=false\n \n # Print the supported API resources with a specific APIGroup\n kubectl api-resources --api-group=rbac.authorization.k8s.io", "completion": "This text code kubectl api-resources | Limit to resources in the specified API group.", "language": "text", "type": "code-example_explanation", "context": "kubectl api-resources | Limit to resources in the specified API group.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_api-resources/", "id": "k8s_00006_code_explain_90"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:", "completion": " - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: php-apache\nspec:\n ports:\n - port: 80\n selector:\n run: php-apache", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_91"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:\n - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: 
php-apache\nspec:\n ports:\n - port: 80\n selector:\n run: php-apache", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_92"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/php-apache.yaml", "completion": "This bash code HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "language": "bash", "type": "kubectl-command_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_93"} {"prompt": "Explain this text code:\n\ndeployment.apps/php-apache created\nservice/php-apache created", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. 
The kubectl autoscale subcommand, part of kubectl, helps you do this.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_94"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10", "completion": "This bash code HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_95"} {"prompt": "Explain this text code:\n\nhorizontalpodautoscaler.autoscaling/php-apache autoscaled", "completion": "This text code HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_96"} {"prompt": "Explain this text code:\n\n# You can use \"hpa\" or \"horizontalpodautoscaler\"; either name works OK.\nkubectl get hpa", "completion": "This text code HorizontalPodAutoscaler Walkthrough | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_97"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\nphp-apache 
Deployment/php-apache/scale 0% / 50% 1 10 1 18s", "completion": "This text code HorizontalPodAutoscaler Walkthrough | (if you see other HorizontalPodAutoscalers with different names, that means they already existed, and isn't usually a problem).", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | (if you see other HorizontalPodAutoscalers with different names, that means they already existed, and isn't usually a problem).", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_98"} {"prompt": "Explain this text code:\n\n# Run this in a separate terminal\n# so that the load generation continues and you can carry on with the rest of the steps\nkubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c \"while sleep 0.01; do wget -q -O- http://php-apache; done\"", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Now run:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now run:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_99"} {"prompt": "Explain this text code:\n\n# type Ctrl+C to end the watch when you're ready\nkubectl get hpa php-apache --watch", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Within a minute or so, you should see the higher CPU load; for example:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Within a minute or so, you should see the higher CPU load; for example:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_100"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\nphp-apache Deployment/php-apache/scale 305% / 50% 1 10 1 
3m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | and then, more replicas. For example:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | and then, more replicas. For example:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_101"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\nphp-apache Deployment/php-apache/scale 305% / 50% 1 10 7 3m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Here, CPU consumption has increased to 305% of the request. As a result, the Deployment was resized to 7 replicas:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Here, CPU consumption has increased to 305% of the request. As a result, the Deployment was resized to 7 replicas:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_102"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nphp-apache 7/7 7 7 19m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | To finish the example, stop sending the load.", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | To finish the example, stop sending the load.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_103"} {"prompt": "Explain this text code:\n\n# type Ctrl+C to end the watch when you're ready\nkubectl get hpa php-apache --watch", "completion": "This text code HorizontalPodAutoscaler Walkthrough | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The output is similar to:", "source": 
"https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_104"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\nphp-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | and the Deployment also shows that it has scaled down:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | and the Deployment also shows that it has scaled down:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_105"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nphp-apache 1/1 1 1 27m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Once CPU utilization dropped to 0, the HPA automatically scaled the number of replicas back down to 1.", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Once CPU utilization dropped to 0, the HPA automatically scaled the number of replicas back down to 1.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_106"} {"prompt": "Explain this bash code:\n\nkubectl get hpa php-apache -o yaml > /tmp/hpa-v2.yaml", "completion": "This bash code HorizontalPodAutoscaler Walkthrough | Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_107"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: 
HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu", "completion": " target:\n type: Utilization\n averageUtilization: 50\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_108"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. 
The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_109"} {"prompt": "Complete this text code:\n\nmetrics:\n - type: Resource\n resource:", "completion": " name: memory\n target:\n type: AverageValue\n averageValue: 500Mi", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. These metrics may have names which are cluster specific, and require a more advanced", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_110"} {"prompt": "Explain this text code:\n\nmetrics:\n - type: Resource\n resource:\n name: memory\n target:\n type: AverageValue\n averageValue: 500Mi", "completion": "This text code HorizontalPodAutoscaler Walkthrough | There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. These metrics may have names which are cluster specific, and require a more advanced", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. 
These metrics may have names which are cluster specific, and require a more advanced", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_111"} {"prompt": "Complete this text code:\n\ntype: Pods\npods:\n metric:", "completion": " name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the obje", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_112"} {"prompt": "Explain this text code:\n\ntype: Pods\npods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k", "completion": "This text code HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the obje", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. 
The metrics are not necessarily fetched from the obje", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_113"} {"prompt": "Complete this yaml code:\n\ntype: Object\nobject:\n metric:\n name: requests-per-second\n describedObject:", "completion": " apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 2k", "language": "yaml", "type": "kubernetes-manifest", "context": "HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_114"} {"prompt": "Explain this yaml code:\n\ntype: Object\nobject:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 2k", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. 
The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_115"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\n - type: Pods\n pods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k\n - type: Object\n object:\n metric:\n name: requests-per-second", "completion": " describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 10k\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n current:\n value: 10k", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_116"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - 
type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\n - type: Pods\n pods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 10k\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n current:\n value: 10k", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_117"} {"prompt": "Complete this text code:\n\ntype: Object\nobject:", "completion": " metric:\n name: http_requests\n selector: {matchLabels: {verb: GET}}", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. 
The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_118"} {"prompt": "Explain this text code:\n\ntype: Object\nobject:\n metric:\n name: http_requests\n selector: {matchLabels: {verb: GET}}", "completion": "This text code HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_119"} {"prompt": "Complete this text code:\n\n- type: External\n external:\n metric:\n name: queue_messages_ready\n selector:", "completion": " matchLabels:\n queue: \"worker_tasks\"\n target:\n type: AverageValue\n averageValue: 30", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. 
The external metrics AP", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_120"} {"prompt": "Explain this text code:\n\n- type: External\n external:\n metric:\n name: queue_messages_ready\n selector:\n matchLabels:\n queue: \"worker_tasks\"\n target:\n type: AverageValue\n averageValue: 30", "completion": "This text code HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics AP", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics AP", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_121"} {"prompt": "Complete this text code:\n\nName: cm-test\nNamespace: prom\nLabels: \nAnnotations: \nCreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000\nReference: ReplicationController/cm-test\nMetrics: ( current / target )\n \"http_requests\" on pods: 66m / 500m\nMin replicas: 1", "completion": "Max replicas: 4\nReplicationController pods: 1 current / 1 desired\nConditions:\n Type Status Reason Message\n ---- ------ ------ -------\n AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale\n ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests\n ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range\nEvents:", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | For this HorizontalPodAutoscaler, you 
can see several conditions in a healthy state. The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_122"} {"prompt": "Explain this text code:\n\nName: cm-test\nNamespace: prom\nLabels: \nAnnotations: \nCreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000\nReference: ReplicationController/cm-test\nMetrics: ( current / target )\n \"http_requests\" on pods: 66m / 500m\nMin replicas: 1\nMax replicas: 4\nReplicationController pods: 1 current / 1 desired\nConditions:\n Type Status Reason Message\n ---- ------ ------ -------\n AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale\n ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests\n ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range\nEvents:", "completion": "This text code HorizontalPodAutoscaler Walkthrough | For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. 
The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_123"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache", "completion": " minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_124"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_125"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml", "completion": "This bash code HorizontalPodAutoscaler Walkthrough |", "language": "bash", "type": "kubectl-command_explanation", "context": 
"HorizontalPodAutoscaler Walkthrough |", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_126"} {"prompt": "Explain this text code:\n\nhorizontalpodautoscaler.autoscaling/php-apache created", "completion": "This text code HorizontalPodAutoscaler Walkthrough |", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough |", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_127"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:", "completion": " - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: php-apache\nspec:\n ports:\n - port: 80\n selector:\n run: php-apache", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_128"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:\n - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: php-apache\nspec:\n ports:\n - port: 80\n selector:\n run: php-apache", "completion": "This yaml code 
HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_129"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/php-apache.yaml", "completion": "This shell code HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "language": "shell", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_130"} {"prompt": "Explain this text code:\n\ndeployment.apps/php-apache created\nservice/php-apache created", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. 
The kubectl autoscale subcommand, part of kubectl, helps you do this.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_131"} {"prompt": "Explain this shell code:\n\nkubectl autoscale deployment php-apache --cpu=50% --min=1 --max=10", "completion": "This shell code HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "language": "shell", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_132"} {"prompt": "Explain this text code:\n\nhorizontalpodautoscaler.autoscaling/php-apache autoscaled", "completion": "This text code HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_133"} {"prompt": "Explain this shell code:\n\n# You can use \"hpa\" or \"horizontalpodautoscaler\"; either name works OK.\nkubectl get hpa", "completion": "This shell code HorizontalPodAutoscaler Walkthrough | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_134"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS 
AGE\nphp-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s", "completion": "This text code HorizontalPodAutoscaler Walkthrough | (if you see other HorizontalPodAutoscalers with different names, that means they already existed, and isn't usually a problem).", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | (if you see other HorizontalPodAutoscalers with different names, that means they already existed, and isn't usually a problem).", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_135"} {"prompt": "Explain this shell code:\n\n# Run this in a separate terminal\n# so that the load generation continues and you can carry on with the rest of the steps\nkubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c \"while sleep 0.01; do wget -q -O- http://php-apache; done\"", "completion": "This shell code HorizontalPodAutoscaler Walkthrough | Now run:", "language": "shell", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now run:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_136"} {"prompt": "Explain this shell code:\n\n# type Ctrl+C to end the watch when you're ready\nkubectl get hpa php-apache --watch", "completion": "This shell code HorizontalPodAutoscaler Walkthrough | Within a minute or so, you should see the higher CPU load; for example:", "language": "shell", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Within a minute or so, you should see the higher CPU load; for example:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_137"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\nphp-apache 
Deployment/php-apache/scale 305% / 50% 1 10 1 3m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | and then, more replicas. For example:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | and then, more replicas. For example:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_138"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\nphp-apache Deployment/php-apache/scale 305% / 50% 1 10 7 3m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Here, CPU consumption has increased to 305% of the request. As a result, the Deployment was resized to 7 replicas:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Here, CPU consumption has increased to 305% of the request. As a result, the Deployment was resized to 7 replicas:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_139"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nphp-apache 7/7 7 7 19m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | To finish the example, stop sending the load.", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | To finish the example, stop sending the load.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_140"} {"prompt": "Explain this shell code:\n\n# type Ctrl+C to end the watch when you're ready\nkubectl get hpa php-apache --watch", "completion": "This shell code HorizontalPodAutoscaler Walkthrough | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The output is similar 
to:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_141"} {"prompt": "Explain this text code:\n\nNAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\nphp-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | and the Deployment also shows that it has scaled down:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | and the Deployment also shows that it has scaled down:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_142"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nphp-apache 1/1 1 1 27m", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Once CPU utilization dropped to 0, the HPA automatically scaled the number of replicas back down to 1.", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Once CPU utilization dropped to 0, the HPA automatically scaled the number of replicas back down to 1.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_143"} {"prompt": "Explain this shell code:\n\nkubectl get hpa php-apache -o yaml > /tmp/hpa-v2.yaml", "completion": "This shell code HorizontalPodAutoscaler Walkthrough | Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:", "language": "shell", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_144"} {"prompt": "Complete this yaml code:\n\napiVersion: 
autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu", "completion": " target:\n type: Utilization\n averageUtilization: 50\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_145"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. 
The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_146"} {"prompt": "Complete this text code:\n\nmetrics:\n - type: Resource\n resource:", "completion": " name: memory\n target:\n type: AverageValue\n averageValue: 500Mi", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. These metrics may have names which are cluster specific, and require a more advanced", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_147"} {"prompt": "Explain this text code:\n\nmetrics:\n - type: Resource\n resource:\n name: memory\n target:\n type: AverageValue\n averageValue: 500Mi", "completion": "This text code HorizontalPodAutoscaler Walkthrough | There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. These metrics may have names which are cluster specific, and require a more advanced", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. 
These metrics may have names which are cluster specific, and require a more advanced", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_148"} {"prompt": "Complete this yaml code:\n\ntype: Pods\npods:\n metric:", "completion": " name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k", "language": "yaml", "type": "kubernetes-manifest", "context": "HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the obje", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_149"} {"prompt": "Explain this yaml code:\n\ntype: Pods\npods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the obje", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. 
The metrics are not necessarily fetched from the obje", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_150"} {"prompt": "Complete this yaml code:\n\ntype: Object\nobject:\n metric:\n name: requests-per-second\n describedObject:", "completion": " apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 2k", "language": "yaml", "type": "kubernetes-manifest", "context": "HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_151"} {"prompt": "Explain this yaml code:\n\ntype: Object\nobject:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 2k", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. 
The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_152"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\n - type: Pods\n pods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k\n - type: Object\n object:\n metric:\n name: requests-per-second", "completion": " describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 10k\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n current:\n value: 10k", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_153"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - 
type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\n - type: Pods\n pods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 10k\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n current:\n value: 10k", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_154"} {"prompt": "Complete this yaml code:\n\ntype: Object\nobject:", "completion": " metric:\n name: http_requests\n selector: {matchLabels: {verb: GET}}", "language": "yaml", "type": "kubernetes-manifest", "context": "HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. 
The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_155"} {"prompt": "Explain this yaml code:\n\ntype: Object\nobject:\n metric:\n name: http_requests\n selector: {matchLabels: {verb: GET}}", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_156"} {"prompt": "Complete this yaml code:\n\n- type: External\n external:\n metric:\n name: queue_messages_ready\n selector:", "completion": " matchLabels:\n queue: \"worker_tasks\"\n target:\n type: AverageValue\n averageValue: 30", "language": "yaml", "type": "kubernetes-manifest", "context": "HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. 
The external metrics AP", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_157"} {"prompt": "Explain this yaml code:\n\n- type: External\n external:\n metric:\n name: queue_messages_ready\n selector:\n matchLabels:\n queue: \"worker_tasks\"\n target:\n type: AverageValue\n averageValue: 30", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics AP", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics AP", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_158"} {"prompt": "Complete this text code:\n\nName: cm-test\nNamespace: prom\nLabels: \nAnnotations: \nCreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000\nReference: ReplicationController/cm-test\nMetrics: ( current / target )\n \"http_requests\" on pods: 66m / 500m\nMin replicas: 1", "completion": "Max replicas: 4\nReplicationController pods: 1 current / 1 desired\nConditions:\n Type Status Reason Message\n ---- ------ ------ -------\n AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale\n ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests\n ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range\nEvents:", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | For this 
HorizontalPodAutoscaler, you can see several conditions in a healthy state. The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_159"} {"prompt": "Explain this text code:\n\nName: cm-test\nNamespace: prom\nLabels: \nAnnotations: \nCreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000\nReference: ReplicationController/cm-test\nMetrics: ( current / target )\n \"http_requests\" on pods: 66m / 500m\nMin replicas: 1\nMax replicas: 4\nReplicationController pods: 1 current / 1 desired\nConditions:\n Type Status Reason Message\n ---- ------ ------ -------\n AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale\n ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests\n ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range\nEvents:", "completion": "This text code HorizontalPodAutoscaler Walkthrough | For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. 
The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_160"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache", "completion": " minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_161"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_162"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml", "completion": "This shell code HorizontalPodAutoscaler Walkthrough |", "language": "shell", "type": "code-example_explanation", "context": 
"HorizontalPodAutoscaler Walkthrough |", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_163"} {"prompt": "Explain this text code:\n\nhorizontalpodautoscaler.autoscaling/php-apache created", "completion": "This text code HorizontalPodAutoscaler Walkthrough |", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough |", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_164"} {"prompt": "Complete this yaml code:\n\napplication/php-apache.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:", "completion": " - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: php-apache\nspec:\n ports:\n - port: 80\n selector:\n run: php-apache", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_165"} {"prompt": "Explain this yaml code:\n\napplication/php-apache.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:\n - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: php-apache\nspec:\n ports:\n - port: 80\n selector:\n 
run: php-apache", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_166"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:", "completion": " - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: php-apache\nspec:\n ports:\n - port: 80\n selector:\n run: php-apache", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_167"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: php-apache\nspec:\n selector:\n matchLabels:\n run: php-apache\n template:\n metadata:\n labels:\n run: php-apache\n spec:\n containers:\n - name: php-apache\n image: registry.k8s.io/hpa-example\n ports:\n - containerPort: 80\n resources:\n limits:\n cpu: 500m\n requests:\n cpu: 200m\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: php-apache\n labels:\n run: php-apache\nspec:\n ports:\n - port: 80\n selector:\n run: php-apache", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | To do so, run the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler 
Walkthrough | To do so, run the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_168"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/php-apache.yaml", "completion": "This bash code HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "language": "bash", "type": "kubectl-command_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now that the server is running, create the autoscaler using kubectl. The kubectl autoscale subcommand, part of kubectl, helps you do this.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_169"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment php-apache --cpu=50% --min=1 --max=10", "completion": "This bash code HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "HorizontalPodAutoscaler Walkthrough | You can check the current status of the newly-made HorizontalPodAutoscaler, by running:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_170"} {"prompt": "Explain this text code:\n\n# You can use \"hpa\" or \"horizontalpodautoscaler\"; either name works OK.\nkubectl get hpa", "completion": "This text code HorizontalPodAutoscaler Walkthrough | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_171"} 
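The records above create the php-apache Deployment and its autoscaler with a 50% CPU target. The replica count the controller converges on follows the scaling rule documented for the HorizontalPodAutoscaler; here is a minimal shell sketch of that rule using hypothetical numbers (3 current replicas observed at 100% average CPU utilization against a 50% target — these values are illustrative, not taken from the walkthrough output):

```shell
# HPA scaling rule (from the Kubernetes HPA documentation):
#   desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
# All numbers below are hypothetical examples.
current_replicas=3
current_metric=100   # observed average CPU utilization, percent
target_metric=50     # averageUtilization target from the HPA spec, percent

# Integer ceiling division: ceil(a / b) == (a + b - 1) / b
desired=$(( (current_replicas * current_metric + target_metric - 1) / target_metric ))
echo "desiredReplicas=$desired"
```

Note that the real controller also applies a tolerance: when the ratio of current to target metric is close enough to 1.0 (0.1 by default), it skips scaling entirely, so small fluctuations around the target do not cause replica churn.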
{"prompt": "Explain this text code:\n\n# Run this in a separate terminal\n# so that the load generation continues and you can carry on with the rest of the steps\nkubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c \"while sleep 0.01; do wget -q -O- http://php-apache; done\"", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Now run:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Now run:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_172"} {"prompt": "Explain this text code:\n\n# type Ctrl+C to end the watch when you're ready\nkubectl get hpa php-apache --watch", "completion": "This text code HorizontalPodAutoscaler Walkthrough | Within a minute or so, you should see the higher CPU load; for example:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Within a minute or so, you should see the higher CPU load; for example:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_173"} {"prompt": "Explain this text code:\n\n# type Ctrl+C to end the watch when you're ready\nkubectl get hpa php-apache --watch", "completion": "This text code HorizontalPodAutoscaler Walkthrough | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_174"} {"prompt": "Explain this bash code:\n\nkubectl get hpa php-apache -o yaml > /tmp/hpa-v2.yaml", "completion": "This bash code HorizontalPodAutoscaler Walkthrough | Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:", "language": 
"bash", "type": "kubectl-command_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_175"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu", "completion": " target:\n type: Utilization\n averageUtilization: 50\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. 
The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_176"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resou", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_177"} {"prompt": "Complete this text code:\n\ntype: Pods\npods:\n metric:", "completion": " name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. 
The metrics are not necessarily fetched from the obje", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_178"} {"prompt": "Explain this text code:\n\ntype: Pods\npods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k", "completion": "This text code HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the obje", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the obje", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_179"} {"prompt": "Complete this yaml code:\n\ntype: Object\nobject:\n metric:\n name: requests-per-second\n describedObject:", "completion": " apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 2k", "language": "yaml", "type": "kubernetes-manifest", "context": "HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. 
The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_180"} {"prompt": "Explain this yaml code:\n\ntype: Object\nobject:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 2k", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "HorizontalPodAutoscaler Walkthrough | If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_181"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\n - type: Pods\n pods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k\n - type: Object\n object:\n metric:\n name: requests-per-second", "completion": " describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 10k\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n 
name: cpu\n current:\n averageUtilization: 0\n averageValue: 0\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n current:\n value: 10k", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_182"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50\n - type: Pods\n pods:\n metric:\n name: packets-per-second\n target:\n type: AverageValue\n averageValue: 1k\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n target:\n type: Value\n value: 10k\nstatus:\n observedGeneration: 1\n lastScaleTime: \n currentReplicas: 1\n desiredReplicas: 1\n currentMetrics:\n - type: Resource\n resource:\n name: cpu\n current:\n averageUtilization: 0\n averageValue: 0\n - type: Object\n object:\n metric:\n name: requests-per-second\n describedObject:\n apiVersion: networking.k8s.io/v1\n kind: Ingress\n name: main-route\n current:\n value: 10k", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route 
Ingres", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingres", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_183"} {"prompt": "Complete this text code:\n\ntype: Object\nobject:", "completion": " metric:\n name: http_requests\n selector: {matchLabels: {verb: GET}}", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_184"} {"prompt": "Explain this text code:\n\ntype: Object\nobject:\n metric:\n name: http_requests\n selector: {matchLabels: {verb: GET}}", "completion": "This text code HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | This selector uses the same syntax as the full Kubernetes label selectors. 
The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multip", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_185"} {"prompt": "Complete this text code:\n\n- type: External\n external:\n metric:\n name: queue_messages_ready\n selector:", "completion": " matchLabels:\n queue: \"worker_tasks\"\n target:\n type: AverageValue\n averageValue: 30", "language": "text", "type": "code-example", "context": "HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics AP", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_186"} {"prompt": "Explain this text code:\n\n- type: External\n external:\n metric:\n name: queue_messages_ready\n selector:\n matchLabels:\n queue: \"worker_tasks\"\n target:\n type: AverageValue\n averageValue: 30", "completion": "This text code HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics AP", "language": "text", "type": "code-example_explanation", "context": "HorizontalPodAutoscaler Walkthrough | When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. 
The external metrics AP", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_187"} {"prompt": "Complete this yaml code:\n\napplication/hpa/php-apache.yaml\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment", "completion": " name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_188"} {"prompt": "Explain this yaml code:\n\napplication/hpa/php-apache.yaml\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_189"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache", "completion": " minReplicas: 1\n maxReplicas: 10\n metrics:\n - 
type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "language": "yaml", "type": "deployment-definition", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_completion_190"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n name: php-apache\nspec:\n scaleTargetRef:\n apiVersion: apps/v1\n kind: Deployment\n name: php-apache\n minReplicas: 1\n maxReplicas: 10\n metrics:\n - type: Resource\n resource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 50", "completion": "This yaml code HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "HorizontalPodAutoscaler Walkthrough | Then, create the autoscaler by executing the following command:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_191"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml", "completion": "This bash code HorizontalPodAutoscaler Walkthrough |", "language": "bash", "type": "kubectl-command_explanation", "context": "HorizontalPodAutoscaler Walkthrough |", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/", "id": "k8s_00007_code_explain_192"} {"prompt": "Explain this bash code:\n\nkubectl auth can-i VERB [TYPE | TYPE/NAME | NONRESOURCEURL]", "completion": "This bash code kubectl auth can-i | If true, check the specified action in all namespaces.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl auth can-i | If true, check the specified action in all 
namespaces.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/", "id": "k8s_00008_code_explain_193"} {"prompt": "Complete this text code:\n\n# Check to see if I can create pods in any namespace\n kubectl auth can-i create pods --all-namespaces\n \n # Check to see if I can list deployments in my current namespace\n kubectl auth can-i list deployments.apps\n \n # Check to see if service account \"foo\" of namespace \"dev\" can list pods in the namespace \"prod\"\n # You must be allowed to use impersonation for the global option \"--as\"\n kubectl auth can-i list pods --as=system:serviceaccount:dev:foo -n prod\n \n # Check to see if I can do everything in my current namespace (\"*\" means all)\n kubectl auth can-i '*' '*'\n ", "completion": " # Check to see if I can get the job named \"bar\" in namespace \"foo\"\n kubectl auth can-i list jobs.batch/bar -n foo\n \n # Check to see if I can read pod logs\n kubectl auth can-i get pods --subresource=log\n \n # Check to see if I can access the URL /logs/\n kubectl auth can-i get /logs/\n \n # Check to see if I can approve certificates.k8s.io\n kubectl auth can-i approve certificates.k8s.io\n \n # List all allowed actions in namespace \"foo\"\n kubectl auth can-i --list --namespace=foo", "language": "text", "type": "code-example", "context": "kubectl auth can-i | If true, check the specified action in all namespaces.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/", "id": "k8s_00008_code_completion_194"} {"prompt": "Explain this text code:\n\n# Check to see if I can create pods in any namespace\n kubectl auth can-i create pods --all-namespaces\n \n # Check to see if I can list deployments in my current namespace\n kubectl auth can-i list deployments.apps\n \n # Check to see if service account \"foo\" of namespace \"dev\" can list pods in the namespace \"prod\"\n # You must be allowed to use impersonation for the global 
option \"--as\"\n kubectl auth can-i list pods --as=system:serviceaccount:dev:foo -n prod\n \n # Check to see if I can do everything in my current namespace (\"*\" means all)\n kubectl auth can-i '*' '*'\n \n # Check to see if I can get the job named \"bar\" in namespace \"foo\"\n kubectl auth can-i list jobs.batch/bar -n foo\n \n # Check to see if I can read pod logs\n kubectl auth can-i get pods --subresource=log\n \n # Check to see if I can access the URL /logs/\n kubectl auth can-i get /logs/\n \n # Check to see if I can approve certificates.k8s.io\n kubectl auth can-i approve certificates.k8s.io\n \n # List all allowed actions in namespace \"foo\"\n kubectl auth can-i --list --namespace=foo", "completion": "This text code kubectl auth can-i | If true, check the specified action in all namespaces.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth can-i | If true, check the specified action in all namespaces.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/", "id": "k8s_00008_code_explain_195"} {"prompt": "Explain this bash code:\n\nkubectl auth can-i VERB [TYPE | TYPE/NAME | NONRESOURCEURL]", "completion": "This bash code kubectl auth can-i | If true, check the specified action in all namespaces.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl auth can-i | If true, check the specified action in all namespaces.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/", "id": "k8s_00008_code_explain_196"} {"prompt": "Complete this text code:\n\n# Check to see if I can create pods in any namespace\n kubectl auth can-i create pods --all-namespaces\n \n # Check to see if I can list deployments in my current namespace\n kubectl auth can-i list deployments.apps\n \n # Check to see if service account \"foo\" of namespace \"dev\" can list pods in the namespace \"prod\"\n # You must be allowed to use 
impersonation for the global option \"--as\"\n kubectl auth can-i list pods --as=system:serviceaccount:dev:foo -n prod\n \n # Check to see if I can do everything in my current namespace (\"*\" means all)\n kubectl auth can-i '*' '*'\n ", "completion": " # Check to see if I can get the job named \"bar\" in namespace \"foo\"\n kubectl auth can-i list jobs.batch/bar -n foo\n \n # Check to see if I can read pod logs\n kubectl auth can-i get pods --subresource=log\n \n # Check to see if I can access the URL /logs/\n kubectl auth can-i get /logs/\n \n # Check to see if I can approve certificates.k8s.io\n kubectl auth can-i approve certificates.k8s.io\n \n # List all allowed actions in namespace \"foo\"\n kubectl auth can-i --list --namespace=foo", "language": "text", "type": "code-example", "context": "kubectl auth can-i | If true, check the specified action in all namespaces.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/", "id": "k8s_00008_code_completion_197"} {"prompt": "Explain this text code:\n\n# Check to see if I can create pods in any namespace\n kubectl auth can-i create pods --all-namespaces\n \n # Check to see if I can list deployments in my current namespace\n kubectl auth can-i list deployments.apps\n \n # Check to see if service account \"foo\" of namespace \"dev\" can list pods in the namespace \"prod\"\n # You must be allowed to use impersonation for the global option \"--as\"\n kubectl auth can-i list pods --as=system:serviceaccount:dev:foo -n prod\n \n # Check to see if I can do everything in my current namespace (\"*\" means all)\n kubectl auth can-i '*' '*'\n \n # Check to see if I can get the job named \"bar\" in namespace \"foo\"\n kubectl auth can-i list jobs.batch/bar -n foo\n \n # Check to see if I can read pod logs\n kubectl auth can-i get pods --subresource=log\n \n # Check to see if I can access the URL /logs/\n kubectl auth can-i get /logs/\n \n # Check to see if I can approve 
certificates.k8s.io\n kubectl auth can-i approve certificates.k8s.io\n \n # List all allowed actions in namespace \"foo\"\n kubectl auth can-i --list --namespace=foo", "completion": "This text code kubectl auth can-i | If true, check the specified action in all namespaces.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth can-i | If true, check the specified action in all namespaces.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/", "id": "k8s_00008_code_explain_198"} {"prompt": "Explain this text code:\n\npreflight Run reset pre-flight checks\nremove-etcd-member Remove a local etcd member.\ncleanup-node Run cleanup node.", "completion": "This text code kubeadm reset | The path to the directory where the certificates are stored. If specified, clean this directory.", "language": "text", "type": "code-example_explanation", "context": "kubeadm reset | The path to the directory where the certificates are stored. If specified, clean this directory.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/", "id": "k8s_00011_code_explain_199"} {"prompt": "Explain this bash code:\n\ndocker run --privileged --rm registry.k8s.io/kube-proxy:v1.34.0 sh -c \"kube-proxy --cleanup && echo DONE\"", "completion": "This bash code kubeadm reset | The output of the above command should print DONE at the end. Instead of Docker, you can use your preferred container runtime to start the container.", "language": "bash", "type": "shell-command_explanation", "context": "kubeadm reset | The output of the above command should print DONE at the end. 
Instead of Docker, you can use your preferred container runtime to start the container.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/", "id": "k8s_00011_code_explain_200"} {"prompt": "Explain this text code:\n\nyq eval -i '.spec.containers[0].command = []' /etc/kubernetes/manifests/kube-apiserver.yaml\ntimeout 60 sh -c 'while pgrep kube-apiserver >/dev/null; do sleep 1; done' || true", "completion": "This text code kubeadm reset |", "language": "text", "type": "code-example_explanation", "context": "kubeadm reset |", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/", "id": "k8s_00011_code_explain_201"} {"prompt": "Explain this text code:\n\npreflight Run reset pre-flight checks\nremove-etcd-member Remove a local etcd member.\ncleanup-node Run cleanup node.", "completion": "This text code kubeadm reset | The path to the directory where the certificates are stored. If specified, clean this directory.", "language": "text", "type": "code-example_explanation", "context": "kubeadm reset | The path to the directory where the certificates are stored. If specified, clean this directory.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/", "id": "k8s_00011_code_explain_202"} {"prompt": "Explain this bash code:\n\ndocker run --privileged --rm registry.k8s.io/kube-proxy:v1.34.0 sh -c \"kube-proxy --cleanup && echo DONE\"", "completion": "This bash code kubeadm reset | The output of the above command should print DONE at the end. Instead of Docker, you can use your preferred container runtime to start the container.", "language": "bash", "type": "shell-command_explanation", "context": "kubeadm reset | The output of the above command should print DONE at the end. 
Instead of Docker, you can use your preferred container runtime to start the container.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/", "id": "k8s_00011_code_explain_203"} {"prompt": "Explain this bash code:\n\nyq eval -i '.spec.containers[0].command = []' /etc/kubernetes/manifests/kube-apiserver.yaml\ntimeout 60 sh -c 'while pgrep kube-apiserver >/dev/null; do sleep 1; done' || true", "completion": "This bash code kubeadm reset |", "language": "bash", "type": "shell-command_explanation", "context": "kubeadm reset |", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/", "id": "k8s_00011_code_explain_204"} {"prompt": "Explain this text code:\n\nhello-world:latest@sha256:91fb4b041da273d5a3273b6d587d62d518300a6ad268b28628f74997b93171b2", "completion": "This text code Kubelet Configuration (v1alpha1) | Appears in:", "language": "text", "type": "code-example_explanation", "context": "Kubelet Configuration (v1alpha1) | Appears in:", "source": "https://kubernetes.io/docs/reference/config-api/kubelet-config.v1alpha1/", "id": "k8s_00012_code_explain_207"} {"prompt": "Explain this text code:\n\necho -n 'admin' | base64\necho -n '1f2d1e2e67df' | base64", "completion": "This text code Managing Secrets using Configuration File | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Configuration File | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_208"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: mysecret", "completion": "type: Opaque\ndata:\n username: YWRtaW4=\n password: MWYyZDFlMmU2N2Rm", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Configuration File | Note that the name of a Secret object must be a valid DNS subdomain name.", "source": 
"https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_completion_209"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: mysecret\ntype: Opaque\ndata:\n username: YWRtaW4=\n password: MWYyZDFlMmU2N2Rm", "completion": "This yaml code Managing Secrets using Configuration File | Note that the name of a Secret object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Configuration File | Note that the name of a Secret object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_210"} {"prompt": "Explain this text code:\n\napiUrl: \"https://my.api.com/api/v1\"\nusername: \"\"\npassword: \"\"", "completion": "This text code Managing Secrets using Configuration File | You could store this in a Secret using the following definition:", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Configuration File | You could store this in a Secret using the following definition:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_211"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: mysecret\ntype: Opaque", "completion": "stringData:\n config.yaml: |\n apiUrl: \"https://my.api.com/api/v1\"\n username: \n password: ", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Configuration File | When you retrieve the Secret data, the command returns the encoded values, and not the plaintext values you provided in stringData.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_completion_212"} {"prompt": "Explain this yaml code:\n\napiVersion: 
v1\nkind: Secret\nmetadata:\n name: mysecret\ntype: Opaque\nstringData:\n config.yaml: |\n apiUrl: \"https://my.api.com/api/v1\"\n username: \n password: ", "completion": "This yaml code Managing Secrets using Configuration File | When you retrieve the Secret data, the command returns the encoded values, and not the plaintext values you provided in stringData.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Configuration File | When you retrieve the Secret data, the command returns the encoded values, and not the plaintext values you provided in stringData.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_213"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19\nkind: Secret\nmetadata:", "completion": " creationTimestamp: 2018-11-15T20:40:59Z\n name: mysecret\n namespace: default\n resourceVersion: \"7225\"\n uid: c280ad2e-e916-11e8-98f2-025000000001\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Configuration File | If you specify a field in both data and stringData, the value from stringData is used.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_completion_214"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19\nkind: Secret\nmetadata:\n creationTimestamp: 2018-11-15T20:40:59Z\n name: mysecret\n namespace: default\n resourceVersion: \"7225\"\n uid: c280ad2e-e916-11e8-98f2-025000000001\ntype: Opaque", "completion": "This yaml code Managing Secrets using Configuration File | If you specify a field in both data and stringData, the value from 
stringData is used.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Configuration File | If you specify a field in both data and stringData, the value from stringData is used.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_215"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: mysecret", "completion": "type: Opaque\ndata:\n username: YWRtaW4=\nstringData:\n username: administrator", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Configuration File | The Secret object is created as follows:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_completion_216"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: mysecret\ntype: Opaque\ndata:\n username: YWRtaW4=\nstringData:\n username: administrator", "completion": "This yaml code Managing Secrets using Configuration File | The Secret object is created as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Configuration File | The Secret object is created as follows:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_217"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n username: YWRtaW5pc3RyYXRvcg==\nkind: Secret\nmetadata:", "completion": " creationTimestamp: 2018-11-15T20:46:46Z\n name: mysecret\n namespace: default\n resourceVersion: \"7579\"\n uid: 91460ecb-e917-11e8-98f2-025000000001\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Configuration File | YWRtaW5pc3RyYXRvcg== decodes to administrator.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", 
"id": "k8s_00013_code_completion_218"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n username: YWRtaW5pc3RyYXRvcg==\nkind: Secret\nmetadata:\n creationTimestamp: 2018-11-15T20:46:46Z\n name: mysecret\n namespace: default\n resourceVersion: \"7579\"\n uid: 91460ecb-e917-11e8-98f2-025000000001\ntype: Opaque", "completion": "This yaml code Managing Secrets using Configuration File | YWRtaW5pc3RyYXRvcg== decodes to administrator.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Configuration File | YWRtaW5pc3RyYXRvcg== decodes to administrator.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_219"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: mysecret", "completion": "type: Opaque\ndata:\n username: YWRtaW4=\n password: YmlyZHNhcmVudHJlYWw=", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Configuration File | Apply the manifest to your cluster:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_completion_220"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: mysecret\ntype: Opaque\ndata:\n username: YWRtaW4=\n password: YmlyZHNhcmVudHJlYWw=", "completion": "This yaml code Managing Secrets using Configuration File | Apply the manifest to your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Configuration File | Apply the manifest to your cluster:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_221"} {"prompt": "Explain this shell code:\n\necho -n 'admin' | base64\necho -n '1f2d1e2e67df' | base64", "completion": "This shell code Managing Secrets using Configuration File | The output is similar 
to:", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using Configuration File | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_222"} {"prompt": "Explain this yaml code:\n\napiUrl: \"https://my.api.com/api/v1\"\nusername: \"\"\npassword: \"\"", "completion": "This yaml code Managing Secrets using Configuration File | You could store this in a Secret using the following definition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Configuration File | You could store this in a Secret using the following definition:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/", "id": "k8s_00013_code_explain_225"} {"prompt": "Complete this yaml code:\n\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: 
DaemonSet\nmetadata:\n labels:\n k8s-app: cloud-controller-manager\n name: cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager", "completion": " template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - --cluster-cidr=172.17.0.0/16\n tolerations:\n # this is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "language": "yaml", "type": "service-definition", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. 
Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_completion_250"} {"prompt": "Explain this yaml code:\n\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: cloud-controller-manager\n name: cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager\n template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - --cluster-cidr=172.17.0.0/16\n tolerations:\n # this 
is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "completion": "This yaml code Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "language": "yaml", "type": "service-definition_explanation", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. 
Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_explain_251"} {"prompt": "Complete this yaml code:\n\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: cloud-controller-manager\n name: cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager", "completion": " template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - --cluster-cidr=172.17.0.0/16\n 
tolerations:\n # this is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "language": "yaml", "type": "service-definition", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_completion_252"} {"prompt": "Explain this yaml code:\n\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: cloud-controller-manager\n name: cloud-controller-manager\n namespace: 
kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager\n template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - --cluster-cidr=172.17.0.0/16\n tolerations:\n # this is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "completion": "This yaml code Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "language": "yaml", "type": "service-definition_explanation", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. 
Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_explain_253"} {"prompt": "Complete this yaml code:\n\nadmin/cloud/ccm-example.yaml\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: cloud-controller-manager\n name: cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager", "completion": " template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - 
--cluster-cidr=172.17.0.0/16\n tolerations:\n # this is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "language": "yaml", "type": "service-definition", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_completion_254"} {"prompt": "Explain this yaml code:\n\nadmin/cloud/ccm-example.yaml\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: 
cloud-controller-manager\n name: cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager\n template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - --cluster-cidr=172.17.0.0/16\n tolerations:\n # this is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "completion": "This yaml code Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "language": "yaml", "type": "service-definition_explanation", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. 
Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_explain_255"} {"prompt": "Complete this yaml code:\n\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: cloud-controller-manager\n name: cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager", "completion": " template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - --cluster-cidr=172.17.0.0/16\n 
tolerations:\n # this is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "language": "yaml", "type": "service-definition", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_completion_256"} {"prompt": "Explain this yaml code:\n\n# This is an example of how to set up cloud-controller-manager as a Daemonset in your cluster.\n# It assumes that your masters can run pods and has the role node-role.kubernetes.io/master\n# Note that this Daemonset will not work straight out of the box for your cloud, this is\n# meant to be a guideline.\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:cloud-controller-manager\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: cloud-controller-manager\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: cloud-controller-manager\n name: cloud-controller-manager\n namespace: 
kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: cloud-controller-manager\n template:\n metadata:\n labels:\n k8s-app: cloud-controller-manager\n spec:\n serviceAccountName: cloud-controller-manager\n containers:\n - name: cloud-controller-manager\n # for in-tree providers we use registry.k8s.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: registry.k8s.io/cloud-controller-manager:v1.8.0\n command:\n - /usr/local/bin/cloud-controller-manager\n - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!\n - --leader-elect=true\n - --use-service-account-credentials\n # these flags will vary for every cloud provider\n - --allocate-node-cidrs=true\n - --configure-cloud-routes=true\n - --cluster-cidr=172.17.0.0/16\n tolerations:\n # this is required so CCM can bootstrap itself\n - key: node.cloudprovider.kubernetes.io/uninitialized\n value: \"true\"\n effect: NoSchedule\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n # this is to restrict CCM to only run on master nodes\n # the node selector may vary depending on your cluster setup\n nodeSelector:\n node-role.kubernetes.io/master: \"\"", "completion": "This yaml code Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "language": "yaml", "type": "service-definition_explanation", "context": "Cloud Controller Manager Administration | Running cloud controller manager comes with a few possible limitations. 
Although these limitations are being addressed in upcoming releases, it's important that you are aware of these limitations for", "source": "https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/", "id": "k8s_00014_code_explain_257"} {"prompt": "Complete this text code:\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/kubernetes 10.100.0.1 443/TCP 46m\nservice/nginx 10.100.0.16 80/TCP 33s", "completion": "\nNAME READY STATUS RESTARTS AGE\npod/nginx-701339712-e0qfq 1/1 Running 0 35s", "language": "text", "type": "code-example", "context": "Declare Network Policy | You should be able to access the new nginx service from other Pods. To access the nginx Service from another Pod in the default namespace, start a busybox container:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_completion_258"} {"prompt": "Explain this text code:\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/kubernetes 10.100.0.1 443/TCP 46m\nservice/nginx 10.100.0.16 80/TCP 33s\n\nNAME READY STATUS RESTARTS AGE\npod/nginx-701339712-e0qfq 1/1 Running 0 35s", "completion": "This text code Declare Network Policy | You should be able to access the new nginx service from other Pods. To access the nginx Service from another Pod in the default namespace, start a busybox container:", "language": "text", "type": "code-example_explanation", "context": "Declare Network Policy | You should be able to access the new nginx service from other Pods. 
To access the nginx Service from another Pod in the default namespace, start a busybox container:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_259"} {"prompt": "Explain this bash code:\n\nkubectl run busybox --rm -ti --image=busybox -- /bin/sh", "completion": "This bash code Declare Network Policy | In your shell, run the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declare Network Policy | In your shell, run the following command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_260"} {"prompt": "Explain this text code:\n\nConnecting to nginx (10.100.0.16:80)\nremote file exists", "completion": "This text code Declare Network Policy | To limit the access to the nginx service so that only Pods with the label access: true can query it, create a NetworkPolicy object as follows:", "language": "text", "type": "code-example_explanation", "context": "Declare Network Policy | To limit the access to the nginx service so that only Pods with the label access: true can query it, create a NetworkPolicy object as follows:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_261"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:", "completion": " matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_completion_262"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: 
NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:\n matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "completion": "This yaml code Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_263"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/nginx-policy.yaml", "completion": "This bash code Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_264"} {"prompt": "Explain this text code:\n\nnetworkpolicy.networking.k8s.io/access-nginx created", "completion": "This text code Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "language": "text", "type": "code-example_explanation", "context": "Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_265"} {"prompt": "Explain this bash code:\n\nkubectl run busybox --rm -ti --image=busybox -- /bin/sh", "completion": "This bash code Declare Network Policy | In your shell, run the command:", "language": "bash", 
"type": "kubectl-command_explanation", "context": "Declare Network Policy | In your shell, run the command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_266"} {"prompt": "Explain this text code:\n\nConnecting to nginx (10.100.0.16:80)\nwget: download timed out", "completion": "This text code Declare Network Policy | You can create a Pod with the correct labels to see that the request is allowed:", "language": "text", "type": "code-example_explanation", "context": "Declare Network Policy | You can create a Pod with the correct labels to see that the request is allowed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_267"} {"prompt": "Explain this bash code:\n\nkubectl run busybox --rm -ti --labels=\"access=true\" --image=busybox -- /bin/sh", "completion": "This bash code Declare Network Policy | In your shell, run the command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declare Network Policy | In your shell, run the command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_268"} {"prompt": "Explain this text code:\n\nConnecting to nginx (10.100.0.16:80)\nremote file exists", "completion": "This text code Declare Network Policy | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "text", "type": "code-example_explanation", "context": "Declare Network Policy | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_269"} {"prompt": "Complete this none code:\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/kubernetes 10.100.0.1 443/TCP 46m\nservice/nginx 10.100.0.16 80/TCP 33s", "completion": "\nNAME READY STATUS RESTARTS AGE\npod/nginx-701339712-e0qfq 1/1 Running 0 35s", "language": "none", "type": "code-example", "context": "Declare Network Policy | You should be able to access the new nginx service from other Pods. To access the nginx Service from another Pod in the default namespace, start a busybox container:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_completion_270"} {"prompt": "Explain this none code:\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/kubernetes 10.100.0.1 443/TCP 46m\nservice/nginx 10.100.0.16 80/TCP 33s\n\nNAME READY STATUS RESTARTS AGE\npod/nginx-701339712-e0qfq 1/1 Running 0 35s", "completion": "This none code Declare Network Policy | You should be able to access the new nginx service from other Pods. To access the nginx Service from another Pod in the default namespace, start a busybox container:", "language": "none", "type": "code-example_explanation", "context": "Declare Network Policy | You should be able to access the new nginx service from other Pods. 
To access the nginx Service from another Pod in the default namespace, start a busybox container:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_271"} {"prompt": "Explain this console code:\n\nkubectl run busybox --rm -ti --image=busybox -- /bin/sh", "completion": "This console code Declare Network Policy | In your shell, run the following command:", "language": "console", "type": "code-example_explanation", "context": "Declare Network Policy | In your shell, run the following command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_272"} {"prompt": "Explain this none code:\n\nConnecting to nginx (10.100.0.16:80)\nremote file exists", "completion": "This none code Declare Network Policy | To limit the access to the nginx service so that only Pods with the label access: true can query it, create a NetworkPolicy object as follows:", "language": "none", "type": "code-example_explanation", "context": "Declare Network Policy | To limit the access to the nginx service so that only Pods with the label access: true can query it, create a NetworkPolicy object as follows:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_273"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:", "completion": " matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_completion_274"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: 
NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:\n matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "completion": "This yaml code Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_275"} {"prompt": "Explain this console code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/nginx-policy.yaml", "completion": "This console code Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "language": "console", "type": "code-example_explanation", "context": "Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_276"} {"prompt": "Explain this none code:\n\nnetworkpolicy.networking.k8s.io/access-nginx created", "completion": "This none code Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "language": "none", "type": "code-example_explanation", "context": "Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_277"} {"prompt": "Explain this console code:\n\nkubectl run busybox --rm -ti --image=busybox -- /bin/sh", "completion": "This console code Declare Network Policy | In your shell, run the command:", 
"language": "console", "type": "code-example_explanation", "context": "Declare Network Policy | In your shell, run the command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_278"} {"prompt": "Explain this none code:\n\nConnecting to nginx (10.100.0.16:80)\nwget: download timed out", "completion": "This none code Declare Network Policy | You can create a Pod with the correct labels to see that the request is allowed:", "language": "none", "type": "code-example_explanation", "context": "Declare Network Policy | You can create a Pod with the correct labels to see that the request is allowed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_279"} {"prompt": "Explain this console code:\n\nkubectl run busybox --rm -ti --labels=\"access=true\" --image=busybox -- /bin/sh", "completion": "This console code Declare Network Policy | In your shell, run the command:", "language": "console", "type": "code-example_explanation", "context": "Declare Network Policy | In your shell, run the command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_280"} {"prompt": "Explain this none code:\n\nConnecting to nginx (10.100.0.16:80)\nremote file exists", "completion": "This none code Declare Network Policy | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "none", "type": "code-example_explanation", "context": "Declare Network Policy | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_281"} {"prompt": "Explain this bash code:\n\nkubectl run busybox --rm -ti --image=busybox -- /bin/sh", "completion": "This bash code Declare Network Policy | In your shell, run the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declare Network Policy | In your shell, run the following command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_282"} {"prompt": "Complete this yaml code:\n\nservice/networking/nginx-policy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:", "completion": " matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_completion_283"} {"prompt": "Explain this yaml code:\n\nservice/networking/nginx-policy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:\n matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "completion": "This yaml code Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_284"} {"prompt": 
"Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:", "completion": " matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_completion_285"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: access-nginx\nspec:\n podSelector:\n matchLabels:\n app: nginx\n ingress:\n - from:\n - podSelector:\n matchLabels:\n access: \"true\"", "completion": "This yaml code Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declare Network Policy | The name of a NetworkPolicy object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_286"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/nginx-policy.yaml", "completion": "This bash code Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declare Network Policy | When you attempt to access the nginx Service from a Pod without the correct labels, the request times out:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_287"} {"prompt": "Explain this bash code:\n\nkubectl run busybox --rm -ti --image=busybox -- /bin/sh", "completion": "This bash code Declare Network Policy | In your shell, 
run the command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declare Network Policy | In your shell, run the command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_288"} {"prompt": "Explain this bash code:\n\nkubectl run busybox --rm -ti --labels=\"access=true\" --image=busybox -- /bin/sh", "completion": "This bash code Declare Network Policy | In your shell, run the command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declare Network Policy | In your shell, run the command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/", "id": "k8s_00015_code_explain_289"} {"prompt": "Explain this go code:\n\nsource <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.\necho \"source <(kubectl completion bash)\" >> ~/.bashrc # add autocomplete permanently to your bash shell.", "completion": "This go code kubectl Quick Reference | You can also use a shorthand alias for kubectl that also works with completion:", "language": "go", "type": "code-example_explanation", "context": "kubectl Quick Reference | You can also use a shorthand alias for kubectl that also works with completion:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_290"} {"prompt": "Explain this text code:\n\nalias k=kubectl\ncomplete -o default -F __start_kubectl k", "completion": "This text code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": 
"https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_291"} {"prompt": "Explain this text code:\n\nsource <(kubectl completion zsh) # set up autocomplete in zsh into the current shell\necho '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell", "completion": "This text code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_292"} {"prompt": "Explain this text code:\n\necho 'kubectl completion fish | source' > ~/.config/fish/completions/kubectl.fish && source ~/.config/fish/completions/kubectl.fish", "completion": "This text code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_293"} {"prompt": "Complete this bash code:\n\nkubectl config view # Show Merged kubeconfig settings.\n\n# use multiple kubeconfig files at the same time and view merged config\nKUBECONFIG=~/.kube/config:~/.kube/kubconfig2\n\nkubectl config view\n\n# Show merged kubeconfig settings and raw certificate data and exposed secrets\nkubectl config view --raw \n\n# get the password for the e2e user\nkubectl config view -o jsonpath='{.users[?(@.name == 
\"e2e\")].user.password}'\n\n# get the certificate for the e2e user\nkubectl config view --raw -o jsonpath='{.users[?(@.name == \"e2e\")].user.client-certificate-data}' | base64 -d\n\nkubectl config view -o jsonpath='{.users[].name}' # display the first user\nkubectl config view -o jsonpath='{.users[*].name}' # get a list of users\nkubectl config get-contexts # display list of contexts\nkubectl config get-contexts -o name # get all context names\nkubectl config current-context # display the current-context", "completion": "kubectl config use-context my-cluster-name # set the default context to my-cluster-name\n\nkubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig\n\n# configure the URL to a proxy server to use for requests made by this client in the kubeconfig\nkubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url\n\n# add a new user to your kubeconf that supports basic auth\nkubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword\n\n# permanently save the namespace for all subsequent kubectl commands in that context.\nkubectl config set-context --current --namespace=ggckad-s2\n\n# set a context utilizing a specific username and namespace.\nkubectl config set-context gce --user=cluster-admin --namespace=foo \\\n && kubectl config use-context gce\n\nkubectl config unset users.foo # delete user foo\n\n# short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)\nalias kx='f() { [ \"$1\" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'\nalias kn='f() { [ \"$1\" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d\" \" -f6 ; } ; f'", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. 
It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_294"} {"prompt": "Explain this bash code:\n\nkubectl config view # Show Merged kubeconfig settings.\n\n# use multiple kubeconfig files at the same time and view merged config\nKUBECONFIG=~/.kube/config:~/.kube/kubconfig2\n\nkubectl config view\n\n# Show merged kubeconfig settings and raw certificate data and exposed secrets\nkubectl config view --raw \n\n# get the password for the e2e user\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'\n\n# get the certificate for the e2e user\nkubectl config view --raw -o jsonpath='{.users[?(@.name == \"e2e\")].user.client-certificate-data}' | base64 -d\n\nkubectl config view -o jsonpath='{.users[].name}' # display the first user\nkubectl config view -o jsonpath='{.users[*].name}' # get a list of users\nkubectl config get-contexts # display list of contexts\nkubectl config get-contexts -o name # get all context names\nkubectl config current-context # display the current-context\nkubectl config use-context my-cluster-name # set the default context to my-cluster-name\n\nkubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig\n\n# configure the URL to a proxy server to use for requests made by this client in the kubeconfig\nkubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url\n\n# add a new user to your kubeconf that supports basic auth\nkubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword\n\n# permanently save the namespace for all subsequent kubectl commands in that context.\nkubectl config set-context --current --namespace=ggckad-s2\n\n# set a context utilizing a specific username and namespace.\nkubectl config set-context gce --user=cluster-admin --namespace=foo \\\n && 
kubectl config use-context gce\n\nkubectl config unset users.foo # delete user foo\n\n# short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)\nalias kx='f() { [ \"$1\" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'\nalias kn='f() { [ \"$1\" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d\" \" -f6 ; } ; f'", "completion": "This bash code kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. 
This is the recommended way of managing Kubernetes", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_295"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f ./my-manifest.yaml # create resource(s)\nkubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files\nkubectl apply -f ./dir # create resource(s) in all manifest files in dir\nkubectl apply -f https://example.com/manifest.yaml # create resource(s) from url (Note: this is an example domain and does not contain a valid manifest)\nkubectl create deployment nginx --image=nginx # start a single instance of nginx\n\n# create a Job which prints \"Hello World\"\nkubectl create job hello --image=busybox:1.28 -- echo \"Hello World\"\n\n# create a CronJob that prints \"Hello World\" every minute\nkubectl create cronjob hello --image=busybox:1.28 --schedule=\"*/1 * * * *\" -- echo \"Hello World\"\n\nkubectl explain pods # get the documentation for pod manifests\n\n# Create multiple YAML objects from stdin\nkubectl apply -f - < pod.yaml\n # Generate spec for running pod nginx and write it into a file called pod.yaml\nkubectl attach my-pod -i # Attach to Running Container\nkubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod\nkubectl exec my-pod -- ls / # Run command in existing pod (1 container case)\nkubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)\nkubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)\nkubectl debug my-pod -it --image=busybox:1.28 # Create an interactive debugging session within existing pod and immediately attach to it\nkubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it\nkubectl top pod # Show metrics for all pods in the default namespace\nkubectl top pod POD_NAME --containers # Show 
metrics for a given pod and its containers\nkubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_309"} {"prompt": "Explain this bash code:\n\nkubectl logs my-pod # dump pod logs (stdout)\nkubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)\nkubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container\nkubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)\nkubectl logs -l name=myLabel -c my-container # dump pod container logs, with label name=myLabel (stdout)\nkubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container\nkubectl logs -f my-pod # stream pod logs (stdout)\nkubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)\nkubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout)\nkubectl run -i --tty busybox --image=busybox:1.28 -- sh # Run pod as interactive shell\nkubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace\nkubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml\n # Generate spec for running pod nginx and write it into a file called pod.yaml\nkubectl attach my-pod -i # Attach to Running Container\nkubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod\nkubectl exec my-pod -- ls / # Run command in existing pod (1 container case)\nkubectl exec --stdin --tty my-pod -- /bin/sh # 
Interactive shell access to a running pod (1 container case)\nkubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)\nkubectl debug my-pod -it --image=busybox:1.28 # Create an interactive debugging session within existing pod and immediately attach to it\nkubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it\nkubectl top pod # Show metrics for all pods in the default namespace\nkubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers\nkubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_310"} {"prompt": "Complete this bash code:\n\nkubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace\nkubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "completion": "kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": 
"https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_311"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace\nkubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container\nkubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_312"} {"prompt": "Explain this text code:\n\ntar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This text code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_313"} {"prompt": "Complete this 
bash code:\n\nkubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)\nkubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)\n\nkubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend", "completion": "kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name \n\nkubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by \nkubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_314"} {"prompt": "Explain this bash code:\n\nkubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)\nkubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)\n\nkubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend\nkubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name \n\nkubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by \nkubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": 
"bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_315"} {"prompt": "Complete this bash code:\n\nkubectl cordon my-node # Mark my-node as unschedulable\nkubectl drain my-node # Drain my-node in preparation for maintenance\nkubectl uncordon my-node # Mark my-node as schedulable\nkubectl top node # Show metrics for all nodes\nkubectl top node my-node # Show metrics for a given node\nkubectl cluster-info # Display addresses of the master and services\nkubectl cluster-info dump # Dump current cluster state to stdout", "completion": "kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state\n\n# View existing taints on which exist on current nodes.\nkubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'\n\n# If a taint with that key and effect already exists, its value is replaced as specified.\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_316"} {"prompt": "Explain this bash code:\n\nkubectl cordon my-node # Mark my-node as unschedulable\nkubectl drain my-node # Drain my-node in preparation for maintenance\nkubectl uncordon my-node # Mark my-node as schedulable\nkubectl top node # Show metrics for all nodes\nkubectl top node my-node # Show metrics for a given node\nkubectl cluster-info # Display addresses of the master and 
services\nkubectl cluster-info dump # Dump current cluster state to stdout\nkubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state\n\n# View existing taints on which exist on current nodes.\nkubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'\n\n# If a taint with that key and effect already exists, its value is replaced as specified.\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_317"} {"prompt": "Complete this bash code:\n\nkubectl api-resources --namespaced=true # All namespaced resources\nkubectl api-resources --namespaced=false # All non-namespaced resources\nkubectl api-resources -o name # All resources with simple output (only the resource name)", "completion": "kubectl api-resources -o wide # All resources with expanded (aka \"wide\") output\nkubectl api-resources --verbs=list,get # All resources that support the \"list\" and \"get\" request verbs\nkubectl api-resources --api-group=extensions # All resources in the \"extensions\" API group", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_318"} {"prompt": "Explain this bash 
code:\n\nkubectl api-resources --namespaced=true # All namespaced resources\nkubectl api-resources --namespaced=false # All non-namespaced resources\nkubectl api-resources -o name # All resources with simple output (only the resource name)\nkubectl api-resources -o wide # All resources with expanded (aka \"wide\") output\nkubectl api-resources --verbs=list,get # All resources that support the \"list\" and \"get\" request verbs\nkubectl api-resources --api-group=extensions # All resources in the \"extensions\" API group", "completion": "This bash code kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_319"} {"prompt": "Complete this text code:\n\n# All images running in a cluster\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'\n\n# All images running in namespace: default, grouped by Pod\nkubectl get pods --namespace default --output=custom-columns=\"NAME:.metadata.name,IMAGE:.spec.containers[*].image\"", "completion": "\n # All images excluding \"registry.k8s.io/coredns:1.6.2\"\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!=\"registry.k8s.io/coredns:1.6.2\")].image'\n\n# All fields under metadata regardless of name\nkubectl get pods -A -o=custom-columns='DATA:metadata.*'", "language": "text", "type": "code-example", "context": "kubectl Quick Reference | More examples in the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_320"} {"prompt": "Explain this text code:\n\n# All images running in a 
cluster\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'\n\n# All images running in namespace: default, grouped by Pod\nkubectl get pods --namespace default --output=custom-columns=\"NAME:.metadata.name,IMAGE:.spec.containers[*].image\"\n\n # All images excluding \"registry.k8s.io/coredns:1.6.2\"\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!=\"registry.k8s.io/coredns:1.6.2\")].image'\n\n# All fields under metadata regardless of name\nkubectl get pods -A -o=custom-columns='DATA:metadata.*'", "completion": "This text code kubectl Quick Reference | More examples in the kubectl reference documentation.", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | More examples in the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_321"} {"prompt": "Explain this bash code:\n\nsource <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.\necho \"source <(kubectl completion bash)\" >> ~/.bashrc # add autocomplete permanently to your bash shell.", "completion": "This bash code kubectl Quick Reference | You can also use a shorthand alias for kubectl that also works with completion:", "language": "bash", "type": "shell-command_explanation", "context": "kubectl Quick Reference | You can also use a shorthand alias for kubectl that also works with completion:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_322"} {"prompt": "Explain this bash code:\n\nalias k=kubectl\ncomplete -o default -F __start_kubectl k", "completion": "This bash code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "bash", "type": "shell-command_explanation", "context": "kubectl Quick Reference | 
Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_323"} {"prompt": "Explain this bash code:\n\nsource <(kubectl completion zsh) # set up autocomplete in zsh into the current shell\necho '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell", "completion": "This bash code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "bash", "type": "shell-command_explanation", "context": "kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_324"} {"prompt": "Explain this bash code:\n\necho 'kubectl completion fish | source' > ~/.config/fish/completions/kubectl.fish && source ~/.config/fish/completions/kubectl.fish", "completion": "This bash code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "bash", "type": "shell-command_explanation", "context": "kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_325"} {"prompt": "Complete this bash code:\n\nkubectl config view # Show Merged kubeconfig settings.\n\n# use multiple kubeconfig files at the same time and view merged config\nKUBECONFIG=~/.kube/config:~/.kube/kubconfig2\n\nkubectl config view\n\n# Show merged kubeconfig settings and raw certificate data and exposed secrets\nkubectl config 
view --raw \n\n# get the password for the e2e user\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'\n\n# get the certificate for the e2e user\nkubectl config view --raw -o jsonpath='{.users[?(.name == \"e2e\")].user.client-certificate-data}' | base64 -d\n\nkubectl config view -o jsonpath='{.users[].name}' # display the first user\nkubectl config view -o jsonpath='{.users[*].name}' # get a list of users\nkubectl config get-contexts # display list of contexts\nkubectl config get-contexts -o name # get all context names\nkubectl config current-context # display the current-context", "completion": "kubectl config use-context my-cluster-name # set the default context to my-cluster-name\n\nkubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig\n\n# configure the URL to a proxy server to use for requests made by this client in the kubeconfig\nkubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url\n\n# add a new user to your kubeconf that supports basic auth\nkubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword\n\n# permanently save the namespace for all subsequent kubectl commands in that context.\nkubectl config set-context --current --namespace=ggckad-s2\n\n# set a context utilizing a specific username and namespace.\nkubectl config set-context gce --user=cluster-admin --namespace=foo \\\n && kubectl config use-context gce\n\nkubectl config unset users.foo # delete user foo\n\n# short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)\nalias kx='f() { [ \"$1\" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'\nalias kn='f() { [ \"$1\" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d\" \" -f6 ; } ; f'", "language": "bash", "type": "kubectl-command", "context": 
"kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_326"} {"prompt": "Explain this bash code:\n\nkubectl config view # Show Merged kubeconfig settings.\n\n# use multiple kubeconfig files at the same time and view merged config\nKUBECONFIG=~/.kube/config:~/.kube/kubconfig2\n\nkubectl config view\n\n# Show merged kubeconfig settings and raw certificate data and exposed secrets\nkubectl config view --raw \n\n# get the password for the e2e user\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'\n\n# get the certificate for the e2e user\nkubectl config view --raw -o jsonpath='{.users[?(.name == \"e2e\")].user.client-certificate-data}' | base64 -d\n\nkubectl config view -o jsonpath='{.users[].name}' # display the first user\nkubectl config view -o jsonpath='{.users[*].name}' # get a list of users\nkubectl config get-contexts # display list of contexts\nkubectl config get-contexts -o name # get all context names\nkubectl config current-context # display the current-context\nkubectl config use-context my-cluster-name # set the default context to my-cluster-name\n\nkubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig\n\n# configure the URL to a proxy server to use for requests made by this client in the kubeconfig\nkubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url\n\n# add a new user to your kubeconf that supports basic auth\nkubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword\n\n# permanently save the namespace for all subsequent kubectl commands in that context.\nkubectl config set-context --current --namespace=ggckad-s2\n\n# set a context utilizing a specific 
username and namespace.\nkubectl config set-context gce --user=cluster-admin --namespace=foo \\\n && kubectl config use-context gce\n\nkubectl config unset users.foo # delete user foo\n\n# short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)\nalias kx='f() { [ \"$1\" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'\nalias kn='f() { [ \"$1\" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d\" \" -f6 ; } ; f'", "completion": "This bash code kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. 
This is the recommended way of managing Kubernetes", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_327"} {"prompt": "Complete this bash code:\n\nkubectl apply -f ./my-manifest.yaml # create resource(s)\nkubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files\nkubectl apply -f ./dir # create resource(s) in all manifest files in dir\nkubectl apply -f https://example.com/manifest.yaml # create resource(s) from url (Note: this is an example domain and does not contain a valid manifest)\nkubectl create deployment nginx --image=nginx # start a single instance of nginx\n\n# create a Job which prints \"Hello World\"\nkubectl create job hello --image=busybox:1.28 -- echo \"Hello World\"\n\n# create a CronJob that prints \"Hello World\" every minute\nkubectl create cronjob hello --image=busybox:1.28 --schedule=\"*/1 * * * *\" -- echo \"Hello World\"\n\nkubectl explain pods # get the documentation for pod manifests\n\n# Create multiple YAML objects from stdin\nkubectl apply -f - < pod.yaml\n # Generate spec for running pod nginx and write it into a file called pod.yaml\nkubectl attach my-pod -i # Attach to Running Container\nkubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod\nkubectl exec my-pod -- ls / # Run command in existing pod (1 container case)\nkubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)\nkubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)\nkubectl debug my-pod -it --image=busybox:1.28 # Create an interactive debugging session within existing pod and immediately attach to it\nkubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it\nkubectl top pod # Show metrics for all pods in the default namespace\nkubectl top pod POD_NAME --containers # Show 
metrics for a given pod and its containers\nkubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_341"} {"prompt": "Explain this bash code:\n\nkubectl logs my-pod # dump pod logs (stdout)\nkubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)\nkubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container\nkubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)\nkubectl logs -l name=myLabel -c my-container # dump pod container logs, with label name=myLabel (stdout)\nkubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container\nkubectl logs -f my-pod # stream pod logs (stdout)\nkubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)\nkubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout)\nkubectl run -i --tty busybox --image=busybox:1.28 -- sh # Run pod as interactive shell\nkubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace\nkubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml\n # Generate spec for running pod nginx and write it into a file called pod.yaml\nkubectl attach my-pod -i # Attach to Running Container\nkubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod\nkubectl exec my-pod -- ls / # Run command in existing pod (1 container case)\nkubectl exec --stdin --tty my-pod -- /bin/sh # 
Interactive shell access to a running pod (1 container case)\nkubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)\nkubectl debug my-pod -it --image=busybox:1.28 # Create an interactive debugging session within existing pod and immediately attach to it\nkubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it\nkubectl top pod # Show metrics for all pods in the default namespace\nkubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers\nkubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_342"} {"prompt": "Complete this bash code:\n\nkubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace\nkubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "completion": "kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": 
"https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_343"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace\nkubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container\nkubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_344"} {"prompt": "Explain this bash code:\n\ntar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "shell-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_345"} {"prompt": "Complete 
this bash code:\n\nkubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)\nkubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)\n\nkubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend", "completion": "kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name <my-service-port>\n\nkubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>\nkubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_346"} {"prompt": "Explain this bash code:\n\nkubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)\nkubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)\n\nkubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend\nkubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name <my-service-port>\n\nkubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>\nkubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language":
"bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_347"} {"prompt": "Complete this bash code:\n\nkubectl cordon my-node # Mark my-node as unschedulable\nkubectl drain my-node # Drain my-node in preparation for maintenance\nkubectl uncordon my-node # Mark my-node as schedulable\nkubectl top node # Show metrics for all nodes\nkubectl top node my-node # Show metrics for a given node\nkubectl cluster-info # Display addresses of the master and services\nkubectl cluster-info dump # Dump current cluster state to stdout", "completion": "kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state\n\n# View existing taints which exist on current nodes.\nkubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'\n\n# If a taint with that key and effect already exists, its value is replaced as specified.\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_348"} {"prompt": "Explain this bash code:\n\nkubectl cordon my-node # Mark my-node as unschedulable\nkubectl drain my-node # Drain my-node in preparation for maintenance\nkubectl uncordon my-node # Mark my-node as schedulable\nkubectl top node # Show metrics for all nodes\nkubectl top node my-node # Show metrics for a given node\nkubectl cluster-info # Display addresses of the master and
services\nkubectl cluster-info dump # Dump current cluster state to stdout\nkubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state\n\n# View existing taints which exist on current nodes.\nkubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'\n\n# If a taint with that key and effect already exists, its value is replaced as specified.\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_349"} {"prompt": "Complete this bash code:\n\nkubectl api-resources --namespaced=true # All namespaced resources\nkubectl api-resources --namespaced=false # All non-namespaced resources\nkubectl api-resources -o name # All resources with simple output (only the resource name)", "completion": "kubectl api-resources -o wide # All resources with expanded (aka \"wide\") output\nkubectl api-resources --verbs=list,get # All resources that support the \"list\" and \"get\" request verbs\nkubectl api-resources --api-group=extensions # All resources in the \"extensions\" API group", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_350"} {"prompt": "Explain this bash 
code:\n\nkubectl api-resources --namespaced=true # All namespaced resources\nkubectl api-resources --namespaced=false # All non-namespaced resources\nkubectl api-resources -o name # All resources with simple output (only the resource name)\nkubectl api-resources -o wide # All resources with expanded (aka \"wide\") output\nkubectl api-resources --verbs=list,get # All resources that support the \"list\" and \"get\" request verbs\nkubectl api-resources --api-group=extensions # All resources in the \"extensions\" API group", "completion": "This bash code kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_351"} {"prompt": "Complete this bash code:\n\n# All images running in a cluster\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'\n\n# All images running in namespace: default, grouped by Pod\nkubectl get pods --namespace default --output=custom-columns=\"NAME:.metadata.name,IMAGE:.spec.containers[*].image\"", "completion": "\n # All images excluding \"registry.k8s.io/coredns:1.6.2\"\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!=\"registry.k8s.io/coredns:1.6.2\")].image'\n\n# All fields under metadata regardless of name\nkubectl get pods -A -o=custom-columns='DATA:metadata.*'", "language": "bash", "type": "shell-command", "context": "kubectl Quick Reference | More examples in the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_352"} {"prompt": "Explain this bash code:\n\n# All images running in a 
cluster\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'\n\n# All images running in namespace: default, grouped by Pod\nkubectl get pods --namespace default --output=custom-columns=\"NAME:.metadata.name,IMAGE:.spec.containers[*].image\"\n\n # All images excluding \"registry.k8s.io/coredns:1.6.2\"\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!=\"registry.k8s.io/coredns:1.6.2\")].image'\n\n# All fields under metadata regardless of name\nkubectl get pods -A -o=custom-columns='DATA:metadata.*'", "completion": "This bash code kubectl Quick Reference | More examples in the kubectl reference documentation.", "language": "bash", "type": "shell-command_explanation", "context": "kubectl Quick Reference | More examples in the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_353"} {"prompt": "Explain this go code:\n\nsource <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.\necho \"source <(kubectl completion bash)\" >> ~/.bashrc # add autocomplete permanently to your bash shell.", "completion": "This go code kubectl Quick Reference | You can also use a shorthand alias for kubectl that also works with completion:", "language": "go", "type": "code-example_explanation", "context": "kubectl Quick Reference | You can also use a shorthand alias for kubectl that also works with completion:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_354"} {"prompt": "Explain this text code:\n\nalias k=kubectl\ncomplete -o default -F __start_kubectl k", "completion": "This text code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | 
Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_355"} {"prompt": "Explain this text code:\n\nsource <(kubectl completion zsh) # set up autocomplete in zsh into the current shell\necho '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell", "completion": "This text code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_356"} {"prompt": "Explain this text code:\n\necho 'kubectl completion fish | source' > ~/.config/fish/completions/kubectl.fish && source ~/.config/fish/completions/kubectl.fish", "completion": "This text code kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | Appending --all-namespaces happens frequently enough that you should be aware of the shorthand for --all-namespaces:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_357"} {"prompt": "Complete this bash code:\n\nkubectl config view # Show Merged kubeconfig settings.\n\n# use multiple kubeconfig files at the same time and view merged config\nKUBECONFIG=~/.kube/config:~/.kube/kubconfig2\n\nkubectl config view\n\n# Show merged kubeconfig settings and raw certificate data and exposed secrets\nkubectl config view 
--raw \n\n# get the password for the e2e user\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'\n\n# get the certificate for the e2e user\nkubectl config view --raw -o jsonpath='{.users[?(.name == \"e2e\")].user.client-certificate-data}' | base64 -d\n\nkubectl config view -o jsonpath='{.users[].name}' # display the first user\nkubectl config view -o jsonpath='{.users[*].name}' # get a list of users\nkubectl config get-contexts # display list of contexts\nkubectl config get-contexts -o name # get all context names\nkubectl config current-context # display the current-context", "completion": "kubectl config use-context my-cluster-name # set the default context to my-cluster-name\n\nkubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig\n\n# configure the URL to a proxy server to use for requests made by this client in the kubeconfig\nkubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url\n\n# add a new user to your kubeconf that supports basic auth\nkubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword\n\n# permanently save the namespace for all subsequent kubectl commands in that context.\nkubectl config set-context --current --namespace=ggckad-s2\n\n# set a context utilizing a specific username and namespace.\nkubectl config set-context gce --user=cluster-admin --namespace=foo \\\n && kubectl config use-context gce\n\nkubectl config unset users.foo # delete user foo\n\n# short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)\nalias kx='f() { [ \"$1\" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'\nalias kn='f() { [ \"$1\" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d\" \" -f6 ; } ; f'", "language": "bash", "type": "kubectl-command", "context": "kubectl 
Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_358"} {"prompt": "Explain this bash code:\n\nkubectl config view # Show Merged kubeconfig settings.\n\n# use multiple kubeconfig files at the same time and view merged config\nKUBECONFIG=~/.kube/config:~/.kube/kubconfig2\n\nkubectl config view\n\n# Show merged kubeconfig settings and raw certificate data and exposed secrets\nkubectl config view --raw \n\n# get the password for the e2e user\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'\n\n# get the certificate for the e2e user\nkubectl config view --raw -o jsonpath='{.users[?(.name == \"e2e\")].user.client-certificate-data}' | base64 -d\n\nkubectl config view -o jsonpath='{.users[].name}' # display the first user\nkubectl config view -o jsonpath='{.users[*].name}' # get a list of users\nkubectl config get-contexts # display list of contexts\nkubectl config get-contexts -o name # get all context names\nkubectl config current-context # display the current-context\nkubectl config use-context my-cluster-name # set the default context to my-cluster-name\n\nkubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig\n\n# configure the URL to a proxy server to use for requests made by this client in the kubeconfig\nkubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url\n\n# add a new user to your kubeconf that supports basic auth\nkubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword\n\n# permanently save the namespace for all subsequent kubectl commands in that context.\nkubectl config set-context --current --namespace=ggckad-s2\n\n# set a context utilizing a specific username and 
namespace.\nkubectl config set-context gce --user=cluster-admin --namespace=foo \\\n && kubectl config use-context gce\n\nkubectl config unset users.foo # delete user foo\n\n# short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)\nalias kx='f() { [ \"$1\" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'\nalias kn='f() { [ \"$1\" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d\" \" -f6 ; } ; f'", "completion": "This bash code kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. 
This is the recommended way of managing Kubernetes", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_359"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f ./my-manifest.yaml # create resource(s)\nkubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files\nkubectl apply -f ./dir # create resource(s) in all manifest files in dir\nkubectl apply -f https://example.com/manifest.yaml # create resource(s) from url (Note: this is an example domain and does not contain a valid manifest)\nkubectl create deployment nginx --image=nginx # start a single instance of nginx\n\n# create a Job which prints \"Hello World\"\nkubectl create job hello --image=busybox:1.28 -- echo \"Hello World\"\n\n# create a CronJob that prints \"Hello World\" every minute\nkubectl create cronjob hello --image=busybox:1.28 --schedule=\"*/1 * * * *\" -- echo \"Hello World\"\n\nkubectl explain pods # get the documentation for pod manifests\n\n# Create multiple YAML objects from stdin\nkubectl apply -f - < pod.yaml\n # Generate spec for running pod nginx and write it into a file called pod.yaml\nkubectl attach my-pod -i # Attach to Running Container\nkubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod\nkubectl exec my-pod -- ls / # Run command in existing pod (1 container case)\nkubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)\nkubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)\nkubectl debug my-pod -it --image=busybox:1.28 # Create an interactive debugging session within existing pod and immediately attach to it\nkubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it\nkubectl top pod # Show metrics for all pods in the default namespace\nkubectl top pod POD_NAME --containers # Show 
metrics for a given pod and its containers\nkubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_373"} {"prompt": "Explain this bash code:\n\nkubectl logs my-pod # dump pod logs (stdout)\nkubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)\nkubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container\nkubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)\nkubectl logs -l name=myLabel -c my-container # dump pod container logs, with label name=myLabel (stdout)\nkubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container\nkubectl logs -f my-pod # stream pod logs (stdout)\nkubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)\nkubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout)\nkubectl run -i --tty busybox --image=busybox:1.28 -- sh # Run pod as interactive shell\nkubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace\nkubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml\n # Generate spec for running pod nginx and write it into a file called pod.yaml\nkubectl attach my-pod -i # Attach to Running Container\nkubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod\nkubectl exec my-pod -- ls / # Run command in existing pod (1 container case)\nkubectl exec --stdin --tty my-pod -- /bin/sh # 
Interactive shell access to a running pod (1 container case)\nkubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)\nkubectl debug my-pod -it --image=busybox:1.28 # Create an interactive debugging session within existing pod and immediately attach to it\nkubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it\nkubectl top pod # Show metrics for all pods in the default namespace\nkubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers\nkubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_374"} {"prompt": "Complete this bash code:\n\nkubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace\nkubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "completion": "kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": 
"https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_375"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace\nkubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container\nkubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_376"} {"prompt": "Explain this text code:\n\ntar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace\nkubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This text code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_377"} {"prompt": "Complete this 
bash code:\n\nkubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)\nkubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)\n\nkubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend", "completion": "kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name \n\nkubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by \nkubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_378"} {"prompt": "Explain this bash code:\n\nkubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)\nkubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)\n\nkubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend\nkubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name \n\nkubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by \nkubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": 
"bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_379"} {"prompt": "Complete this bash code:\n\nkubectl cordon my-node # Mark my-node as unschedulable\nkubectl drain my-node # Drain my-node in preparation for maintenance\nkubectl uncordon my-node # Mark my-node as schedulable\nkubectl top node # Show metrics for all nodes\nkubectl top node my-node # Show metrics for a given node\nkubectl cluster-info # Display addresses of the master and services\nkubectl cluster-info dump # Dump current cluster state to stdout", "completion": "kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state\n\n# View existing taints on which exist on current nodes.\nkubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'\n\n# If a taint with that key and effect already exists, its value is replaced as specified.\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_380"} {"prompt": "Explain this bash code:\n\nkubectl cordon my-node # Mark my-node as unschedulable\nkubectl drain my-node # Drain my-node in preparation for maintenance\nkubectl uncordon my-node # Mark my-node as schedulable\nkubectl top node # Show metrics for all nodes\nkubectl top node my-node # Show metrics for a given node\nkubectl cluster-info # Display addresses of the master and 
services\nkubectl cluster-info dump # Dump current cluster state to stdout\nkubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state\n\n# View existing taints on which exist on current nodes.\nkubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'\n\n# If a taint with that key and effect already exists, its value is replaced as specified.\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | List all supported resource types along with their shortnames, API group, whether they are namespaced, and kind:", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_381"} {"prompt": "Complete this bash code:\n\nkubectl api-resources --namespaced=true # All namespaced resources\nkubectl api-resources --namespaced=false # All non-namespaced resources\nkubectl api-resources -o name # All resources with simple output (only the resource name)", "completion": "kubectl api-resources -o wide # All resources with expanded (aka \"wide\") output\nkubectl api-resources --verbs=list,get # All resources that support the \"list\" and \"get\" request verbs\nkubectl api-resources --api-group=extensions # All resources in the \"extensions\" API group", "language": "bash", "type": "kubectl-command", "context": "kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_382"} {"prompt": "Explain this bash 
code:\n\nkubectl api-resources --namespaced=true # All namespaced resources\nkubectl api-resources --namespaced=false # All non-namespaced resources\nkubectl api-resources -o name # All resources with simple output (only the resource name)\nkubectl api-resources -o wide # All resources with expanded (aka \"wide\") output\nkubectl api-resources --verbs=list,get # All resources that support the \"list\" and \"get\" request verbs\nkubectl api-resources --api-group=extensions # All resources in the \"extensions\" API group", "completion": "This bash code kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl Quick Reference | To output details to your terminal window in a specific format, add the -o (or --output) flag to a supported kubectl command.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_383"} {"prompt": "Complete this text code:\n\n# All images running in a cluster\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'\n\n# All images running in namespace: default, grouped by Pod\nkubectl get pods --namespace default --output=custom-columns=\"NAME:.metadata.name,IMAGE:.spec.containers[*].image\"", "completion": "\n # All images excluding \"registry.k8s.io/coredns:1.6.2\"\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!=\"registry.k8s.io/coredns:1.6.2\")].image'\n\n# All fields under metadata regardless of name\nkubectl get pods -A -o=custom-columns='DATA:metadata.*'", "language": "text", "type": "code-example", "context": "kubectl Quick Reference | More examples in the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_completion_384"} {"prompt": "Explain this text code:\n\n# All images running in a 
cluster\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'\n\n# All images running in namespace: default, grouped by Pod\nkubectl get pods --namespace default --output=custom-columns=\"NAME:.metadata.name,IMAGE:.spec.containers[*].image\"\n\n # All images excluding \"registry.k8s.io/coredns:1.6.2\"\nkubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!=\"registry.k8s.io/coredns:1.6.2\")].image'\n\n# All fields under metadata regardless of name\nkubectl get pods -A -o=custom-columns='DATA:metadata.*'", "completion": "This text code kubectl Quick Reference | More examples in the kubectl reference documentation.", "language": "text", "type": "code-example_explanation", "context": "kubectl Quick Reference | More examples in the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/quick-reference/", "id": "k8s_00016_code_explain_385"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http", "completion": " protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "language": "yaml", "type": "kubernetes-manifest", "context": "EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. 
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/", "id": "k8s_00017_code_completion_386"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "completion": "This yaml code EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/", "id": "k8s_00017_code_explain_387"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/pods/metrics-server-v0.3.1-57c75779f-9p8wg: (1.512ms) 200 [pod_nanny/v0.0.0 (linux/amd64) kubernetes/$Format 10.56.1.19:51756]", "completion": "This text code System Logs | The message string may contain line breaks:", "language": "text", "type": "code-example_explanation", "context": "System Logs | The message string may contain line breaks:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_392"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:79] This is a message\nwhich has a line break.", "completion": "This text code System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.", "language": "text", "type": "code-example_explanation", "context": "System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. 
When parsing log files, you must also handle unstructured log messages.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_393"} {"prompt": "Explain this text code:\n\n \"\" =\"\" =\"\" ...", "completion": "This text code System Logs | Example:", "language": "text", "type": "code-example_explanation", "context": "System Logs | Example:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_394"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 controller_utils.go:116] \"Pod status updated\" pod=\"kube-system/kubedns\" status=\"ready\"", "completion": "This text code System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "language": "text", "type": "code-example_explanation", "context": "System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_395"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:116] \"Example\" data=\"This is text with a line break\\nand \\\"quotation marks\\\".\" someInt=1 someFloat=0.1 someStruct={StringField: First line,\nsecond line.}", "completion": "This text code System Logs | Contextual logging builds on top of structured logging. It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "language": "text", "type": "code-example_explanation", "context": "System Logs | Contextual logging builds on top of structured logging. 
It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_396"} {"prompt": "Complete this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . --feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_397"} {"prompt": "Explain this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "text", "type": "code-example_explanation", "context": "System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_398"} {"prompt": "Complete this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_399"} {"prompt": "Explain this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code System Logs | JSON output does not support many standard klog flags. 
For list of unsupported klog flags, see the Command line tool reference.", "language": "text", "type": "code-example_explanation", "context": "System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_400"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_401"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_402"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This text code System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "text", "type": "code-example_explanation", "context": "System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_403"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes//proxy/logs/?query=/\"", "completion": "This bash code System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "bash", "type": "kubectl-command_explanation", "context": "System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_404"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This text code System Logs |", "language": "text", "type": "code-example_explanation", "context": "System Logs |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_405"} {"prompt": "Explain this text code:\n\nkube-log-runner -log-file=/tmp/log -redirect-stderr=false", "completion": "This text code System Logs | An example of the traditional klog native format:", "language": "text", "type": "code-example_explanation", "context": "System Logs | An example of the traditional klog native format:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_406"} {"prompt": "Complete this console code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "console", "type": "code-example", "context": "System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_412"} {"prompt": "Explain this console code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This console code System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "console", "type": "code-example_explanation", "context": "System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_413"} {"prompt": "Complete this console code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "console", "type": "code-example", "context": "System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_414"} {"prompt": "Explain this console code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This console code System Logs | JSON output does not support many standard klog flags. 
For a list of unsupported klog flags, see the Command line tool reference.", "language": "console", "type": "code-example_explanation", "context": "System Logs | JSON output does not support many standard klog flags. For a list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_415"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_416"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_417"} {"prompt": "Explain this shell code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This shell code System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "shell", "type": "code-example_explanation", "context": "System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_418"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/logs/?query=/<log-file-name>\"", "completion": "This shell code System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "shell", "type": "code-example_explanation", "context": "System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_419"} {"prompt": "Explain this shell code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This shell code System Logs |", "language": "shell", "type": "code-example_explanation", "context": "System Logs |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_420"} {"prompt": "Complete this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_421"} {"prompt": "Explain this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "text", "type": "code-example_explanation", "context": "System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_422"} {"prompt": "Complete this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_423"} {"prompt": "Explain this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code System Logs | JSON output does not support many standard klog flags. 
For a list of unsupported klog flags, see the Command line tool reference.", "language": "text", "type": "code-example_explanation", "context": "System Logs | JSON output does not support many standard klog flags. For a list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_424"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_completion_425"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_426"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This text code System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "text", "type": "code-example_explanation", "context": "System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_427"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/logs/?query=/<log-file-name>\"", "completion": "This bash code System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "bash", "type": "kubectl-command_explanation", "context": "System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_428"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This text code System Logs |", "language": "text", "type": "code-example_explanation", "context": "System Logs |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-logs/", "id": "k8s_00019_code_explain_429"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Managing 
Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_430"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_431"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_432"} {"prompt": "Explain this text code:\n\nservice/my-nginx-svc created\ndeployment.apps/my-nginx created", "completion": "This text code Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_433"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_434"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code Managing 
Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_435"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_436"} {"prompt": "Explain this text code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This text code Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_437"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This bash code Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | For larger numbers of 
resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_438"} {"prompt": "Explain this text code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This text code Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_439"} {"prompt": "Explain this bash code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This bash code Managing Workloads | The output might be similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_440"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s", "completion": "This text code Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). 
Then you grep o", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). Then you grep o", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_441"} {"prompt": "Complete this text code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml", "completion": "├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "language": "text", "type": "code-example", "context": "Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_442"} {"prompt": "Explain this text code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml\n├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "completion": "This text code Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. 
If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_443"} {"prompt": "Explain this text code:\n\nerror: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)", "completion": "This text code Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_444"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This bash code Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_445"} {"prompt": "Explain this text code:\n\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This text code Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | The --recursive argument works with any operation that 
accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_446"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This bash code Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_447"} {"prompt": "Complete this text code:\n\nnamespace/development created\nnamespace/staging created", "completion": "configmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "language": "text", "type": "code-example", "context": "Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_448"} {"prompt": "Explain this text code:\n\nnamespace/development created\nnamespace/staging created\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This text code Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_449"} {"prompt": "Explain 
this bash code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This bash code Managing Workloads | Ensure that there is 1 replica:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_450"} {"prompt": "Explain this bash code:\n\nkubectl patch deployments/my-nginx --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\": 1}}'", "completion": "This bash code Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_451"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/my-nginx --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This bash code Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_452"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This bash code Managing Workloads | That's it! 
The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are bein", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_453"} {"prompt": "Complete this bash code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "bash", "type": "kubectl-command", "context": "Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_454"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This bash code Managing Workloads | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_455"} {"prompt": "Complete this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "bash", "type": "kubectl-command", "context": "Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_456"} {"prompt": "Explain this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This bash code Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_457"} {"prompt": "Complete this text code:\n\nname: frontend\nreplicas: 3\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "language": "text", "type": "code-example", "context": "Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_458"} {"prompt": "Explain this text code:\n\nname: frontend\nreplicas: 3\n...\nlabels:\n app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "completion": "This text code Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. 
canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_459"} {"prompt": "Complete this text code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "language": "text", "type": "code-example", "context": "Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_460"} {"prompt": "Explain this text code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:\n app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "completion": "This text code Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. 
omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_461"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This bash code Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_462"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_463"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This yaml code Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_464"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmy-nginx-2035384211-j5fhi 1/1 Running 0 30m", "completion": "This text code Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "language": "text", "type": "code-example_explanation", 
"context": "Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_465"} {"prompt": "Explain this text code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This text code Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_466"} {"prompt": "Explain this text code:\n\nhorizontalpodautoscaler.autoscaling/my-nginx autoscaled", "completion": "This text code Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_467"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_468"} {"prompt": "Complete this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n", 
"completion": "kubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "bash", "type": "kubectl-command", "context": "Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_469"} {"prompt": "Explain this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This bash code Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_470"} {"prompt": "Explain this bash code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This bash code Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or projects", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_471"} {"prompt": "Explain this text code:\n\ndeployment.apps/my-nginx deleted\ndeployment.apps/my-nginx replaced", "completion": "This text code Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_472"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_473"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n 
name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_474"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This shell code Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_475"} {"prompt": "Explain this none code:\n\nservice/my-nginx-svc created\ndeployment.apps/my-nginx created", "completion": "This none code Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_476"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_477"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_478"} {"prompt": "Explain this shell code:\n\nkubectl delete -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This shell code Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | 
In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_479"} {"prompt": "Explain this none code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This none code Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_480"} {"prompt": "Explain this shell code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This shell code Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_481"} {"prompt": "Explain this none code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This none code Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "source": 
"https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_482"} {"prompt": "Explain this shell code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This shell code Managing Workloads | The output might be similar to:", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_483"} {"prompt": "Explain this none code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx-svc LoadBalancer 10.0.0.208 80/TCP 0s", "completion": "This none code Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). Then you grep only", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). Then you grep only", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_484"} {"prompt": "Complete this none code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml", "completion": "├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "language": "none", "type": "code-example", "context": "Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. 
If you had tried to create the resources in this directory", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_485"} {"prompt": "Explain this none code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml\n├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "completion": "This none code Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this directory", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this directory", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_486"} {"prompt": "Explain this none code:\n\nerror: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)", "completion": "This none code Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_487"} {"prompt": "Explain this shell code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This shell code Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "shell", "type": "code-example_explanation", "context": "Managing 
Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_488"} {"prompt": "Explain this none code:\n\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This none code Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_489"} {"prompt": "Explain this shell code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This shell code Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_490"} {"prompt": "Complete this none code:\n\nnamespace/development created\nnamespace/staging created", "completion": "configmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "language": "none", "type": "code-example", "context": "Managing Workloads | If you're interested in learning more about 
kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_491"} {"prompt": "Explain this none code:\n\nnamespace/development created\nnamespace/staging created\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This none code Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_492"} {"prompt": "Explain this shell code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This shell code Managing Workloads | Ensure that there is 1 replica:", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_493"} {"prompt": "Explain this shell code:\n\nkubectl scale --replicas 1 deployments/my-nginx --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\": 1}}'", "completion": "This shell code Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_494"} {"prompt": "Explain this shell code:\n\nkubectl patch --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": 
\"100%\" }}}}'", "completion": "This shell code Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_495"} {"prompt": "Explain this shell code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This shell code Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. 
It ensures that only a certain number of old replicas may be down while they are being", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_496"} {"prompt": "Complete this shell code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "shell", "type": "code-example", "context": "Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_497"} {"prompt": "Explain this shell code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This shell code Managing Workloads | or", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_498"} {"prompt": "Complete this shell code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "shell", "type": "code-example", "context": "Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_499"} {"prompt": "Explain this shell code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This shell code Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_500"} {"prompt": "Complete this none code:\n\nname: frontend\nreplicas: 3\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "language": "none", "type": "code-example", "context": "Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_501"} {"prompt": "Explain this none code:\n\nname: frontend\nreplicas: 3\n...\nlabels:\n app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "completion": "This none code Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_502"} {"prompt": "Complete this none code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "language": "none", "type": "code-example", "context": "Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. 
omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_503"} {"prompt": "Explain this none code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:\n app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "completion": "This none code Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_504"} {"prompt": "Explain this shell code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This shell code Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_505"} {"prompt": "Complete this shell code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "shell", "type": "code-example", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_506"} {"prompt": "Explain this shell 
code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This shell code Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_507"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nmy-nginx-2035384211-j5fhi 1/1 Running 0 30m", "completion": "This none code Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_508"} {"prompt": "Explain this shell code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This shell code Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_509"} {"prompt": "Explain this none code:\n\nhorizontalpodautoscaler.autoscaling/my-nginx autoscaled", "completion": "This none code Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | Now your nginx replicas will be scaled up and down as 
needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_510"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_511"} {"prompt": "Complete this shell code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n", "completion": "kubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "shell", "type": "code-example", "context": "Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_512"} {"prompt": "Explain this shell code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This shell code Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_513"} {"prompt": "Explain this shell code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This shell code Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "shell", "type": "code-example_explanation", "context": "Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_514"} {"prompt": "Explain this none code:\n\ndeployment.apps/my-nginx deleted\ndeployment.apps/my-nginx replaced", "completion": "This none code Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "none", "type": "code-example_explanation", "context": "Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_515"} {"prompt": "Complete this yaml code:\n\napplication/nginx-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_516"} {"prompt": "Explain this yaml code:\n\napplication/nginx-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_517"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: 
Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_518"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_519"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_520"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_521"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_522"} {"prompt": "Explain this bash code:\n\nkubectl delete -f 
https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_523"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This bash code Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_524"} {"prompt": "Explain this bash code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This bash code Managing Workloads | The output might be similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_525"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This bash code Managing Workloads | The --recursive argument works with any operation that accepts 
the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_526"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This bash code Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_527"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This bash code Managing Workloads | Ensure that there is 1 replica:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_528"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas 1 deployments/my-nginx --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\": 1}}'", "completion": "This bash code Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a 
surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_529"} {"prompt": "Explain this bash code:\n\nkubectl patch --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This bash code Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_530"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This bash code Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. 
It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_531"} {"prompt": "Complete this bash code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "bash", "type": "kubectl-command", "context": "Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_532"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This bash code Managing Workloads | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_533"} {"prompt": "Complete this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "bash", "type": "kubectl-command", "context": "Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_534"} {"prompt": "Explain this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This bash code Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_535"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This bash code Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_536"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_537"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This yaml code Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_538"} {"prompt": "Explain this text code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This text code Managing 
Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_539"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_540"} {"prompt": "Complete this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n", "completion": "kubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "bash", "type": "kubectl-command", "context": "Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_completion_541"} {"prompt": "Explain this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This bash code Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_542"} {"prompt": "Explain this bash code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This bash code Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Workloads | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/concepts/workloads/management/", "id": "k8s_00021_code_explain_543"} {"prompt": "Complete this text code:\n\n# Set the server field on the my-cluster cluster to https://1.2.3.4\n kubectl config set clusters.my-cluster.server https://1.2.3.4\n \n # Set the certificate-authority-data field on the my-cluster cluster\n kubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)", "completion": " \n # Set the cluster field in the my-context context to my-cluster\n kubectl config set contexts.my-context.cluster my-cluster\n \n # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option\n kubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "language": "text", "type": "code-example", "context": "kubectl config set | help for set", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set/", "id": "k8s_00022_code_completion_544"} {"prompt": "Explain this text code:\n\n# Set the server field on the my-cluster cluster to https://1.2.3.4\n kubectl config set clusters.my-cluster.server https://1.2.3.4\n \n # Set the certificate-authority-data field on the my-cluster cluster\n kubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)\n \n # Set the cluster field in the my-context context to my-cluster\n kubectl config set contexts.my-context.cluster my-cluster\n \n # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option\n kubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "completion": "This text code kubectl config set | help for set", "language": "text", "type": "code-example_explanation", "context": "kubectl config set | help for set", "source": 
"https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set/", "id": "k8s_00022_code_explain_545"} {"prompt": "Complete this text code:\n\npreflight Run preflight checks before upgrade\ncontrol-plane Upgrade the control plane\nupload-config Upload the kubeadm and kubelet configurations to ConfigMaps\n /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap\n /kubelet Upload the kubelet configuration to a ConfigMap", "completion": "kubelet-config Upgrade the kubelet configuration for this node\nbootstrap-token Configures bootstrap token and cluster-info RBAC rules\naddon Upgrade the default kubeadm addons\n /coredns Upgrade the CoreDNS addon\n /kube-proxy Upgrade the kube-proxy addon\npost-upgrade Run post upgrade tasks", "language": "text", "type": "code-example", "context": "kubeadm upgrade | Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading to an alpha/beta/release candidate versions of Kubernetes.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/", "id": "k8s_00024_code_completion_548"} {"prompt": "Explain this text code:\n\npreflight Run preflight checks before upgrade\ncontrol-plane Upgrade the control plane\nupload-config Upload the kubeadm and kubelet configurations to ConfigMaps\n /kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap\n /kubelet Upload the kubelet configuration to a ConfigMap\nkubelet-config Upgrade the kubelet configuration for this node\nbootstrap-token Configures bootstrap token and cluster-info RBAC rules\naddon Upgrade the default kubeadm addons\n /coredns Upgrade the CoreDNS addon\n /kube-proxy Upgrade the kube-proxy addon\npost-upgrade Run post upgrade tasks", "completion": "This text code kubeadm upgrade | Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading
to an alpha/beta/release candidate versions of Kubernetes.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/", "id": "k8s_00024_code_explain_549"} {"prompt": "Complete this text code:\n\npreflight Run upgrade node pre-flight checks\ncontrol-plane Upgrade the control plane instance deployed on this node, if any\nkubelet-config Upgrade the kubelet configuration for this node", "completion": "addon Upgrade the default kubeadm addons\n /coredns Upgrade the CoreDNS addon\n /kube-proxy Upgrade the kube-proxy addon\npost-upgrade Run post upgrade tasks", "language": "text", "type": "code-example", "context": "kubeadm upgrade | Perform the renewal of certificates used by component changed during upgrades.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/", "id": "k8s_00024_code_completion_550"} {"prompt": "Explain this text code:\n\npreflight Run upgrade node pre-flight checks\ncontrol-plane Upgrade the control plane instance deployed on this node, if any\nkubelet-config Upgrade the kubelet configuration for this node\naddon Upgrade the default kubeadm addons\n /coredns Upgrade the CoreDNS addon\n /kube-proxy Upgrade the kube-proxy addon\npost-upgrade Run post upgrade tasks", "completion": "This text code kubeadm upgrade | Perform the renewal of certificates used by component changed during upgrades.", "language": "text", "type": "code-example_explanation", "context": "kubeadm upgrade | Perform the renewal of certificates used by component changed during upgrades.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/", "id": "k8s_00024_code_explain_551"} {"prompt": "Explain this bash code:\n\nkubectl patch pod -p '{\"metadata\":{\"finalizers\":null}}'", "completion": "This bash code Force Delete StatefulSet Pods | Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.", "language": "bash", "type": "kubectl-command_explanation", "context": "Force Delete StatefulSet Pods | Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.", "source": "https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/", "id": "k8s_00027_code_explain_556"} {"prompt": "Explain this shell
code:\n\nkubectl patch pod -p '{\"metadata\":{\"finalizers\":null}}'", "completion": "This shell code Force Delete StatefulSet Pods | Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.", "language": "shell", "type": "code-example_explanation", "context": "Force Delete StatefulSet Pods | Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.", "source": "https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/", "id": "k8s_00027_code_explain_557"} {"prompt": "Explain this text code:\n\nkillall -s SIGTERM kube-apiserver # trigger a graceful kube-apiserver shutdown\nsleep 20 # wait a little bit to permit completing in-flight requests\nkubeadm upgrade ... # execute a kubeadm upgrade command", "completion": "This text code Upgrading kubeadm clusters | If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in Changing the Kubernetes", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release.
This is explained in Changing the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_559"} {"prompt": "Complete this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.", "completion": "sudo apt update\nsudo apt-cache madison kubeadm", "language": "text", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_560"} {"prompt": "Explain this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo apt update\nsudo apt-cache madison kubeadm", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. 
sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_561"} {"prompt": "Explain this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo yum list --showduplicates kubeadm --disableexcludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_562"} {"prompt": "Explain this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo yum list --showduplicates kubeadm --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | If you don't see the version you expect to upgrade to, verify if the Kubernetes package repositories are used.", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | If you don't see the version you expect to upgrade to, verify if the Kubernetes package repositories are used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_563"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "language": "text", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 
1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_564"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\\nsudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_565"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_566"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | Verify that the download works and has the expected version:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Verify that the download works and has 
the expected version:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_567"} {"prompt": "Explain this text code:\n\n# replace x with the patch version you picked for this upgrade\nsudo kubeadm upgrade apply v1.34.x", "completion": "This text code Upgrading kubeadm clusters | Once the command finishes you should see:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Once the command finishes you should see:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_568"} {"prompt": "Explain this json code:\n\n[upgrade/successful] SUCCESS! Your cluster was upgraded to \"v1.34.x\". Enjoy!\n\n[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.", "completion": "This json code Upgrading kubeadm clusters | Manually upgrade your CNI provider plugin.", "language": "json", "type": "api-response_explanation", "context": "Upgrading kubeadm clusters | Manually upgrade your CNI provider plugin.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_569"} {"prompt": "Explain this text code:\n\n# replace with the name of your node you are draining\nkubectl drain --ignore-daemonsets", "completion": "This text code Upgrading kubeadm clusters | Upgrade the kubelet and kubectl:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Upgrade the kubelet and kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_570"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' 
kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "language": "text", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_571"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\\nsudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_572"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_573"} {"prompt": "Explain this text code:\n\n# replace 
x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | Restart the kubelet:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Restart the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_574"} {"prompt": "Explain this text code:\n\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet", "completion": "This text code Upgrading kubeadm clusters | Bring the node back online by marking it schedulable:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_575"} {"prompt": "Explain this text code:\n\n# replace with the name of your node\nkubectl uncordon ", "completion": "This text code Upgrading kubeadm clusters | The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads.", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_576"} {"prompt": "Explain this shell code:\n\nkillall -s SIGTERM kube-apiserver # trigger a graceful kube-apiserver shutdown\nsleep 20 # wait a little bit to permit completing in-flight requests\nkubeadm upgrade ... 
# execute a kubeadm upgrade command", "completion": "This shell code Upgrading kubeadm clusters | If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in Changing the Kubernetes", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in Changing the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_577"} {"prompt": "Complete this shell code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.", "completion": "sudo apt update\nsudo apt-cache madison kubeadm", "language": "shell", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_578"} {"prompt": "Explain this shell code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo apt update\nsudo apt-cache madison kubeadm", "completion": "This shell code Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. 
# It should look like 1.34.x-*, where x is the latest patch. sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_579"} {"prompt": "Explain this shell code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo yum list --showduplicates kubeadm --disableexcludes=kubernetes", "completion": "This shell code Upgrading kubeadm clusters | For systems with DNF5:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_580"} {"prompt": "Explain this shell code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo yum list --showduplicates kubeadm --setopt=disable_excludes=kubernetes", "completion": "This shell code Upgrading kubeadm clusters | If you don't see the version you expect to upgrade to, verify if the Kubernetes package repositories are used.", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | If you don't see the version you expect to upgrade to, verify if the Kubernetes package repositories are used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_581"} {"prompt": "Complete this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "language": "shell", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' 
--disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_582"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\\nsudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "completion": "This shell code Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_583"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This shell code Upgrading kubeadm clusters | For systems with DNF5:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_584"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This shell code Upgrading kubeadm clusters | Verify that the download works and has the expected version:", "language": "shell", "type": "code-example_explanation", 
"context": "Upgrading kubeadm clusters | Verify that the download works and has the expected version:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_585"} {"prompt": "Explain this shell code:\n\n# replace x with the patch version you picked for this upgrade\nsudo kubeadm upgrade apply v1.34.x", "completion": "This shell code Upgrading kubeadm clusters | Once the command finishes you should see:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Once the command finishes you should see:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_586"} {"prompt": "Explain this json code:\n\n[upgrade/successful] SUCCESS! Your cluster was upgraded to \"v1.34.x\". Enjoy!\n\n[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.", "completion": "This json code Upgrading kubeadm clusters | Manually upgrade your CNI provider plugin.", "language": "json", "type": "api-response_explanation", "context": "Upgrading kubeadm clusters | Manually upgrade your CNI provider plugin.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_587"} {"prompt": "Explain this shell code:\n\n# replace with the name of your node you are draining\nkubectl drain --ignore-daemonsets", "completion": "This shell code Upgrading kubeadm clusters | Upgrade the kubelet and kubectl:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Upgrade the kubelet and kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_588"} {"prompt": "Complete this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\", 
"completion": "sudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "language": "shell", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_589"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\\nsudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "completion": "This shell code Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_590"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This shell code Upgrading kubeadm clusters | For systems with DNF5:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_591"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This shell code Upgrading kubeadm clusters | Restart the kubelet:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Restart the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_592"} {"prompt": "Explain this shell code:\n\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet", "completion": "This shell code Upgrading kubeadm clusters | Bring the node back online by marking it schedulable:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_593"} {"prompt": "Explain this shell code:\n\n# replace with the name of your node\nkubectl uncordon ", "completion": "This shell code Upgrading kubeadm clusters | The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads.", "language": "shell", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_594"} {"prompt": "Explain this text code:\n\nkillall -s SIGTERM kube-apiserver # trigger a graceful kube-apiserver 
shutdown\nsleep 20 # wait a little bit to permit completing in-flight requests\nkubeadm upgrade ... # execute a kubeadm upgrade command", "completion": "This text code Upgrading kubeadm clusters | If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in Changing the Kubernetes", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in Changing the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_595"} {"prompt": "Complete this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.", "completion": "sudo apt update\nsudo apt-cache madison kubeadm", "language": "text", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_596"} {"prompt": "Explain this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo apt update\nsudo apt-cache madison kubeadm", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. 
sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# Find the latest 1.34 version in the list. # It should look like 1.34.x-*, where x is the latest patch. sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes For sy", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_597"} {"prompt": "Explain this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo yum list --showduplicates kubeadm --disableexcludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_598"} {"prompt": "Explain this text code:\n\n# Find the latest 1.34 version in the list.\n# It should look like 1.34.x-*, where x is the latest patch.\nsudo yum list --showduplicates kubeadm --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | If you don't see the version you expect to upgrade to, verify if the Kubernetes package repositories are used.", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | If you don't see the version you expect to upgrade to, verify if the Kubernetes package repositories are used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_599"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && 
\\\nsudo apt-mark hold kubeadm", "language": "text", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_600"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\\nsudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_601"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_602"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' 
--setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | Verify that the download works and has the expected version:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Verify that the download works and has the expected version:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_603"} {"prompt": "Explain this text code:\n\n# replace x with the patch version you picked for this upgrade\nsudo kubeadm upgrade apply v1.34.x", "completion": "This text code Upgrading kubeadm clusters | Once the command finishes you should see:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Once the command finishes you should see:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_604"} {"prompt": "Explain this text code:\n\n# replace with the name of your node you are draining\nkubectl drain --ignore-daemonsets", "completion": "This text code Upgrading kubeadm clusters | Upgrade the kubelet and kubectl:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Upgrade the kubelet and kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_605"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "language": "text", "type": "code-example", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# 
replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_completion_606"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\\nsudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_607"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_608"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading kubeadm clusters | Restart the kubelet:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Restart the 
kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_609"} {"prompt": "Explain this text code:\n\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet", "completion": "This text code Upgrading kubeadm clusters | Bring the node back online by marking it schedulable:", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | Bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_610"} {"prompt": "Explain this text code:\n\n# replace <node-to-uncordon> with the name of your node\nkubectl uncordon <node-to-uncordon>", "completion": "This text code Upgrading kubeadm clusters | The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads.", "language": "text", "type": "code-example_explanation", "context": "Upgrading kubeadm clusters | The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/", "id": "k8s_00028_code_explain_611"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | You can 
mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_612"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_613"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_614"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_615"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_616"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_617"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_618"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_619"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: 
efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_620"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_621"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_622"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_623"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_624"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_625"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_626"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is op", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_627"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_628"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_629"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This bash code Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "bash", "type": "kubectl-command_explanation", "context": "Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_630"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_631"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_632"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_633"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is 
actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_634"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_635"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_636"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is optional", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_652"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_653"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_654"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This shell code Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "shell", "type": "code-example_explanation", "context": "Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_655"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_656"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_657"} {"prompt": "Explain this text code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml", "completion": "This text code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "text", "type": "code-example_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_658"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_659"} {"prompt": "Explain this yaml 
code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_660"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_661"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in 
Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_662"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass-low-latency.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"", "completion": "provisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_663"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass-low-latency.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_664"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_665"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_666"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/pod-volume-binding.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage", "completion": " persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_667"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/pod-volume-binding.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_668"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_669"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_670"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-topology.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:", "completion": " type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_671"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-topology.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is used", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is used", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_672"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is used", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_673"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is used", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is used", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_674"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-aws-ebs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs", "completion": " type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_675"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-aws-ebs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_676"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_677"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_678"} {"prompt": "Complete this yaml 
code:\n\nstorage/storageclass/storageclass-aws-efs.yaml\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_679"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-aws-efs.yaml\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_680"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_681"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n 
fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_682"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-nfs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_683"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-nfs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_684"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_685"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_686"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_687"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_688"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_689"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_690"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-ceph-rbd.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_691"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-ceph-rbd.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_692"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_693"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_694"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This bash code Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "bash", "type": "kubectl-command_explanation", "context": "Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_695"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-azure-file.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_696"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-azure-file.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_697"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_698"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_699"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_700"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_701"} {"prompt": "Complete this 
yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_702"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_703"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-local.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass", "completion": "metadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_704"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-local.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_705"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": 
"https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_completion_706"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/storage-classes/", "id": "k8s_00029_code_explain_707"} {"prompt": "Explain this text code:\n\npod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)", "completion": "This text code kubectl set env | If true, select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set env | If true, select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_explain_708"} {"prompt": "Explain this bash code:\n\nkubectl set env RESOURCE/NAME KEY_1=VAL_1 ... 
KEY_N=VAL_N", "completion": "This bash code kubectl set env | If true, select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set env | If true, select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_explain_709"} {"prompt": "Complete this text code:\n\n# Update deployment 'registry' with a new environment variable\n kubectl set env deployment/registry STORAGE_DIR=/local\n \n # List the environment variables defined on a deployments 'sample-build'\n kubectl set env deployment/sample-build --list\n \n # List the environment variables defined on all pods\n kubectl set env pods --all --list\n \n # Output modified deployment in YAML, and does not alter the object on the server\n kubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml\n \n # Update all containers in all replication controllers in the project to have ENV=prod\n kubectl set env rc --all ENV=prod\n \n # Import environment from a secret", "completion": " kubectl set env --from=secret/mysecret deployment/myapp\n \n # Import environment from a config map with a prefix\n kubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp\n \n # Import specific keys from a config map\n kubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp\n \n # Remove the environment variable ENV from container 'c1' in all deployment configs\n kubectl set env deployments --all --containers=\"c1\" ENV-\n \n # Remove the environment variable ENV from a deployment definition on disk and\n # update the deployment config on the server\n kubectl set env -f deploy.json ENV-\n \n # Set some of the local shell environment into a deployment config on the server\n env | grep RAILS_ | kubectl set env -e - deployment/registry", "language": "text", "type": 
"code-example", "context": "kubectl set env | If true, select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_completion_710"} {"prompt": "Explain this text code:\n\n# Update deployment 'registry' with a new environment variable\n kubectl set env deployment/registry STORAGE_DIR=/local\n \n # List the environment variables defined on a deployments 'sample-build'\n kubectl set env deployment/sample-build --list\n \n # List the environment variables defined on all pods\n kubectl set env pods --all --list\n \n # Output modified deployment in YAML, and does not alter the object on the server\n kubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml\n \n # Update all containers in all replication controllers in the project to have ENV=prod\n kubectl set env rc --all ENV=prod\n \n # Import environment from a secret\n kubectl set env --from=secret/mysecret deployment/myapp\n \n # Import environment from a config map with a prefix\n kubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp\n \n # Import specific keys from a config map\n kubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp\n \n # Remove the environment variable ENV from container 'c1' in all deployment configs\n kubectl set env deployments --all --containers=\"c1\" ENV-\n \n # Remove the environment variable ENV from a deployment definition on disk and\n # update the deployment config on the server\n kubectl set env -f deploy.json ENV-\n \n # Set some of the local shell environment into a deployment config on the server\n env | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code kubectl set env | If true, select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set env | If true, select all 
resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_explain_711"} {"prompt": "Explain this text code:\n\npod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)", "completion": "This text code kubectl set env | If true, select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set env | If true, select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_explain_712"} {"prompt": "Explain this bash code:\n\nkubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N", "completion": "This bash code kubectl set env | If true, select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set env | If true, select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_explain_713"} {"prompt": "Complete this text code:\n\n# Update deployment 'registry' with a new environment variable\n kubectl set env deployment/registry STORAGE_DIR=/local\n \n # List the environment variables defined on a deployments 'sample-build'\n kubectl set env deployment/sample-build --list\n \n # List the environment variables defined on all pods\n kubectl set env pods --all --list\n \n # Output modified deployment in YAML, and does not alter the object on the server\n kubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml\n \n # Update all containers in all replication controllers in the project to have ENV=prod\n kubectl set env rc --all ENV=prod\n \n # Import 
environment from a secret", "completion": " kubectl set env --from=secret/mysecret deployment/myapp\n \n # Import environment from a config map with a prefix\n kubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp\n \n # Import specific keys from a config map\n kubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp\n \n # Remove the environment variable ENV from container 'c1' in all deployment configs\n kubectl set env deployments --all --containers=\"c1\" ENV-\n \n # Remove the environment variable ENV from a deployment definition on disk and\n # update the deployment config on the server\n kubectl set env -f deploy.json ENV-\n \n # Set some of the local shell environment into a deployment config on the server\n env | grep RAILS_ | kubectl set env -e - deployment/registry", "language": "text", "type": "code-example", "context": "kubectl set env | If true, select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_completion_714"} {"prompt": "Explain this text code:\n\n# Update deployment 'registry' with a new environment variable\n kubectl set env deployment/registry STORAGE_DIR=/local\n \n # List the environment variables defined on a deployments 'sample-build'\n kubectl set env deployment/sample-build --list\n \n # List the environment variables defined on all pods\n kubectl set env pods --all --list\n \n # Output modified deployment in YAML, and does not alter the object on the server\n kubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml\n \n # Update all containers in all replication controllers in the project to have ENV=prod\n kubectl set env rc --all ENV=prod\n \n # Import environment from a secret\n kubectl set env --from=secret/mysecret deployment/myapp\n \n # Import environment from a config map with a prefix\n kubectl set env --from=configmap/myconfigmap 
--prefix=MYSQL_ deployment/myapp\n \n # Import specific keys from a config map\n kubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp\n \n # Remove the environment variable ENV from container 'c1' in all deployment configs\n kubectl set env deployments --all --containers=\"c1\" ENV-\n \n # Remove the environment variable ENV from a deployment definition on disk and\n # update the deployment config on the server\n kubectl set env -f deploy.json ENV-\n \n # Set some of the local shell environment into a deployment config on the server\n env | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code kubectl set env | If true, select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set env | If true, select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_env/", "id": "k8s_00030_code_explain_715"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/job/redis/redis-pod.yaml\nkubectl apply -f https://k8s.io/examples/application/job/redis/redis-service.yaml", "completion": "This bash code Fine Parallel Processing Using a Work Queue | Now let's fill the queue with some \"tasks\". In this example, the tasks are strings to be printed.", "language": "bash", "type": "kubectl-command_explanation", "context": "Fine Parallel Processing Using a Work Queue | Now let's fill the queue with some \"tasks\". 
In this example, the tasks are strings to be printed.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_716"} {"prompt": "Explain this bash code:\n\nkubectl run -i --tty temp --image redis --command \"/bin/sh\"", "completion": "This bash code Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "language": "bash", "type": "kubectl-command_explanation", "context": "Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_717"} {"prompt": "Explain this text code:\n\nWaiting for pod default/redis2-c7h78 to be running, status is Pending, pod ready: false\nHit enter for command prompt", "completion": "This text code Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "language": "text", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_718"} {"prompt": "Complete this text code:\n\nredis:6379> rpush job2 \"apple\"\n(integer) 1\nredis:6379> rpush job2 \"banana\"\n(integer) 2\nredis:6379> rpush job2 \"cherry\"\n(integer) 3\nredis:6379> rpush job2 \"date\"\n(integer) 4\nredis:6379> rpush job2 \"fig\"\n(integer) 5\nredis:6379> rpush job2 \"grape\"\n(integer) 6\nredis:6379> rpush job2 \"lemon\"\n(integer) 7", "completion": "redis:6379> rpush job2 \"melon\"\n(integer) 8\nredis:6379> rpush job2 \"orange\"\n(integer) 9\nredis:6379> lrange job2 0 -1\n1) \"apple\"\n2) \"banana\"\n3) \"cherry\"\n4) \"date\"\n5) \"fig\"\n6) 
\"grape\"\n7) \"lemon\"\n8) \"melon\"\n9) \"orange\"", "language": "text", "type": "code-example", "context": "Fine Parallel Processing Using a Work Queue | So, the list with key job2 will be the work queue.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_completion_719"} {"prompt": "Explain this text code:\n\nredis:6379> rpush job2 \"apple\"\n(integer) 1\nredis:6379> rpush job2 \"banana\"\n(integer) 2\nredis:6379> rpush job2 \"cherry\"\n(integer) 3\nredis:6379> rpush job2 \"date\"\n(integer) 4\nredis:6379> rpush job2 \"fig\"\n(integer) 5\nredis:6379> rpush job2 \"grape\"\n(integer) 6\nredis:6379> rpush job2 \"lemon\"\n(integer) 7\nredis:6379> rpush job2 \"melon\"\n(integer) 8\nredis:6379> rpush job2 \"orange\"\n(integer) 9\nredis:6379> lrange job2 0 -1\n1) \"apple\"\n2) \"banana\"\n3) \"cherry\"\n4) \"date\"\n5) \"fig\"\n6) \"grape\"\n7) \"lemon\"\n8) \"melon\"\n9) \"orange\"", "completion": "This text code Fine Parallel Processing Using a Work Queue | So, the list with key job2 will be the work queue.", "language": "text", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | So, the list with key job2 will be the work queue.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_720"} {"prompt": "Complete this python code:\n\n#!/usr/bin/env python\n\nimport time\nimport rediswq\n\nhost=\"redis\"\n# Uncomment next two lines if you do not have Kube-DNS working.\n# import os\n# host = os.getenv(\"REDIS_SERVICE_HOST\")\n\nq = rediswq.RedisWQ(name=\"job2\", host=host)", "completion": "print(\"Worker with sessionID: \" + q.sessionID())\nprint(\"Initial queue state: empty=\" + str(q.empty()))\nwhile not q.empty():\n item = q.lease(lease_secs=10, block=True, timeout=2) \n if item is not None:\n itemstr = item.decode(\"utf-8\")\n print(\"Working on \" + itemstr)\n time.sleep(10) # Put your actual work here 
instead of sleep.\n q.complete(item)\n else:\n print(\"Waiting for work\")\nprint(\"Queue empty, exiting\")", "language": "python", "type": "code-example", "context": "Fine Parallel Processing Using a Work Queue | You could also download worker.py, rediswq.py, and Dockerfile files, then build the container image. Here's an example using Docker to do the image build:", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_completion_721"} {"prompt": "Explain this python code:\n\n#!/usr/bin/env python\n\nimport time\nimport rediswq\n\nhost=\"redis\"\n# Uncomment next two lines if you do not have Kube-DNS working.\n# import os\n# host = os.getenv(\"REDIS_SERVICE_HOST\")\n\nq = rediswq.RedisWQ(name=\"job2\", host=host)\nprint(\"Worker with sessionID: \" + q.sessionID())\nprint(\"Initial queue state: empty=\" + str(q.empty()))\nwhile not q.empty():\n item = q.lease(lease_secs=10, block=True, timeout=2) \n if item is not None:\n itemstr = item.decode(\"utf-8\")\n print(\"Working on \" + itemstr)\n time.sleep(10) # Put your actual work here instead of sleep.\n q.complete(item)\n else:\n print(\"Waiting for work\")\nprint(\"Queue empty, exiting\")", "completion": "This python code Fine Parallel Processing Using a Work Queue | You could also download worker.py, rediswq.py, and Dockerfile files, then build the container image. Here's an example using Docker to do the image build:", "language": "python", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | You could also download worker.py, rediswq.py, and Dockerfile files, then build the container image. 
Here's an example using Docker to do the image build:", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_722"} {"prompt": "Explain this bash code:\n\ndocker tag job-wq-2 /job-wq-2\ndocker push /job-wq-2", "completion": "This bash code Fine Parallel Processing Using a Work Queue | You need to push to a public repository or configure your cluster to be able to access your private repository.", "language": "bash", "type": "shell-command_explanation", "context": "Fine Parallel Processing Using a Work Queue | You need to push to a public repository or configure your cluster to be able to access your private repository.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_723"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-2\nspec:\n parallelism: 2\n template:", "completion": " metadata:\n name: job-wq-2\n spec:\n containers:\n - name: c\n image: gcr.io/myproject/job-wq-2\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Fine Parallel Processing Using a Work Queue | In this example, each pod works on several items from the queue and then exits when there are no more items. Since the workers themselves detect when the workqueue is empty, and the Job controller doe", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_completion_724"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-2\nspec:\n parallelism: 2\n template:\n metadata:\n name: job-wq-2\n spec:\n containers:\n - name: c\n image: gcr.io/myproject/job-wq-2\n restartPolicy: OnFailure", "completion": "This yaml code Fine Parallel Processing Using a Work Queue | In this example, each pod works on several items from the queue and then exits when there are no more items. 
Since the workers themselves detect when the workqueue is empty, and the Job controller doe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Fine Parallel Processing Using a Work Queue | In this example, each pod works on several items from the queue and then exits when there are no more items. Since the workers themselves detect when the workqueue is empty, and the Job controller doe", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_725"} {"prompt": "Explain this text code:\n\n# this assumes you downloaded and then edited the manifest already\nkubectl apply -f ./job.yaml", "completion": "This text code Fine Parallel Processing Using a Work Queue | Now wait a bit, then check on the Job:", "language": "text", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | Now wait a bit, then check on the Job:", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_726"} {"prompt": "Complete this text code:\n\nName: job-wq-2\nNamespace: default\nSelector: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f\nLabels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f\n job-name=job-wq-2\nAnnotations: \nParallelism: 2\nCompletions: \nStart Time: Mon, 11 Jan 2022 17:07:59 +0000\nPods Statuses: 1 Running / 0 Succeeded / 0 Failed\nPod Template:\n Labels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f", "completion": " job-name=job-wq-2\n Containers:\n c:\n Image: container-registry.example/exampleproject/job-wq-2\n Port:\n Environment: \n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8", "language": "text", "type": "code-example", "context": "Fine Parallel Processing Using a Work 
Queue | You can wait for the Job to succeed, with a timeout:", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_completion_727"} {"prompt": "Explain this text code:\n\nName: job-wq-2\nNamespace: default\nSelector: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f\nLabels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f\n job-name=job-wq-2\nAnnotations: \nParallelism: 2\nCompletions: \nStart Time: Mon, 11 Jan 2022 17:07:59 +0000\nPods Statuses: 1 Running / 0 Succeeded / 0 Failed\nPod Template:\n Labels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f\n job-name=job-wq-2\n Containers:\n c:\n Image: container-registry.example/exampleproject/job-wq-2\n Port:\n Environment: \n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8", "completion": "This text code Fine Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "language": "text", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_728"} {"prompt": "Explain this text code:\n\n# The check for condition name is case insensitive\nkubectl wait --for=condition=complete --timeout=300s job/job-wq-2", "completion": "This text code Fine Parallel Processing Using a Work Queue | As you can see, one of the pods for this Job worked on several work units.", "language": "text", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | As you can see, one of the pods for this Job worked on several work units.", "source": 
"https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_729"} {"prompt": "Complete this text code:\n\nWorker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f\nInitial queue state: empty=False", "completion": "Working on banana\nWorking on date\nWorking on lemon", "language": "text", "type": "code-example", "context": "Fine Parallel Processing Using a Work Queue | As you can see, one of the pods for this Job worked on several work units.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_completion_730"} {"prompt": "Explain this text code:\n\nWorker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f\nInitial queue state: empty=False\nWorking on banana\nWorking on date\nWorking on lemon", "completion": "This text code Fine Parallel Processing Using a Work Queue | As you can see, one of the pods for this Job worked on several work units.", "language": "text", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | As you can see, one of the pods for this Job worked on several work units.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_731"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/job/redis/redis-pod.yaml\nkubectl apply -f https://k8s.io/examples/application/job/redis/redis-service.yaml", "completion": "This shell code Fine Parallel Processing Using a Work Queue | Now let's fill the queue with some \"tasks\". In this example, the tasks are strings to be printed.", "language": "shell", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | Now let's fill the queue with some \"tasks\". 
In this example, the tasks are strings to be printed.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_732"} {"prompt": "Explain this shell code:\n\nkubectl run -i --tty temp --image redis --command \"/bin/sh\"", "completion": "This shell code Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "language": "shell", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_733"} {"prompt": "Explain this text code:\n\nWaiting for pod default/redis2-c7h78 to be running, status is Pending, pod ready: false\nHit enter for command prompt", "completion": "This text code Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "language": "text", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | Now hit enter, start the Redis CLI, and create a list with some work items in it.", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_734"} {"prompt": "Complete this python code:\n\napplication/job/redis/worker.py\n#!/usr/bin/env python\n\nimport time\nimport rediswq\n\nhost=\"redis\"\n# Uncomment next two lines if you do not have Kube-DNS working.\n# import os\n# host = os.getenv(\"REDIS_SERVICE_HOST\")\n\nq = rediswq.RedisWQ(name=\"job2\", host=host)", "completion": "print(\"Worker with sessionID: \" + q.sessionID())\nprint(\"Initial queue state: empty=\" + str(q.empty()))\nwhile not q.empty():\n item = q.lease(lease_secs=10, block=True, timeout=2) \n if item is not None:\n itemstr = item.decode(\"utf-8\")\n print(\"Working on \" + itemstr)\n time.sleep(10) # Put your actual work here instead of sleep.\n q.complete(item)\n else:\n print(\"Waiting for work\")\nprint(\"Queue empty, exiting\")", "language": "python", "type": "code-example", "context": "Fine Parallel Processing Using a Work Queue | You could also download worker.py, rediswq.py, and Dockerfile files, then build the container image. 
Here's an example using Docker to do the image build:", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_completion_752"} {"prompt": "Explain this python code:\n\napplication/job/redis/worker.py\n#!/usr/bin/env python\n\nimport time\nimport rediswq\n\nhost=\"redis\"\n# Uncomment next two lines if you do not have Kube-DNS working.\n# import os\n# host = os.getenv(\"REDIS_SERVICE_HOST\")\n\nq = rediswq.RedisWQ(name=\"job2\", host=host)\nprint(\"Worker with sessionID: \" + q.sessionID())\nprint(\"Initial queue state: empty=\" + str(q.empty()))\nwhile not q.empty():\n item = q.lease(lease_secs=10, block=True, timeout=2) \n if item is not None:\n itemstr = item.decode(\"utf-8\")\n print(\"Working on \" + itemstr)\n time.sleep(10) # Put your actual work here instead of sleep.\n q.complete(item)\n else:\n print(\"Waiting for work\")\nprint(\"Queue empty, exiting\")", "completion": "This python code Fine Parallel Processing Using a Work Queue | You could also download worker.py, rediswq.py, and Dockerfile files, then build the container image. Here's an example using Docker to do the image build:", "language": "python", "type": "code-example_explanation", "context": "Fine Parallel Processing Using a Work Queue | You could also download worker.py, rediswq.py, and Dockerfile files, then build the container image. 
Here's an example using Docker to do the image build:", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_753"} {"prompt": "Complete this yaml code:\n\napplication/job/redis/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-2\nspec:\n parallelism: 2", "completion": " template:\n metadata:\n name: job-wq-2\n spec:\n containers:\n - name: c\n image: gcr.io/myproject/job-wq-2\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Fine Parallel Processing Using a Work Queue | In this example, each pod works on several items from the queue and then exits when there are no more items. 
Since the workers themselves detect when the workqueue is empty, and the Job controller doe", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_completion_757"} {"prompt": "Explain this yaml code:\n\napplication/job/redis/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-2\nspec:\n parallelism: 2\n template:\n metadata:\n name: job-wq-2\n spec:\n containers:\n - name: c\n image: gcr.io/myproject/job-wq-2\n restartPolicy: OnFailure", "completion": "This yaml code Fine Parallel Processing Using a Work Queue | In this example, each pod works on several items from the queue and then exits when there are no more items. Since the workers themselves detect when the workqueue is empty, and the Job controller doe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Fine Parallel Processing Using a Work Queue | In this example, each pod works on several items from the queue and then exits when there are no more items. Since the workers themselves detect when the workqueue is empty, and the Job controller doe", "source": "https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/", "id": "k8s_00031_code_explain_758"} {"prompt": "Explain this text code:\n\ncurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/index.json", "completion": "This text code Official CVE Feed | Link to RSS formatcurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "language": "text", "type": "code-example_explanation", "context": "Official CVE Feed | Link to RSS formatcurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "source": "https://kubernetes.io/docs/reference/issues-security/official-cve-feed/", "id": "k8s_00032_code_explain_763"} {"prompt": "Explain this text code:\n\ncurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "completion": "This text code Official CVE Feed | This feed is auto-refreshing with a noticeable but small lag (minutes to hours) from the time a CVE is announced to the time it is accessible in this feed.", "language": "text", "type": "code-example_explanation", "context": "Official CVE Feed | This feed is auto-refreshing with a noticeable but small lag (minutes to hours) from the time a CVE is announced to the time it is accessible in this feed.", "source": "https://kubernetes.io/docs/reference/issues-security/official-cve-feed/", "id": "k8s_00032_code_explain_764"} {"prompt": "Explain this shell code:\n\ncurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/index.json", "completion": "This shell code Official CVE Feed | Link to RSS formatcurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "language": "shell", "type": "code-example_explanation", 
"context": "Official CVE Feed | Link to RSS formatcurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "source": "https://kubernetes.io/docs/reference/issues-security/official-cve-feed/", "id": "k8s_00032_code_explain_765"} {"prompt": "Explain this shell code:\n\ncurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "completion": "This shell code Official CVE Feed | This feed is auto-refreshing with a noticeable but small lag (minutes to hours) from the time a CVE is announced to the time it is accessible in this feed.", "language": "shell", "type": "code-example_explanation", "context": "Official CVE Feed | This feed is auto-refreshing with a noticeable but small lag (minutes to hours) from the time a CVE is announced to the time it is accessible in this feed.", "source": "https://kubernetes.io/docs/reference/issues-security/official-cve-feed/", "id": "k8s_00032_code_explain_766"} {"prompt": "Explain this text code:\n\ncurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/index.json", "completion": "This text code Official CVE Feed | Link to RSS formatcurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "language": "text", "type": "code-example_explanation", "context": "Official CVE Feed | Link to RSS formatcurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "source": "https://kubernetes.io/docs/reference/issues-security/official-cve-feed/", "id": "k8s_00032_code_explain_767"} {"prompt": "Explain this text code:\n\ncurl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml", "completion": "This text code Official CVE Feed | This feed is auto-refreshing with a noticeable but small lag (minutes to hours) from the time a CVE is announced to the time it is accessible in this feed.", "language": "text", "type": "code-example_explanation", "context": "Official CVE Feed | This feed is auto-refreshing with a 
noticeable but small lag (minutes to hours) from the time a CVE is announced to the time it is accessible in this feed.", "source": "https://kubernetes.io/docs/reference/issues-security/official-cve-feed/", "id": "k8s_00032_code_explain_768"} {"prompt": "Explain this text code:\n\n# Print the client and server versions for the current context\n kubectl version", "completion": "This text code kubectl version | If true, shows client version only (no server required).", "language": "text", "type": "code-example_explanation", "context": "kubectl version | If true, shows client version only (no server required).", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_version/", "id": "k8s_00034_code_explain_769"} {"prompt": "Explain this text code:\n\n# Print the client and server versions for the current context\n kubectl version", "completion": "This text code kubectl version | If true, shows client version only (no server required).", "language": "text", "type": "code-example_explanation", "context": "kubectl version | If true, shows client version only (no server required).", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_version/", "id": "k8s_00034_code_explain_770"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. 
If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/nodes/", "id": "k8s_00035_code_completion_771"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "language": "json", "type": "api-response_explanation", "context": "Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/nodes/", "id": "k8s_00035_code_explain_772"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/nodes/", "id": "k8s_00035_code_completion_773"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. 
If the node is hea", "language": "json", "type": "api-response_explanation", "context": "Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/nodes/", "id": "k8s_00035_code_explain_774"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/nodes/", "id": "k8s_00035_code_completion_775"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "language": "json", "type": "api-response_explanation", "context": "Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. 
If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/nodes/", "id": "k8s_00035_code_explain_776"} {"prompt": "Explain this text code:\n\necho -n 'my-app' | base64\necho -n '39528$vdg7Jb' | base64", "completion": "This text code Distribute Credentials Securely Using Secrets | The output shows that the base-64 representation of your username is bXktYXBw, and the base-64 representation of your password is Mzk1MjgkdmRnN0pi.", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output shows that the base-64 representation of your username is bXktYXBw, and the base-64 representation of your password is Mzk1MjgkdmRnN0pi.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_777"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "language": "yaml", "type": "kubernetes-manifest", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_778"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Secret", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_779"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml", "completion": "This bash code Distribute Credentials 
Securely Using Secrets | View information about the Secret:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | View information about the Secret:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_780"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\ntest-secret Opaque 2 1m", "completion": "This text code Distribute Credentials Securely Using Secrets | View more detailed information about the Secret:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | View more detailed information about the Secret:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_781"} {"prompt": "Complete this text code:\n\nName: test-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n", "completion": "Type: Opaque\n\nData\n====\npassword: 13 bytes\nusername: 7 bytes", "language": "text", "type": "code-example", "context": "Distribute Credentials Securely Using Secrets | If you want to skip the Base64 encoding step, you can create the same Secret using the kubectl create secret command. For example:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_782"} {"prompt": "Explain this text code:\n\nName: test-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\npassword: 13 bytes\nusername: 7 bytes", "completion": "This text code Distribute Credentials Securely Using Secrets | If you want to skip the Base64 encoding step, you can create the same Secret using the kubectl create secret command. 
For example:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | If you want to skip the Base64 encoding step, you can create the same Secret using the kubectl create secret command. For example:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_783"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'", "completion": "This bash code Distribute Credentials Securely Using Secrets | This is more convenient. The detailed approach shown earlier runs through each step explicitly to demonstrate what is happening.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | This is more convenient. The detailed approach shown earlier runs through each step explicitly to demonstrate what is happening.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_784"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - name: test-container\n image: nginx\n volumeMounts:", "completion": " # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_785"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - 
name: test-container\n image: nginx\n volumeMounts:\n # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_786"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | Verify that your Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | Verify that your Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_787"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nsecret-test-pod 1/1 Running 0 42m", "completion": "This text code Distribute Credentials Securely Using Secrets | Get a shell into the Container that is running in your Pod:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Get a shell into the Container that is running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_788"} {"prompt": "Explain this text code:\n\n# Run this in the shell inside the container\nls /etc/secret-volume", "completion": "This text code Distribute Credentials Securely Using Secrets | The output shows two files, one for each piece of 
secret data:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output shows two files, one for each piece of secret data:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_789"} {"prompt": "Explain this text code:\n\n# Run this in the shell inside the container\necho \"$( cat /etc/secret-volume/username )\"\necho \"$( cat /etc/secret-volume/password )\"", "completion": "This text code Distribute Credentials Securely Using Secrets | The output is your username and password:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is your username and password:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_790"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:", "completion": " - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n items:\n - key: username\n path: my-group/my-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | When you deploy this Pod, the following happens:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_791"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n items:\n - key: username\n path: my-group/my-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | When you 
deploy this Pod, the following happens:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | When you deploy this Pod, the following happens:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_792"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n defaultMode: 0400", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_793"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n defaultMode: 0400", "completion": "This yaml code Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_794"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic backend-user --from-literal=backend-username='backend-admin'", "completion": "This bash code Distribute 
Credentials Securely Using Secrets | Assign the backend-username value defined in the Secret to the SECRET_USERNAME environment variable in the Pod specification.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | Assign the backend-username value defined in the Secret to the SECRET_USERNAME environment variable in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_795"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_796"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_797"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | In your shell, display the 
content of SECRET_USERNAME container environment variable.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display the content of SECRET_USERNAME container environment variable.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_798"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_799"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic backend-user --from-literal=backend-username='backend-admin'\nkubectl create secret generic db-user --from-literal=db-username='db-admin'", "completion": "This bash code Distribute Credentials Securely Using Secrets | Define the environment variables in the Pod specification.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | Define the environment variables in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_800"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:", "completion": " - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "language": "yaml", "type": 
"pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_801"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_802"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | In your shell, display the container environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display the container environment variables.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_803"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_804"} {"prompt": "Explain this text code:\n\nDB_USERNAME=db-admin\nBACKEND_USERNAME=backend-admin", "completion": "This text code Distribute Credentials Securely Using Secrets | Create a Secret containing multiple key-value pairs", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Create a Secret containing multiple key-value pairs", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_805"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'", "completion": "This bash code Distribute Credentials Securely Using Secrets | Use envFrom to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | Use envFrom to define all of the Secret's data as container environment variables. 
The key from the Secret becomes the environment variable name in the Pod.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_806"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:", "completion": " containers:\n - name: envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_807"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_808"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | In your shell, display username and password container environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display username and password container environment variables.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_809"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo 
\"username: $username\\npassword: $password\\n\"'", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_810"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_811"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_812"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\\*d$zDsb='", "completion": "This bash code Distribute Credentials Securely Using Secrets | You do not need to escape special characters in passwords from files (--from-file).", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | You do not need to escape 
special characters in passwords from files (--from-file).", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_813"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: List\nitems:\n- kind: Pod\n apiVersion: v1\n metadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: prod-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume", "completion": " readOnly: true\n mountPath: \"/etc/secret-volume\"\n- kind: Pod\n apiVersion: v1\n metadata:\n name: test-db-client-pod\n labels:\n name: test-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: test-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\nEOF", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_814"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: List\nitems:\n- kind: Pod\n apiVersion: v1\n metadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: prod-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\n- kind: Pod\n apiVersion: v1\n metadata:\n name: test-db-client-pod\n labels:\n name: test-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: test-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n 
volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\nEOF", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_815"} {"prompt": "Explain this text code:\n\n/etc/secret-volume/username\n/etc/secret-volume/password", "completion": "This text code Distribute Credentials Securely Using Secrets | You could further simplify the base Pod specification by using two service accounts:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | You could further simplify the base Pod specification by using two service accounts:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_816"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: prod-db-client-pod\n labels:", "completion": " name: prod-db-client\nspec:\n serviceAccount: prod-db-client\n containers:\n - name: db-client-container\n image: myClientImage", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_817"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\nspec:\n serviceAccount: prod-db-client\n containers:\n - name: db-client-container\n image: myClientImage", "completion": "This yaml code Distribute Credentials Securely Using Secrets |", "language": 
"yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_818"} {"prompt": "Explain this shell code:\n\necho -n 'my-app' | base64\necho -n '39528$vdg7Jb' | base64", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output shows that the base-64 representation of your username is bXktYXBw, and the base-64 representation of your password is Mzk1MjgkdmRnN0pi.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output shows that the base-64 representation of your username is bXktYXBw, and the base-64 representation of your password is Mzk1MjgkdmRnN0pi.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_819"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "language": "yaml", "type": "kubernetes-manifest", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_820"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Secret", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_821"} {"prompt": "Explain this shell 
code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml", "completion": "This shell code Distribute Credentials Securely Using Secrets | View information about the Secret:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | View information about the Secret:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_822"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\ntest-secret Opaque 2 1m", "completion": "This text code Distribute Credentials Securely Using Secrets | View more detailed information about the Secret:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | View more detailed information about the Secret:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_823"} {"prompt": "Complete this text code:\n\nName: test-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n", "completion": "Type: Opaque\n\nData\n====\npassword: 13 bytes\nusername: 7 bytes", "language": "text", "type": "code-example", "context": "Distribute Credentials Securely Using Secrets | If you want to skip the Base64 encoding step, you can create the same Secret using the kubectl create secret command. For example:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_824"} {"prompt": "Explain this text code:\n\nName: test-secret\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\npassword: 13 bytes\nusername: 7 bytes", "completion": "This text code Distribute Credentials Securely Using Secrets | If you want to skip the Base64 encoding step, you can create the same Secret using the kubectl create secret command. 
For example:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | If you want to skip the Base64 encoding step, you can create the same Secret using the kubectl create secret command. For example:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_825"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'", "completion": "This shell code Distribute Credentials Securely Using Secrets | This is more convenient. The detailed approach shown earlier runs through each step explicitly to demonstrate what is happening.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | This is more convenient. The detailed approach shown earlier runs through each step explicitly to demonstrate what is happening.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_826"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - name: test-container\n image: nginx\n volumeMounts:", "completion": " # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_827"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - 
name: test-container\n image: nginx\n volumeMounts:\n # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_828"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml", "completion": "This shell code Distribute Credentials Securely Using Secrets | Verify that your Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Verify that your Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_829"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nsecret-test-pod 1/1 Running 0 42m", "completion": "This text code Distribute Credentials Securely Using Secrets | Get a shell into the Container that is running in your Pod:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Get a shell into the Container that is running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_830"} {"prompt": "Explain this shell code:\n\n# Run this in the shell inside the container\nls /etc/secret-volume", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output shows two files, one for each piece of 
secret data:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output shows two files, one for each piece of secret data:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_831"} {"prompt": "Explain this shell code:\n\n# Run this in the shell inside the container\necho \"$( cat /etc/secret-volume/username )\"\necho \"$( cat /etc/secret-volume/password )\"", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output is your username and password:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is your username and password:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_832"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:", "completion": " - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n items:\n - key: username\n path: my-group/my-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | When you deploy this Pod, the following happens:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_833"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n items:\n - key: username\n path: my-group/my-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | When you 
deploy this Pod, the following happens:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | When you deploy this Pod, the following happens:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_834"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n defaultMode: 0400", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_835"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n defaultMode: 0400", "completion": "This yaml code Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_836"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic backend-user --from-literal=backend-username='backend-admin'", "completion": "This shell code Distribute 
Credentials Securely Using Secrets | Assign the backend-username value defined in the Secret to the SECRET_USERNAME environment variable in the Pod specification.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Assign the backend-username value defined in the Secret to the SECRET_USERNAME environment variable in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_837"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_838"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_839"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml", "completion": "This shell code Distribute Credentials Securely Using Secrets | In your shell, display the 
content of SECRET_USERNAME container environment variable.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display the content of SECRET_USERNAME container environment variable.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_840"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_841"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic backend-user --from-literal=backend-username='backend-admin'\nkubectl create secret generic db-user --from-literal=db-username='db-admin'", "completion": "This shell code Distribute Credentials Securely Using Secrets | Define the environment variables in the Pod specification.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Define the environment variables in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_842"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:", "completion": " - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "language": "yaml", "type": 
"pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_843"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_844"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml", "completion": "This shell code Distribute Credentials Securely Using Secrets | In your shell, display the container environment variables.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display the container environment variables.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_845"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_846"} {"prompt": "Explain this text code:\n\nDB_USERNAME=db-admin\nBACKEND_USERNAME=backend-admin", "completion": "This text code Distribute Credentials Securely Using Secrets | Create a Secret containing multiple key-value pairs", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Create a Secret containing multiple key-value pairs", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_847"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'", "completion": "This shell code Distribute Credentials Securely Using Secrets | Use envFrom to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Use envFrom to define all of the Secret's data as container environment variables. 
The key from the Secret becomes the environment variable name in the Pod.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_848"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:", "completion": " containers:\n - name: envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_849"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_850"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml", "completion": "This shell code Distribute Credentials Securely Using Secrets | In your shell, display username and password container environment variables.", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display username and password container environment variables.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_851"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo 
\"username: $username\\npassword: $password\\n\"'", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_852"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_853"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests", "completion": "This shell code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_854"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\\*d$zDsb='", "completion": "This shell code Distribute Credentials Securely Using Secrets | You do not need to escape special characters in passwords from files (--from-file).", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | You do not need to escape 
special characters in passwords from files (--from-file).", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_855"} {"prompt": "Complete this shell code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: List\nitems:\n- kind: Pod\n apiVersion: v1\n metadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: prod-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume", "completion": " readOnly: true\n mountPath: \"/etc/secret-volume\"\n- kind: Pod\n apiVersion: v1\n metadata:\n name: test-db-client-pod\n labels:\n name: test-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: test-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\nEOF", "language": "shell", "type": "code-example", "context": "Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_856"} {"prompt": "Explain this shell code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: List\nitems:\n- kind: Pod\n apiVersion: v1\n metadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: prod-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\n- kind: Pod\n apiVersion: v1\n metadata:\n name: test-db-client-pod\n labels:\n name: test-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: test-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n 
volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\nEOF", "completion": "This shell code Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "language": "shell", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_857"} {"prompt": "Explain this text code:\n\n/etc/secret-volume/username\n/etc/secret-volume/password", "completion": "This text code Distribute Credentials Securely Using Secrets | You could further simplify the base Pod specification by using two service accounts:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | You could further simplify the base Pod specification by using two service accounts:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_858"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: prod-db-client-pod\n labels:", "completion": " name: prod-db-client\nspec:\n serviceAccount: prod-db-client\n containers:\n - name: db-client-container\n image: myClientImage", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_859"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\nspec:\n serviceAccount: prod-db-client\n containers:\n - name: db-client-container\n image: myClientImage", "completion": "This yaml code Distribute Credentials Securely Using Secrets |", "language": 
"yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_860"} {"prompt": "Explain this text code:\n\necho -n 'my-app' | base64\necho -n '39528$vdg7Jb' | base64", "completion": "This text code Distribute Credentials Securely Using Secrets | The output shows that the base-64 representation of your username is bXktYXBw, and the base-64 representation of your password is Mzk1MjgkdmRnN0pi.", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output shows that the base-64 representation of your username is bXktYXBw, and the base-64 representation of your password is Mzk1MjgkdmRnN0pi.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_861"} {"prompt": "Complete this yaml code:\n\npods/inject/secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "language": "yaml", "type": "kubernetes-manifest", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_862"} {"prompt": "Explain this yaml code:\n\npods/inject/secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Secret", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": 
"k8s_00036_code_explain_863"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "language": "yaml", "type": "kubernetes-manifest", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_864"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: test-secret\ndata:\n username: bXktYXBw\n password: Mzk1MjgkdmRnN0pi", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Secret", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Secret", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_865"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | View information about the Secret:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | View information about the Secret:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_866"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'", "completion": "This bash code Distribute Credentials Securely Using Secrets | This is more convenient. 
The detailed approach shown earlier runs through each step explicitly to demonstrate what is happening.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | This is more convenient. The detailed approach shown earlier runs through each step explicitly to demonstrate what is happening.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_867"} {"prompt": "Complete this yaml code:\n\npods/inject/secret-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - name: test-container\n image: nginx", "completion": " volumeMounts:\n # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_868"} {"prompt": "Explain this yaml code:\n\npods/inject/secret-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - name: test-container\n image: nginx\n volumeMounts:\n # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_869"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - name: test-container\n image: nginx\n volumeMounts:", "completion": " # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_870"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-test-pod\nspec:\n containers:\n - name: test-container\n image: nginx\n volumeMounts:\n # name must match the volume name below\n - name: secret-volume\n mountPath: /etc/secret-volume\n readOnly: true\n # The secret data is exposed to Containers in the Pod through a Volume.\n volumes:\n - name: secret-volume\n secret:\n secretName: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_871"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | Verify that your Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | 
Verify that your Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_872"} {"prompt": "Explain this text code:\n\n# Run this in the shell inside the container\nls /etc/secret-volume", "completion": "This text code Distribute Credentials Securely Using Secrets | The output shows two files, one for each piece of secret data:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output shows two files, one for each piece of secret data:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_873"} {"prompt": "Explain this text code:\n\n# Run this in the shell inside the container\necho \"$( cat /etc/secret-volume/username )\"\necho \"$( cat /etc/secret-volume/password )\"", "completion": "This text code Distribute Credentials Securely Using Secrets | The output is your username and password:", "language": "text", "type": "code-example_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is your username and password:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_874"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:", "completion": " - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n items:\n - key: username\n path: my-group/my-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | When you deploy this Pod, the following happens:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_875"} {"prompt": 
"Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n items:\n - key: username\n path: my-group/my-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | When you deploy this Pod, the following happens:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | When you deploy this Pod, the following happens:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_876"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n defaultMode: 0400", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_877"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n defaultMode: 0400", "completion": "This yaml code Distribute Credentials Securely Using Secrets | The Secret is mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | The Secret is 
mounted on /etc/foo; all the files created by the secret volume mount have permission 0400.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_878"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic backend-user --from-literal=backend-username='backend-admin'", "completion": "This bash code Distribute Credentials Securely Using Secrets | Assign the backend-username value defined in the Secret to the SECRET_USERNAME environment variable in the Pod specification.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | Assign the backend-username value defined in the Secret to the SECRET_USERNAME environment variable in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_879"} {"prompt": "Complete this yaml code:\n\npods/inject/pod-single-secret-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:", "completion": " - name: envars-test-container\n image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_880"} {"prompt": "Explain this yaml code:\n\npods/inject/pod-single-secret-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": 
"yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_881"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_882"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-single-secret\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: SECRET_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_883"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | In your shell, display the content of SECRET_USERNAME container environment variable.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display the content of SECRET_USERNAME container environment variable.", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_884"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_885"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic backend-user --from-literal=backend-username='backend-admin'\nkubectl create secret generic db-user --from-literal=db-username='db-admin'", "completion": "This bash code Distribute Credentials Securely Using Secrets | Define the environment variables in the Pod specification.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | Define the environment variables in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_886"} {"prompt": "Complete this yaml code:\n\npods/inject/pod-multiple-secret-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:", "completion": " - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": 
"k8s_00036_code_completion_887"} {"prompt": "Explain this yaml code:\n\npods/inject/pod-multiple-secret-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_888"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:", "completion": " - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_889"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envvars-multiple-secrets\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: BACKEND_USERNAME\n valueFrom:\n secretKeyRef:\n name: backend-user\n key: backend-username\n - name: DB_USERNAME\n valueFrom:\n secretKeyRef:\n name: db-user\n key: db-username", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": 
"pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_890"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | In your shell, display the container environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display the container environment variables.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_891"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_892"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'", "completion": "This bash code Distribute Credentials Securely Using Secrets | Use envFrom to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | Use envFrom to define all of the Secret's data as container environment variables. 
The key from the Secret becomes the environment variable name in the Pod.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_893"} {"prompt": "Complete this yaml code:\n\npods/inject/pod-secret-envFrom.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:", "completion": " containers:\n - name: envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_894"} {"prompt": "Explain this yaml code:\n\npods/inject/pod-secret-envFrom.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_895"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:", "completion": " containers:\n - name: envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_896"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfrom-secret\nspec:\n containers:\n - name: 
envars-test-container\n image: nginx\n envFrom:\n - secretRef:\n name: test-secret", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_897"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml", "completion": "This bash code Distribute Credentials Securely Using Secrets | In your shell, display username and password container environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | In your shell, display username and password container environment variables.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_898"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo \"username: $username\\npassword: $password\\n\"'", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_899"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is 
similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_900"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests", "completion": "This bash code Distribute Credentials Securely Using Secrets | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_901"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\\*d$zDsb='", "completion": "This bash code Distribute Credentials Securely Using Secrets | You do not need to escape special characters in passwords from files (--from-file).", "language": "bash", "type": "kubectl-command_explanation", "context": "Distribute Credentials Securely Using Secrets | You do not need to escape special characters in passwords from files (--from-file).", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_902"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: List\nitems:\n- kind: Pod\n apiVersion: v1\n metadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: prod-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume", "completion": " readOnly: true\n mountPath: \"/etc/secret-volume\"\n- kind: Pod\n apiVersion: v1\n metadata:\n name: test-db-client-pod\n labels:\n name: test-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: test-db-secret\n 
containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\nEOF", "language": "yaml", "type": "pod-definition", "context": "Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_903"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: List\nitems:\n- kind: Pod\n apiVersion: v1\n metadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: prod-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\n- kind: Pod\n apiVersion: v1\n metadata:\n name: test-db-client-pod\n labels:\n name: test-db-client\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: test-db-secret\n containers:\n - name: db-client-container\n image: myClientImage\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"\nEOF", "completion": "This yaml code Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets | Apply all those objects on the API server by running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_904"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: prod-db-client-pod\n labels:", "completion": " name: prod-db-client\nspec:\n serviceAccount: prod-db-client\n containers:\n - name: db-client-container\n image: myClientImage", "language": "yaml", "type": 
"pod-definition", "context": "Distribute Credentials Securely Using Secrets |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_completion_905"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: prod-db-client-pod\n labels:\n name: prod-db-client\nspec:\n serviceAccount: prod-db-client\n containers:\n - name: db-client-container\n image: myClientImage", "completion": "This yaml code Distribute Credentials Securely Using Secrets |", "language": "yaml", "type": "pod-definition_explanation", "context": "Distribute Credentials Securely Using Secrets |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/", "id": "k8s_00036_code_explain_906"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_completion_907"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": 
"k8s_00037_code_explain_908"} {"prompt": "Explain this text code:\n\n# HELP some_counter this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code Metrics For Kubernetes System Components | After deprecation", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | After deprecation", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_909"} {"prompt": "Explain this text code:\n\n# HELP some_counter (Deprecated since 1.15.0) this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the Show hidden metrics section.", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. 
To use a hidden metric, please refer to the Show hidden metrics section.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_910"} {"prompt": "Complete this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}", "completion": "cloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "language": "text", "type": "code-example", "context": "Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_completion_911"} {"prompt": "Explain this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "completion": "This text code Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. 
These metrics can be used to build capacity planning dashboards, assess current", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_912"} {"prompt": "Complete this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total", "completion": "container_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "language": "text", "type": "code-example", "context": "Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_completion_913"} {"prompt": "Explain this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total\ncontainer_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "completion": "This text code Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. 
The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_914"} {"prompt": "Explain this text code:\n\n--allow-metric-labels ,=', ...', ,=', ...', ...", "completion": "This text code Metrics For Kubernetes System Components | Here is an example:", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | Here is an example:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_915"} {"prompt": "Explain this text code:\n\n--allow-metric-labels number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'", "completion": "This text code Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. 
You can specify the path to that configuration file using the --allow-metric-labels-manifest command line", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_916"} {"prompt": "Explain this text code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This text code Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_917"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_completion_918"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_919"} {"prompt": "Explain this text code:\n\n# HELP some_counter this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code Metrics For Kubernetes System Components | After deprecation", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | After deprecation", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_920"} {"prompt": "Explain this text code:\n\n# HELP some_counter (Deprecated since 1.15.0) this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the Show hidden metrics section.", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. 
To use a hidden metric, please refer to the Show hidden metrics section.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_921"} {"prompt": "Complete this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}", "completion": "cloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "language": "text", "type": "code-example", "context": "Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_completion_922"} {"prompt": "Explain this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "completion": "This text code Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. 
These metrics can be used to build capacity planning dashboards, assess current", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_923"} {"prompt": "Complete this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total", "completion": "container_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "language": "text", "type": "code-example", "context": "Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_completion_924"} {"prompt": "Explain this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total\ncontainer_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "completion": "This text code Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. 
The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_925"} {"prompt": "Explain this text code:\n\n--allow-metric-labels ,=', ...', ,=', ...', ...", "completion": "This text code Metrics For Kubernetes System Components | Here is an example:", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | Here is an example:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_926"} {"prompt": "Explain this none code:\n\n--allow-metric-labels number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'", "completion": "This none code Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "language": "none", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. 
You can specify the path to that configuration file using the --allow-metric-labels-manifest command line", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_927"} {"prompt": "Explain this yaml code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This yaml code Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_928"} {"prompt": "Explain this text code:\n\ncardinality_enforcement_unexpected_categorizations_total", "completion": "This text code Metrics For Kubernetes System Components |", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_929"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_completion_930"} 
{"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_931"} {"prompt": "Explain this text code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This text code Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "text", "type": "code-example_explanation", "context": "Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/", "id": "k8s_00037_code_explain_932"} {"prompt": "Complete this text code:\n\n# !!!Important Note!!!\n # Requires that the 'tar' binary is present in your container\n # image. 
If 'tar' is not present, 'kubectl cp' will fail.\n #\n # For advanced use cases, such as symlinks, wildcard expansion or\n # file mode preservation, consider using 'kubectl exec'.\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n tar cf - /tmp/foo | kubectl exec -i -n -- tar xf - -C /tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl exec -n -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": " \n # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace\n kubectl cp /tmp/foo_dir :/tmp/bar_dir\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container\n kubectl cp /tmp/foo :/tmp/bar -c \n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n kubectl cp /tmp/foo /:/tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl cp /:/tmp/foo /tmp/bar", "language": "text", "type": "code-example", "context": "kubectl cp | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cp/", "id": "k8s_00038_code_completion_933"} {"prompt": "Explain this text code:\n\n# !!!Important Note!!!\n # Requires that the 'tar' binary is present in your container\n # image. 
If 'tar' is not present, 'kubectl cp' will fail.\n #\n # For advanced use cases, such as symlinks, wildcard expansion or\n # file mode preservation, consider using 'kubectl exec'.\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n tar cf - /tmp/foo | kubectl exec -i -n -- tar xf - -C /tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl exec -n -- tar cf - /tmp/foo | tar xf - -C /tmp/bar\n \n # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace\n kubectl cp /tmp/foo_dir :/tmp/bar_dir\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container\n kubectl cp /tmp/foo :/tmp/bar -c \n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n kubectl cp /tmp/foo /:/tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl cp /:/tmp/foo /tmp/bar", "completion": "This text code kubectl cp | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "text", "type": "code-example_explanation", "context": "kubectl cp | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cp/", "id": "k8s_00038_code_explain_934"} {"prompt": "Complete this text code:\n\n# !!!Important Note!!!\n # Requires that the 'tar' binary is present in your container\n # image. 
If 'tar' is not present, 'kubectl cp' will fail.\n #\n # For advanced use cases, such as symlinks, wildcard expansion or\n # file mode preservation, consider using 'kubectl exec'.\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n tar cf - /tmp/foo | kubectl exec -i -n -- tar xf - -C /tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl exec -n -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": " \n # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace\n kubectl cp /tmp/foo_dir :/tmp/bar_dir\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container\n kubectl cp /tmp/foo :/tmp/bar -c \n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n kubectl cp /tmp/foo /:/tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl cp /:/tmp/foo /tmp/bar", "language": "text", "type": "code-example", "context": "kubectl cp | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cp/", "id": "k8s_00038_code_completion_935"} {"prompt": "Explain this text code:\n\n# !!!Important Note!!!\n # Requires that the 'tar' binary is present in your container\n # image. 
If 'tar' is not present, 'kubectl cp' will fail.\n #\n # For advanced use cases, such as symlinks, wildcard expansion or\n # file mode preservation, consider using 'kubectl exec'.\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n tar cf - /tmp/foo | kubectl exec -i -n -- tar xf - -C /tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl exec -n -- tar cf - /tmp/foo | tar xf - -C /tmp/bar\n \n # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace\n kubectl cp /tmp/foo_dir :/tmp/bar_dir\n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container\n kubectl cp /tmp/foo :/tmp/bar -c \n \n # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace \n kubectl cp /tmp/foo /:/tmp/bar\n \n # Copy /tmp/foo from a remote pod to /tmp/bar locally\n kubectl cp /:/tmp/foo /tmp/bar", "completion": "This text code kubectl cp | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "text", "type": "code-example_explanation", "context": "kubectl cp | Container name. 
If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cp/", "id": "k8s_00038_code_explain_936"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_completion_937"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code Annotations | Here are some examples of information that could be recorded in annotations:", "language": "text", "type": "code-example_explanation", "context": "Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_explain_938"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Annotations |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_completion_939"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": 
"This yaml code Annotations |", "language": "yaml", "type": "pod-definition_explanation", "context": "Annotations |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_explain_940"} {"prompt": "Complete this json code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "json", "type": "api-response", "context": "Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_completion_941"} {"prompt": "Explain this json code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This json code Annotations | Here are some examples of information that could be recorded in annotations:", "language": "json", "type": "api-response_explanation", "context": "Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_explain_942"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Annotations |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_completion_943"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Annotations |", 
"language": "yaml", "type": "pod-definition_explanation", "context": "Annotations |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_explain_944"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_completion_945"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code Annotations | Here are some examples of information that could be recorded in annotations:", "language": "text", "type": "code-example_explanation", "context": "Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_explain_946"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Annotations |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_completion_947"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Annotations |", "language": "yaml", "type": 
"pod-definition_explanation", "context": "Annotations |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/", "id": "k8s_00039_code_explain_948"} {"prompt": "Complete this text code:\n\ncontainer_fs_inodes_free\ncontainer_fs_inodes_total\ncontainer_fs_io_current\ncontainer_fs_io_time_seconds_total\ncontainer_fs_io_time_weighted_seconds_total\ncontainer_fs_limit_bytes", "completion": "container_fs_read_seconds_total\ncontainer_fs_reads_merged_total\ncontainer_fs_sector_reads_total\ncontainer_fs_sector_writes_total\ncontainer_fs_usage_bytes\ncontainer_fs_write_seconds_total\ncontainer_fs_writes_merged_total", "language": "text", "type": "code-example", "context": "Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/", "id": "k8s_00040_code_completion_949"} {"prompt": "Explain this text code:\n\ncontainer_fs_inodes_free\ncontainer_fs_inodes_total\ncontainer_fs_io_current\ncontainer_fs_io_time_seconds_total\ncontainer_fs_io_time_weighted_seconds_total\ncontainer_fs_limit_bytes\ncontainer_fs_read_seconds_total\ncontainer_fs_reads_merged_total\ncontainer_fs_sector_reads_total\ncontainer_fs_sector_writes_total\ncontainer_fs_usage_bytes\ncontainer_fs_write_seconds_total\ncontainer_fs_writes_merged_total", "completion": "This text code Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "language": "text", "type": "code-example_explanation", "context": "Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/", "id": "k8s_00040_code_explain_950"} {"prompt": "Explain this text 
code:\n\nk8s_____", "completion": "This text code Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "language": "text", "type": "code-example_explanation", "context": "Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/", "id": "k8s_00040_code_explain_951"} {"prompt": "Complete this text code:\n\ncontainer_fs_inodes_free\ncontainer_fs_inodes_total\ncontainer_fs_io_current\ncontainer_fs_io_time_seconds_total\ncontainer_fs_io_time_weighted_seconds_total\ncontainer_fs_limit_bytes", "completion": "container_fs_read_seconds_total\ncontainer_fs_reads_merged_total\ncontainer_fs_sector_reads_total\ncontainer_fs_sector_writes_total\ncontainer_fs_usage_bytes\ncontainer_fs_write_seconds_total\ncontainer_fs_writes_merged_total", "language": "text", "type": "code-example", "context": "Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/", "id": "k8s_00040_code_completion_952"} {"prompt": "Explain this text code:\n\ncontainer_fs_inodes_free\ncontainer_fs_inodes_total\ncontainer_fs_io_current\ncontainer_fs_io_time_seconds_total\ncontainer_fs_io_time_weighted_seconds_total\ncontainer_fs_limit_bytes\ncontainer_fs_read_seconds_total\ncontainer_fs_reads_merged_total\ncontainer_fs_sector_reads_total\ncontainer_fs_sector_writes_total\ncontainer_fs_usage_bytes\ncontainer_fs_write_seconds_total\ncontainer_fs_writes_merged_total", "completion": "This text code Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "language": "text", "type": 
"code-example_explanation", "context": "Check whether dockershim removal affects you | You can mitigate this issue by using cAdvisor as a standalone daemonset.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/", "id": "k8s_00040_code_explain_953"} {"prompt": "Explain this bash code:\n\nkubectl create configmap NAME [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]", "completion": "This bash code kubectl create configmap | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create configmap | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_configmap/", "id": "k8s_00041_code_explain_954"} {"prompt": "Complete this text code:\n\n# Create a new config map named my-config based on folder bar\n kubectl create configmap my-config --from-file=path/to/bar\n \n # Create a new config map named my-config with specified keys instead of file basenames on disk\n kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt\n \n # Create a new config map named my-config with key1=config1 and key2=config2", "completion": " kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2\n \n # Create a new config map named my-config from the key=value pairs in the file\n kubectl create configmap my-config --from-file=path/to/bar\n \n # Create a new config map named my-config from an env file\n kubectl create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "language": "text", "type": 
"code-example", "context": "kubectl create configmap | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_configmap/", "id": "k8s_00041_code_completion_955"} {"prompt": "Explain this text code:\n\n# Create a new config map named my-config based on folder bar\n kubectl create configmap my-config --from-file=path/to/bar\n \n # Create a new config map named my-config with specified keys instead of file basenames on disk\n kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt\n \n # Create a new config map named my-config with key1=config1 and key2=config2\n kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2\n \n # Create a new config map named my-config from the key=value pairs in the file\n kubectl create configmap my-config --from-file=path/to/bar\n \n # Create a new config map named my-config from an env file\n kubectl create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "completion": "This text code kubectl create configmap | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create configmap | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_configmap/", "id": "k8s_00041_code_explain_956"} {"prompt": "Complete this text code:\n\n# Show metrics for all pods in the default namespace\n kubectl top pod\n \n # Show metrics for all pods in the given namespace\n kubectl top pod --namespace=NAMESPACE", "completion": " \n # Show metrics for a given pod and its containers\n kubectl top pod POD_NAME --containers\n \n # Show metrics for the pods defined by label name=myLabel\n kubectl top pod -l name=myLabel", "language": "text", "type": "code-example", "context": "kubectl top pod | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/", "id": "k8s_00042_code_completion_960"} {"prompt": "Explain this text code:\n\n# Show metrics for all pods in the default namespace\n kubectl top pod\n \n # Show metrics for all pods in the given namespace\n kubectl top pod --namespace=NAMESPACE\n \n # Show metrics for a given pod and its containers\n kubectl top pod POD_NAME --containers\n \n # Show metrics for the pods defined by label name=myLabel\n kubectl top pod -l name=myLabel", "completion": "This text code kubectl top pod | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "text", "type": "code-example_explanation", "context": "kubectl top pod | If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/", "id": "k8s_00042_code_explain_961"} {"prompt": "Complete this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\n...", "completion": "kube-dns-autoscaler 1/1 1 1 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | If you see \"kube-dns-autoscaler\" in the output, DNS horizontal autoscaling is already enabled, and you can skip to Tuning autoscaling parameters.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_964"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\n...\nkube-dns-autoscaler 1/1 1 1 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | If you see \"kube-dns-autoscaler\" in the output, DNS horizontal autoscaling is already enabled, and you can skip to Tuning autoscaling parameters.", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | If you see \"kube-dns-autoscaler\" in the output, DNS horizontal autoscaling is already enabled, and you can skip to Tuning autoscaling parameters.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_965"} {"prompt": "Explain this bash code:\n\nkubectl get deployment -l k8s-app=kube-dns --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_966"} {"prompt": "Complete this text code:\n\nNAME READY 
UP-TO-DATE AVAILABLE AGE\n...", "completion": "coredns 2/2 2 2 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | If you don't see a Deployment for DNS services, you can also look for it by name:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_967"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\n...\ncoredns 2/2 2 2 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | If you don't see a Deployment for DNS services, you can also look for it by name:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | If you don't see a Deployment for DNS services, you can also look for it by name:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_968"} {"prompt": "Complete this yaml code:\n\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: 
system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - --logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "language": "yaml", "type": "deployment-definition", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_969"} {"prompt": "Explain this yaml code:\n\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n 
resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - 
--logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "completion": "This yaml code Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_970"} {"prompt": "Complete this text code:\n\nNAME DATA AGE\n...", "completion": "kube-dns-autoscaler 1 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | Modify the data in the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_971"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\n...\nkube-dns-autoscaler 1 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | Modify the data in the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | Modify the data in the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_972"} {"prompt": "Explain this bash code:\n\nkubectl edit configmap kube-dns-autoscaler --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | Look for this line:", "language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | Look for this line:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_973"} {"prompt": "Explain this text code:\n\nlinear: '{\"coresPerReplica\":256,\"min\":1,\"nodesPerReplica\":16}'", "completion": "This 
text code Autoscale the DNS Service in a Cluster | Modify the fields according to your needs. The \"min\" field indicates the minimal number of DNS backends. The actual number of backends is calculated using this equation:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | Modify the fields according to your needs. The \"min\" field indicates the minimal number of DNS backends. The actual number of backends is calculated using this equation:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_974"} {"prompt": "Explain this text code:\n\nreplicas = max( ceil( cores × 1/coresPerReplica ) , ceil( nodes × 1/nodesPerReplica ) )", "completion": "This text code Autoscale the DNS Service in a Cluster | Note that the values of both coresPerReplica and nodesPerReplica are floats.", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | Note that the values of both coresPerReplica and nodesPerReplica are floats.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_975"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_976"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\n...", "completion": "kube-dns-autoscaler-6b59789fc8 0 0 0 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | This option works if kube-dns-autoscaler is under your own 
control, which means no one will re-create it:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_977"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\n...\nkube-dns-autoscaler-6b59789fc8 0 0 0 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | This option works if kube-dns-autoscaler is under your own control, which means no one will re-create it:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | This option works if kube-dns-autoscaler is under your own control, which means no one will re-create it:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_978"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment kube-dns-autoscaler --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_979"} {"prompt": "Explain this text code:\n\n/etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml", "completion": "This text code Autoscale the DNS Service in a Cluster | After the manifest file is deleted, the Addon Manager will delete the kube-dns-autoscaler Deployment.", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | After the manifest file is deleted, the Addon Manager will delete the kube-dns-autoscaler Deployment.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_980"} {"prompt": "Complete this text code:\n\nNAME READY UP-TO-DATE AVAILABLE 
AGE\n...", "completion": "kube-dns-autoscaler 1/1 1 1 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | If you see \"kube-dns-autoscaler\" in the output, DNS horizontal autoscaling is already enabled, and you can skip to Tuning autoscaling parameters.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_981"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\n...\nkube-dns-autoscaler 1/1 1 1 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | If you see \"kube-dns-autoscaler\" in the output, DNS horizontal autoscaling is already enabled, and you can skip to Tuning autoscaling parameters.", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | If you see \"kube-dns-autoscaler\" in the output, DNS horizontal autoscaling is already enabled, and you can skip to Tuning autoscaling parameters.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_982"} {"prompt": "Explain this shell code:\n\nkubectl get deployment -l k8s-app=kube-dns --namespace=kube-system", "completion": "This shell code Autoscale the DNS Service in a Cluster | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_983"} {"prompt": "Complete this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\n...", "completion": "coredns 2/2 2 2 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | If you don't see a Deployment for DNS services, you can also look for it by name:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_984"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\n...\ncoredns 2/2 2 2 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | If you don't see a Deployment for DNS services, you can also look for it by name:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | If you don't see a Deployment for DNS services, you can also look for it by name:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_985"} {"prompt": "Complete this yaml code:\n\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n 
selector:\n matchLabels:\n k8s-app: kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - --logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "language": "yaml", "type": "deployment-definition", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_986"} {"prompt": "Explain this yaml code:\n\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the 
configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - --logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "completion": "This yaml code Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "language": "yaml", "type": 
"deployment-definition_explanation", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_987"} {"prompt": "Complete this text code:\n\nNAME DATA AGE\n...", "completion": "kube-dns-autoscaler 1 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | Modify the data in the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_988"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\n...\nkube-dns-autoscaler 1 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | Modify the data in the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | Modify the data in the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_989"} {"prompt": "Explain this shell code:\n\nkubectl edit configmap kube-dns-autoscaler --namespace=kube-system", "completion": "This shell code Autoscale the DNS Service in a Cluster | Look for this line:", "language": "shell", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | Look for this line:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_990"} {"prompt": "Explain this yaml code:\n\nlinear: '{\"coresPerReplica\":256,\"min\":1,\"nodesPerReplica\":16}'", "completion": "This yaml code Autoscale the DNS Service in a Cluster | Modify the fields according to your needs. The \"min\" field indicates the minimal number of DNS backends. 
The actual number of backends is calculated using this equation:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Autoscale the DNS Service in a Cluster | Modify the fields according to your needs. The \"min\" field indicates the minimal number of DNS backends. The actual number of backends is calculated using this equation:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_991"} {"prompt": "Explain this text code:\n\nreplicas = max( ceil( cores × 1/coresPerReplica ) , ceil( nodes × 1/nodesPerReplica ) )", "completion": "This text code Autoscale the DNS Service in a Cluster | Note that the values of both coresPerReplica and nodesPerReplica are floats.", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | Note that the values of both coresPerReplica and nodesPerReplica are floats.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_992"} {"prompt": "Explain this shell code:\n\nkubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system", "completion": "This shell code Autoscale the DNS Service in a Cluster | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_993"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\n...", "completion": "kube-dns-autoscaler-6b59789fc8 0 0 0 ...\n...", "language": "text", "type": "code-example", "context": "Autoscale the DNS Service in a Cluster | This option works if kube-dns-autoscaler is under your own control, which means no one will re-create it:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": 
"k8s_00043_code_completion_994"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\n...\nkube-dns-autoscaler-6b59789fc8 0 0 0 ...\n...", "completion": "This text code Autoscale the DNS Service in a Cluster | This option works if kube-dns-autoscaler is under your own control, which means no one will re-create it:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | This option works if kube-dns-autoscaler is under your own control, which means no one will re-create it:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_995"} {"prompt": "Explain this shell code:\n\nkubectl delete deployment kube-dns-autoscaler --namespace=kube-system", "completion": "This shell code Autoscale the DNS Service in a Cluster | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_996"} {"prompt": "Explain this text code:\n\n/etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml", "completion": "This text code Autoscale the DNS Service in a Cluster | After the manifest file is deleted, the Addon Manager will delete the kube-dns-autoscaler Deployment.", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | After the manifest file is deleted, the Addon Manager will delete the kube-dns-autoscaler Deployment.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_997"} {"prompt": "Explain this bash code:\n\nkubectl get deployment -l k8s-app=kube-dns --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | The output is similar to this:", 
"language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_998"} {"prompt": "Complete this yaml code:\n\nadmin/dns/dns-horizontal-autoscaler.yaml\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - --logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "language": "yaml", "type": "deployment-definition", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_999"} {"prompt": "Explain this yaml code:\n\nadmin/dns/dns-horizontal-autoscaler.yaml\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: 
ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - --logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "completion": "This yaml code Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_1000"} {"prompt": "Complete this yaml code:\n\nkind: 
ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), 
\"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - --logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "language": "yaml", "type": "deployment-definition", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_completion_1001"} {"prompt": "Explain this yaml code:\n\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nrules:\n - apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"list\", \"watch\"]\n - apiGroups: [\"\"]\n resources: [\"replicationcontrollers/scale\"]\n verbs: [\"get\", \"update\"]\n - apiGroups: [\"apps\"]\n resources: [\"deployments/scale\", \"replicasets/scale\"]\n verbs: [\"get\", \"update\"]\n# Remove the configmaps rule once below issue is fixed:\n# kubernetes-incubator/cluster-proportional-autoscaler#16\n - apiGroups: [\"\"]\n resources: [\"configmaps\"]\n verbs: [\"get\", \"create\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: system:kube-dns-autoscaler\nsubjects:\n - kind: ServiceAccount\n name: kube-dns-autoscaler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-dns-autoscaler\n apiGroup: rbac.authorization.k8s.io\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-dns-autoscaler\n namespace: kube-system\n labels:\n k8s-app: kube-dns-autoscaler\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: 
kube-dns-autoscaler\n template:\n metadata:\n labels:\n k8s-app: kube-dns-autoscaler\n spec:\n priorityClassName: system-cluster-critical\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n supplementalGroups: [ 65534 ]\n fsGroup: 65534\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: autoscaler\n image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4\n resources:\n requests:\n cpu: \"20m\"\n memory: \"10Mi\"\n command:\n - /cluster-proportional-autoscaler\n - --namespace=kube-system\n - --configmap=kube-dns-autoscaler\n # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base\n - --target=\n # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate.\n # If using small nodes, \"nodesPerReplica\" should dominate.\n - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true,\"includeUnschedulableNodes\":true}}\n - --logtostderr=true\n - --v=2\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n serviceAccountName: kube-dns-autoscaler", "completion": "This yaml code Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Autoscale the DNS Service in a Cluster | In the file, replace with your scale target.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_1002"} {"prompt": "Explain this bash code:\n\nkubectl edit configmap kube-dns-autoscaler --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | Look for this line:", "language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | Look for this line:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_1003"} {"prompt": 
"Explain this text code:\n\nlinear: '{\"coresPerReplica\":256,\"min\":1,\"nodesPerReplica\":16}'", "completion": "This text code Autoscale the DNS Service in a Cluster | Modify the fields according to your needs. The \"min\" field indicates the minimal number of DNS backends. The actual number of backends is calculated using this equation:", "language": "text", "type": "code-example_explanation", "context": "Autoscale the DNS Service in a Cluster | Modify the fields according to your needs. The \"min\" field indicates the minimal number of DNS backends. The actual number of backends is calculated using this equation:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_1004"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_1005"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment kube-dns-autoscaler --namespace=kube-system", "completion": "This bash code Autoscale the DNS Service in a Cluster | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Autoscale the DNS Service in a Cluster | The output is:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/", "id": "k8s_00043_code_explain_1006"} {"prompt": "Complete this text code:\n\nkube-apiserver statusz\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nStarted: Wed Oct 16 21:03:43 UTC 2024", "completion": "Up: 0 hr 00 min 16 sec\nGo version: go1.23.2\nBinary 
version: 1.32.0-alpha.0.1484+5eeac4f21a491b-dirty\nEmulation version: 1.32.0-alpha.0.1484", "language": "text", "type": "code-example", "context": "Kubernetes z-pages | Enabled using the ComponentFlagz feature gate, the /flagz endpoint shows you the command line arguments that were used to start a component.", "source": "https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_completion_1007"} {"prompt": "Explain this text code:\n\nkube-apiserver statusz\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nStarted: Wed Oct 16 21:03:43 UTC 2024\nUp: 0 hr 00 min 16 sec\nGo version: go1.23.2\nBinary version: 1.32.0-alpha.0.1484+5eeac4f21a491b-dirty\nEmulation version: 1.32.0-alpha.0.1484", "completion": "This text code Kubernetes z-pages | Enabled using the ComponentFlagz feature gate, the /flagz endpoint shows you the command line arguments that were used to start a component.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes z-pages | Enabled using the ComponentFlagz feature gate, the /flagz endpoint shows you the command line arguments that were used to start a component.", "source": "https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_explain_1008"} {"prompt": "Complete this text code:\n\nkube-apiserver flags\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nadvertise-address=192.168.8.2\ncontention-profiling=false\nenable-priority-and-fairness=true", "completion": "profiling=true\nauthorization-mode=[Node,RBAC]\nauthorization-webhook-cache-authorized-ttl=5m0s\nauthorization-webhook-cache-unauthorized-ttl=30s\nauthorization-webhook-version=v1beta1\ndefault-watch-cache-size=100", "language": "text", "type": "code-example", "context": "Kubernetes z-pages |", "source": 
"https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_completion_1009"} {"prompt": "Explain this text code:\n\nkube-apiserver flags\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nadvertise-address=192.168.8.2\ncontention-profiling=false\nenable-priority-and-fairness=true\nprofiling=true\nauthorization-mode=[Node,RBAC]\nauthorization-webhook-cache-authorized-ttl=5m0s\nauthorization-webhook-cache-unauthorized-ttl=30s\nauthorization-webhook-version=v1beta1\ndefault-watch-cache-size=100", "completion": "This text code Kubernetes z-pages |", "language": "text", "type": "code-example_explanation", "context": "Kubernetes z-pages |", "source": "https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_explain_1010"} {"prompt": "Complete this text code:\n\nkube-apiserver statusz\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nStarted: Wed Oct 16 21:03:43 UTC 2024", "completion": "Up: 0 hr 00 min 16 sec\nGo version: go1.23.2\nBinary version: 1.32.0-alpha.0.1484+5eeac4f21a491b-dirty\nEmulation version: 1.32.0-alpha.0.1484", "language": "text", "type": "code-example", "context": "Kubernetes z-pages | Enabled using the ComponentFlagz feature gate, the /flagz endpoint shows you the command line arguments that were used to start a component.", "source": "https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_completion_1011"} {"prompt": "Explain this text code:\n\nkube-apiserver statusz\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nStarted: Wed Oct 16 21:03:43 UTC 2024\nUp: 0 hr 00 min 16 sec\nGo version: go1.23.2\nBinary version: 1.32.0-alpha.0.1484+5eeac4f21a491b-dirty\nEmulation version: 
1.32.0-alpha.0.1484", "completion": "This text code Kubernetes z-pages | Enabled using the ComponentFlagz feature gate, the /flagz endpoint shows you the command line arguments that were used to start a component.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes z-pages | Enabled using the ComponentFlagz feature gate, the /flagz endpoint shows you the command line arguments that were used to start a component.", "source": "https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_explain_1012"} {"prompt": "Complete this text code:\n\nkube-apiserver flags\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nadvertise-address=192.168.8.2\ncontention-profiling=false\nenable-priority-and-fairness=true", "completion": "profiling=true\nauthorization-mode=[Node,RBAC]\nauthorization-webhook-cache-authorized-ttl=5m0s\nauthorization-webhook-cache-unauthorized-ttl=30s\nauthorization-webhook-version=v1beta1\ndefault-watch-cache-size=100", "language": "text", "type": "code-example", "context": "Kubernetes z-pages |", "source": "https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_completion_1013"} {"prompt": "Explain this text code:\n\nkube-apiserver flags\nWarning: This endpoint is not meant to be machine parseable, has no formatting compatibility guarantees and is for debugging purposes only.\n\nadvertise-address=192.168.8.2\ncontention-profiling=false\nenable-priority-and-fairness=true\nprofiling=true\nauthorization-mode=[Node,RBAC]\nauthorization-webhook-cache-authorized-ttl=5m0s\nauthorization-webhook-cache-unauthorized-ttl=30s\nauthorization-webhook-version=v1beta1\ndefault-watch-cache-size=100", "completion": "This text code Kubernetes z-pages |", "language": "text", "type": "code-example_explanation", "context": "Kubernetes z-pages |", "source": 
"https://kubernetes.io/docs/reference/instrumentation/zpages/", "id": "k8s_00045_code_explain_1014"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"https://index.docker.io/v1/\": {", "completion": " \"auth\": \"c3R...zE2\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1015"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"https://index.docker.io/v1/\": {\n \"auth\": \"c3R...zE2\"\n }\n }\n}", "completion": "This json code Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "language": "json", "type": "api-response_explanation", "context": "Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1016"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic regcred \\\n --from-file=.dockerconfigjson= \\\n --type=kubernetes.io/dockerconfigjson", "completion": "This bash code Pull an Image from a Private Registry | If you need more control (for example, to set a namespace or a label on the new secret) then you can customise the Secret before storing it. 
Be sure to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | If you need more control (for example, to set a namespace or a label on the new secret) then you can customise the Secret before storing it. Be sure to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1017"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: myregistrykey", "completion": " namespace: awesomeapps\ndata:\n .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==\ntype: kubernetes.io/dockerconfigjson", "language": "yaml", "type": "kubernetes-manifest", "context": "Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1018"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: myregistrykey\n namespace: awesomeapps\ndata:\n .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==\ntype: kubernetes.io/dockerconfigjson", "completion": "This yaml code Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. 
If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1019"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email=", "completion": "This bash code Pull an Image from a Private Registry | where:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | where:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1020"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n ...", "completion": " name: regcred\n ...\ndata:\n .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0=\ntype: kubernetes.io/dockerconfigjson", "language": "yaml", "type": "kubernetes-manifest", "context": "Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1021"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n ...\n name: regcred\n ...\ndata:\n .dockerconfigjson: eyJodHRwczovL2luZGV4L ... 
J0QUl6RTIifX0=\ntype: kubernetes.io/dockerconfigjson", "completion": "This yaml code Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1022"} {"prompt": "Explain this bash code:\n\nkubectl get secret regcred --output=\"jsonpath={.data.\\.dockerconfigjson}\" | base64 --decode", "completion": "This bash code Pull an Image from a Private Registry | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1023"} {"prompt": "Explain this json code:\n\n{\"auths\":{\"your.private.registry.example.com\":{\"username\":\"janedoe\",\"password\":\"xxxxxxxxxxx\",\"email\":\"jdoe@example.com\",\"auth\":\"c3R...zE2\"}}}", "completion": "This json code Pull an Image from a Private Registry | To understand what is in the auth field, convert the base64-encoded data to a readable format:", "language": "json", "type": "api-response_explanation", "context": "Pull an Image from a Private Registry | To understand what is in the auth field, convert the base64-encoded data to a readable format:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1024"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg\nspec:", "completion": " containers:\n - name: private-reg-container\n image: \n imagePullSecrets:\n 
- name: regcred", "language": "yaml", "type": "pod-definition", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1025"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg\nspec:\n containers:\n - name: private-reg-container\n image: \n imagePullSecrets:\n - name: regcred", "completion": "This yaml code Pull an Image from a Private Registry | Download the above file onto your computer:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1026"} {"prompt": "Explain this text code:\n\ncurl -L -o my-private-reg-pod.yaml https://k8s.io/examples/pods/private-reg-pod.yaml", "completion": "This text code Pull an Image from a Private Registry | In file my-private-reg-pod.yaml, replace with the path to an image in a private registry such as:", "language": "text", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | In file my-private-reg-pod.yaml, replace with the path to an image in a private registry such as:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1027"} {"prompt": "Explain this text code:\n\nyour.private.registry.example.com/janedoe/jdoe-private:v1", "completion": "This text code Pull an Image from a Private Registry | To pull the image from the private registry, Kubernetes needs credentials. 
The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named", "language": "text", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | To pull the image from the private registry, Kubernetes needs credentials. The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1028"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-private-reg-pod.yaml\nkubectl get pod private-reg", "completion": "This bash code Pull an Image from a Private Registry | Also, in case the Pod fails to start with the status ImagePullBackOff, view the Pod events:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | Also, in case the Pod fails to start with the status ImagePullBackOff, view the Pod events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1029"} {"prompt": "Complete this text code:\n\nEvents:\n ... Reason ... Message", "completion": " ------ -------\n ... FailedToRetrieveImagePullSecret ... Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed.", "language": "text", "type": "code-example", "context": "Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1030"} {"prompt": "Explain this text code:\n\nEvents:\n ... Reason ... Message\n ------ -------\n ... FailedToRetrieveImagePullSecret ... 
Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed.", "completion": "This text code Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "language": "text", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1031"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"https://index.docker.io/v1/\": {", "completion": " \"auth\": \"c3R...zE2\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1032"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"https://index.docker.io/v1/\": {\n \"auth\": \"c3R...zE2\"\n }\n }\n}", "completion": "This json code Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "language": "json", "type": "api-response_explanation", "context": "Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1033"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic regcred \\\n --from-file=.dockerconfigjson= \\\n --type=kubernetes.io/dockerconfigjson", "completion": "This shell code Pull an Image from a Private Registry | If you need more control (for example, to set a namespace or a label on the new secret) then you can customise the Secret before storing it. Be sure to:", "language": "shell", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | If you need more control (for example, to set a namespace or a label on the new secret) then you can customise the Secret before storing it. Be sure to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1034"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: myregistrykey", "completion": " namespace: awesomeapps\ndata:\n .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==\ntype: kubernetes.io/dockerconfigjson", "language": "yaml", "type": "kubernetes-manifest", "context": "Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. 
If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1035"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: myregistrykey\n namespace: awesomeapps\ndata:\n .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==\ntype: kubernetes.io/dockerconfigjson", "completion": "This yaml code Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. 
If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1036"} {"prompt": "Explain this text code:\n\nSecret \"myregistrykey\" is invalid: data[.dockerconfigjson]: invalid value ...", "completion": "This text code Pull an Image from a Private Registry | Create this Secret, naming it regcred:", "language": "text", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | Create this Secret, naming it regcred:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1037"} {"prompt": "Explain this shell code:\n\nkubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email=", "completion": "This shell code Pull an Image from a Private Registry | where:", "language": "shell", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | where:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1038"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n ...", "completion": " name: regcred\n ...\ndata:\n .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0=\ntype: kubernetes.io/dockerconfigjson", "language": "yaml", "type": "kubernetes-manifest", "context": "Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1039"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n ...\n name: regcred\n ...\ndata:\n .dockerconfigjson: eyJodHRwczovL2luZGV4L ... 
J0QUl6RTIifX0=\ntype: kubernetes.io/dockerconfigjson", "completion": "This yaml code Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1040"} {"prompt": "Explain this shell code:\n\nkubectl get secret regcred --output=\"jsonpath={.data.\\.dockerconfigjson}\" | base64 --decode", "completion": "This shell code Pull an Image from a Private Registry | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1041"} {"prompt": "Explain this json code:\n\n{\"auths\":{\"your.private.registry.example.com\":{\"username\":\"janedoe\",\"password\":\"xxxxxxxxxxx\",\"email\":\"jdoe@example.com\",\"auth\":\"c3R...zE2\"}}}", "completion": "This json code Pull an Image from a Private Registry | To understand what is in the auth field, convert the base64-encoded data to a readable format:", "language": "json", "type": "api-response_explanation", "context": "Pull an Image from a Private Registry | To understand what is in the auth field, convert the base64-encoded data to a readable format:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1042"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg\nspec:", "completion": " containers:\n - name: private-reg-container\n image: \n imagePullSecrets:\n 
- name: regcred", "language": "yaml", "type": "pod-definition", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1043"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg\nspec:\n containers:\n - name: private-reg-container\n image: \n imagePullSecrets:\n - name: regcred", "completion": "This yaml code Pull an Image from a Private Registry | Download the above file onto your computer:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1044"} {"prompt": "Explain this shell code:\n\ncurl -L -o my-private-reg-pod.yaml https://k8s.io/examples/pods/private-reg-pod.yaml", "completion": "This shell code Pull an Image from a Private Registry | In file my-private-reg-pod.yaml, replace with the path to an image in a private registry such as:", "language": "shell", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | In file my-private-reg-pod.yaml, replace with the path to an image in a private registry such as:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1045"} {"prompt": "Explain this text code:\n\nyour.private.registry.example.com/janedoe/jdoe-private:v1", "completion": "This text code Pull an Image from a Private Registry | To pull the image from the private registry, Kubernetes needs credentials. 
The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named", "language": "text", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | To pull the image from the private registry, Kubernetes needs credentials. The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1046"} {"prompt": "Explain this shell code:\n\nkubectl apply -f my-private-reg-pod.yaml\nkubectl get pod private-reg", "completion": "This shell code Pull an Image from a Private Registry | Also, in case the Pod fails to start with the status ImagePullBackOff, view the Pod events:", "language": "shell", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | Also, in case the Pod fails to start with the status ImagePullBackOff, view the Pod events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1047"} {"prompt": "Complete this shell code:\n\nEvents:\n ... Reason ... Message", "completion": " ------ -------\n ... FailedToRetrieveImagePullSecret ... Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed.", "language": "shell", "type": "code-example", "context": "Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1048"} {"prompt": "Explain this shell code:\n\nEvents:\n ... Reason ... Message\n ------ -------\n ... FailedToRetrieveImagePullSecret ... 
Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed.", "completion": "This shell code Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "language": "shell", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1049"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"https://index.docker.io/v1/\": {", "completion": " \"auth\": \"c3R...zE2\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1050"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"https://index.docker.io/v1/\": {\n \"auth\": \"c3R...zE2\"\n }\n }\n}", "completion": "This json code Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "language": "json", "type": "api-response_explanation", "context": "Pull an Image from a Private Registry | A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1051"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic regcred \\\n --from-file=.dockerconfigjson= \\\n --type=kubernetes.io/dockerconfigjson", "completion": "This bash code Pull an Image from a Private Registry | If you need more control (for example, to set a namespace or a label on the new secret) then you can customise the Secret before storing it. Be sure to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | If you need more control (for example, to set a namespace or a label on the new secret) then you can customise the Secret before storing it. Be sure to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1052"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: myregistrykey", "completion": " namespace: awesomeapps\ndata:\n .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==\ntype: kubernetes.io/dockerconfigjson", "language": "yaml", "type": "kubernetes-manifest", "context": "Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. 
If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1053"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: myregistrykey\n namespace: awesomeapps\ndata:\n .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==\ntype: kubernetes.io/dockerconfigjson", "completion": "This yaml code Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pull an Image from a Private Registry | If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid. 
If you get an error message like Secret \"myregistrykey\" is invalid: data[.dockerconfi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1054"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email=", "completion": "This bash code Pull an Image from a Private Registry | where:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | where:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1055"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n ...", "completion": " name: regcred\n ...\ndata:\n .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0=\ntype: kubernetes.io/dockerconfigjson", "language": "yaml", "type": "kubernetes-manifest", "context": "Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1056"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n ...\n name: regcred\n ...\ndata:\n .dockerconfigjson: eyJodHRwczovL2luZGV4L ... 
J0QUl6RTIifX0=\ntype: kubernetes.io/dockerconfigjson", "completion": "This yaml code Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pull an Image from a Private Registry | The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1057"} {"prompt": "Explain this bash code:\n\nkubectl get secret regcred --output=\"jsonpath={.data.\\.dockerconfigjson}\" | base64 --decode", "completion": "This bash code Pull an Image from a Private Registry | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1058"} {"prompt": "Explain this json code:\n\n{\"auths\":{\"your.private.registry.example.com\":{\"username\":\"janedoe\",\"password\":\"xxxxxxxxxxx\",\"email\":\"jdoe@example.com\",\"auth\":\"c3R...zE2\"}}}", "completion": "This json code Pull an Image from a Private Registry | To understand what is in the auth field, convert the base64-encoded data to a readable format:", "language": "json", "type": "api-response_explanation", "context": "Pull an Image from a Private Registry | To understand what is in the auth field, convert the base64-encoded data to a readable format:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1059"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg", "completion": "spec:\n containers:\n - name: private-reg-container\n 
image: \n imagePullSecrets:\n - name: regcred", "language": "yaml", "type": "pod-definition", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1060"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg\nspec:\n containers:\n - name: private-reg-container\n image: \n imagePullSecrets:\n - name: regcred", "completion": "This yaml code Pull an Image from a Private Registry | Download the above file onto your computer:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1061"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg\nspec:", "completion": " containers:\n - name: private-reg-container\n image: \n imagePullSecrets:\n - name: regcred", "language": "yaml", "type": "pod-definition", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1062"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: private-reg\nspec:\n containers:\n - name: private-reg-container\n image: \n imagePullSecrets:\n - name: regcred", "completion": "This yaml code Pull an Image from a Private Registry | Download the above file onto your computer:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pull an Image from a Private Registry | Download the above file onto your computer:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1063"} {"prompt": "Explain this text code:\n\ncurl -L -o my-private-reg-pod.yaml https://k8s.io/examples/pods/private-reg-pod.yaml", "completion": "This text code Pull an Image from a Private Registry | In file my-private-reg-pod.yaml, replace with the path to an image in a private registry such as:", "language": "text", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | In file my-private-reg-pod.yaml, replace with the path to an image in a private registry such as:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1064"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-private-reg-pod.yaml\nkubectl get pod private-reg", "completion": "This bash code Pull an Image from a Private Registry | Also, in case the Pod fails to start with the status ImagePullBackOff, view the Pod events:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pull an Image from a Private Registry | Also, in case the Pod fails to start with the status ImagePullBackOff, view the Pod events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1065"} {"prompt": "Complete this text code:\n\nEvents:\n ... Reason ... Message", "completion": " ------ -------\n ... FailedToRetrieveImagePullSecret ... Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed.", "language": "text", "type": "code-example", "context": "Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. 
You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_completion_1066"} {"prompt": "Explain this text code:\n\nEvents:\n ... Reason ... Message\n ------ -------\n ... FailedToRetrieveImagePullSecret ... Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed.", "completion": "This text code Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "language": "text", "type": "code-example_explanation", "context": "Pull an Image from a Private Registry | A pod can have multiple containers, each container image can be from a different registry. You can use multiple imagePullSecrets with one pod, and each can contain multiple credentials.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/", "id": "k8s_00046_code_explain_1067"} {"prompt": "Explain this text code:\n\nReason: Terminated\nMessage: Pod was terminated in response to imminent node shutdown.", "completion": "This text code Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "language": "text", "type": "code-example_explanation", "context": "Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "source": "https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_explain_1068"} {"prompt": "Complete this text 
code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_completion_1069"} {"prompt": "Explain this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This text code Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "text", "type": "code-example_explanation", "context": "Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_explain_1070"} {"prompt": "Explain this text code:\n\nReason: Terminated\nMessage: Pod was terminated in response to imminent node shutdown.", "completion": "This text code Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", 
"language": "text", "type": "code-example_explanation", "context": "Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "source": "https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_explain_1071"} {"prompt": "Complete this yaml code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_completion_1072"} {"prompt": "Explain this yaml code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This yaml code Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": 
"https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_explain_1073"} {"prompt": "Complete this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_completion_1074"} {"prompt": "Explain this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This text code Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "text", "type": "code-example_explanation", "context": "Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/", "id": "k8s_00047_code_explain_1075"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n 
\"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }", "completion": " },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "language": "json", "type": "api-response", "context": "The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis// (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_completion_1076"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "completion": "This json code The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis// (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "language": "json", "type": "api-response_explanation", "context": "The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis// (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1077"} {"prompt": "Complete this json code:\n\n{\n 
\"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "json", "type": "api-response", "context": "The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_completion_1078"} {"prompt": "Explain this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This json code The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "json", "type": "api-response_explanation", "context": "The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1079"} {"prompt": "Explain this text code:\n\nAccept: application/json;v=v2;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList", "completion": "This text code The Kubernetes API | Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an unaggregated discovery document.", "language": "text", "type": "code-example_explanation", "context": "The Kubernetes API | Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an unaggregated discovery document.", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1080"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }", "completion": " },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "language": "json", "type": "api-response", "context": "The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis// (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_completion_1081"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": 
\"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "completion": "This json code The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis// (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "language": "json", "type": "api-response_explanation", "context": "The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis// (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1082"} {"prompt": "Explain this text code:\n\napplication/com.github.proto-openapi.spec.v2@v1.0+protobuf", "completion": "This text code The Kubernetes API | Kubernetes supports publishing a description of its APIs as OpenAPI v3.", "language": "text", "type": "code-example_explanation", "context": "The Kubernetes API | Kubernetes supports publishing a description of its APIs as OpenAPI v3.", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1083"} {"prompt": "Complete this yaml code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": 
\"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_completion_1084"} {"prompt": "Explain this yaml code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This yaml code The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1085"} {"prompt": "Explain this text code:\n\napplication/com.github.proto-openapi.spec.v3@v1.0+protobuf", "completion": "This text code The Kubernetes API | A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.", "language": "text", "type": "code-example_explanation", "context": "The Kubernetes API | A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1086"} {"prompt": "Complete this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "json", "type": "api-response", "context": "The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_completion_1087"} {"prompt": "Explain this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This json code The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "json", "type": "api-response_explanation", "context": "The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/kubernetes-api/", "id": "k8s_00048_code_explain_1088"} {"prompt": "Explain this bash code:\n\nkubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0 --port=8080", "completion": "This bash code Use an HTTP Proxy to Access the Kubernetes API | This command creates a Deployment named hello-app from the gcr.io/google-samples/hello-app:2.0 image, exposing container port 8080:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use an HTTP Proxy to Access the Kubernetes API | This command creates a Deployment named hello-app from the gcr.io/google-samples/hello-app:2.0 image, exposing container port 8080:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1089"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.2.15:8443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Use an HTTP Proxy to Access the Kubernetes API | Get a list of pods:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_completion_1090"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.2.15:8443\"\n }\n ]\n}", "completion": "This json code Use an HTTP Proxy to Access the Kubernetes API | Get a list of pods:", "language": "json", "type": "api-response_explanation", "context": "Use an HTTP Proxy to Access the Kubernetes API | Get a list of pods:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1091"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8080/api/v1/namespaces/default/pods", "completion": "This text 
code Use an HTTP Proxy to Access the Kubernetes API | The output should look similar to this:", "language": "text", "type": "code-example_explanation", "context": "Use an HTTP Proxy to Access the Kubernetes API | The output should look similar to this:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1092"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"33074\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes-bootcamp-2321272333-ix8pt\",", "completion": " \"generateName\": \"kubernetes-bootcamp-2321272333-\",\n \"namespace\": \"default\",\n \"uid\": \"ba21457c-6b1d-11e6-85f7-1ef9f1dab92b\",\n \"resourceVersion\": \"33003\",\n \"creationTimestamp\": \"2016-08-25T23:43:30Z\",\n \"labels\": {\n \"pod-template-hash\": \"2321272333\",\n \"run\": \"kubernetes-bootcamp\"\n },\n ...\n}", "language": "json", "type": "api-response", "context": "Use an HTTP Proxy to Access the Kubernetes API | Learn more about kubectl proxy.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_completion_1093"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"33074\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes-bootcamp-2321272333-ix8pt\",\n \"generateName\": \"kubernetes-bootcamp-2321272333-\",\n \"namespace\": \"default\",\n \"uid\": \"ba21457c-6b1d-11e6-85f7-1ef9f1dab92b\",\n \"resourceVersion\": \"33003\",\n \"creationTimestamp\": \"2016-08-25T23:43:30Z\",\n \"labels\": {\n \"pod-template-hash\": \"2321272333\",\n \"run\": \"kubernetes-bootcamp\"\n },\n ...\n}", "completion": "This json code Use an HTTP Proxy to Access the Kubernetes API | Learn more about kubectl proxy.", "language": "json", "type": "api-response_explanation", "context": "Use an HTTP 
Proxy to Access the Kubernetes API | Learn more about kubectl proxy.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1094"} {"prompt": "Explain this shell code:\n\nkubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0 --port=8080", "completion": "This shell code Use an HTTP Proxy to Access the Kubernetes API | This command creates a Deployment named hello-app from the gcr.io/google-samples/hello-app:2.0 image, exposing container port 8080:", "language": "shell", "type": "code-example_explanation", "context": "Use an HTTP Proxy to Access the Kubernetes API | This command creates a Deployment named hello-app from the gcr.io/google-samples/hello-app:2.0 image, exposing container port 8080:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1095"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.2.15:8443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Use an HTTP Proxy to Access the Kubernetes API | Get a list of pods:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_completion_1096"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.2.15:8443\"\n }\n ]\n}", "completion": "This json code Use an HTTP Proxy to Access the Kubernetes API | Get a list of pods:", "language": "json", "type": "api-response_explanation", "context": "Use an HTTP Proxy to Access the Kubernetes API | Get a list of pods:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1097"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8080/api/v1/namespaces/default/pods", "completion": "This text code Use an 
HTTP Proxy to Access the Kubernetes API | The output should look similar to this:", "language": "text", "type": "code-example_explanation", "context": "Use an HTTP Proxy to Access the Kubernetes API | The output should look similar to this:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1098"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"33074\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes-bootcamp-2321272333-ix8pt\",", "completion": " \"generateName\": \"kubernetes-bootcamp-2321272333-\",\n \"namespace\": \"default\",\n \"uid\": \"ba21457c-6b1d-11e6-85f7-1ef9f1dab92b\",\n \"resourceVersion\": \"33003\",\n \"creationTimestamp\": \"2016-08-25T23:43:30Z\",\n \"labels\": {\n \"pod-template-hash\": \"2321272333\",\n \"run\": \"kubernetes-bootcamp\"\n },\n ...\n}", "language": "json", "type": "api-response", "context": "Use an HTTP Proxy to Access the Kubernetes API | Learn more about kubectl proxy.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_completion_1099"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"33074\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes-bootcamp-2321272333-ix8pt\",\n \"generateName\": \"kubernetes-bootcamp-2321272333-\",\n \"namespace\": \"default\",\n \"uid\": \"ba21457c-6b1d-11e6-85f7-1ef9f1dab92b\",\n \"resourceVersion\": \"33003\",\n \"creationTimestamp\": \"2016-08-25T23:43:30Z\",\n \"labels\": {\n \"pod-template-hash\": \"2321272333\",\n \"run\": \"kubernetes-bootcamp\"\n },\n ...\n}", "completion": "This json code Use an HTTP Proxy to Access the Kubernetes API | Learn more about kubectl proxy.", "language": "json", "type": "api-response_explanation", "context": "Use an HTTP Proxy to 
Access the Kubernetes API | Learn more about kubectl proxy.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1100"} {"prompt": "Explain this bash code:\n\nkubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0 --port=8080", "completion": "This bash code Use an HTTP Proxy to Access the Kubernetes API | This command creates a Deployment named hello-app from the gcr.io/google-samples/hello-app:2.0 image, exposing container port 8080:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use an HTTP Proxy to Access the Kubernetes API | This command creates a Deployment named hello-app from the gcr.io/google-samples/hello-app:2.0 image, exposing container port 8080:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/", "id": "k8s_00049_code_explain_1101"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic db-user-pass \\\n --from-literal=username=admin \\\n --from-literal=password='S!B\\*d$zDsb='", "completion": "This bash code Managing Secrets using kubectl | You must use single quotes '' to escape special characters such as $, \\, *, =, and ! in your strings. If you don't, your shell will interpret these characters.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | You must use single quotes '' to escape special characters such as $, \\, *, =, and ! in your strings. If you don't, your shell will interpret these characters.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1102"} {"prompt": "Explain this text code:\n\necho -n 'admin' > ./username.txt\necho -n 'S!B\\*d$zDsb=' > ./password.txt", "completion": "This text code Managing Secrets using kubectl | The -n flag ensures that the generated files do not have an extra newline character at the end of the text. 
This is important because when kubectl reads a file and encodes the content into a base64 string", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | The -n flag ensures that the generated files do not have an extra newline character at the end of the text. This is important because when kubectl reads a file and encodes the content into a base64 string", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1103"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic db-user-pass \\\n --from-file=./username.txt \\\n --from-file=./password.txt", "completion": "This bash code Managing Secrets using kubectl | The default key name is the file name. You can optionally set the key name using --from-file=[key=]source. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | The default key name is the file name. You can optionally set the key name using --from-file=[key=]source. 
For example:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1104"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic db-user-pass \\\n --from-file=username=./username.txt \\\n --from-file=password=./password.txt", "completion": "This bash code Managing Secrets using kubectl | With either method, the output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | With either method, the output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1105"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\ndb-user-pass Opaque 2 51s", "completion": "This text code Managing Secrets using kubectl | View the details of the Secret:", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | View the details of the Secret:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1106"} {"prompt": "Complete this text code:\n\nName: db-user-pass\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n", "completion": "Type: Opaque\n\nData\n====\npassword: 12 bytes\nusername: 5 bytes", "language": "text", "type": "code-example", "context": "Managing Secrets using kubectl | The commands kubectl get and kubectl describe avoid showing the contents of a Secret by default. 
This is to protect the Secret from being exposed accidentally, or from being stored in a terminal log.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_completion_1107"} {"prompt": "Explain this text code:\n\nName: db-user-pass\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\npassword: 12 bytes\nusername: 5 bytes", "completion": "This text code Managing Secrets using kubectl | The commands kubectl get and kubectl describe avoid showing the contents of a Secret by default. This is to protect the Secret from being exposed accidentally, or from being stored in a terminal log.", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | The commands kubectl get and kubectl describe avoid showing the contents of a Secret by default. This is to protect the Secret from being exposed accidentally, or from being stored in a terminal log.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1108"} {"prompt": "Explain this bash code:\n\nkubectl get secret db-user-pass -o jsonpath='{.data}'", "completion": "This bash code Managing Secrets using kubectl | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1109"} {"prompt": "Explain this json code:\n\n{ \"password\": \"UyFCXCpkJHpEc2I9\", \"username\": \"YWRtaW4=\" }", "completion": "This json code Managing Secrets using kubectl | Decode the password data:", "language": "json", "type": "api-response_explanation", "context": "Managing Secrets using kubectl | Decode the password data:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": 
"k8s_00050_code_explain_1110"} {"prompt": "Explain this bash code:\n\nkubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode", "completion": "This bash code Managing Secrets using kubectl | You can edit an existing Secret object unless it is immutable. To edit a Secret, run the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | You can edit an existing Secret object unless it is immutable. To edit a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1111"} {"prompt": "Complete this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. If an error occurs while saving this file, it will be\n# reopened with the relevant failures.\n#\napiVersion: v1\ndata:\n password: UyFCXCpkJHpEc2I9\n username: YWRtaW4=", "completion": "kind: Secret\nmetadata:\n creationTimestamp: \"2022-06-28T17:44:13Z\"\n name: db-user-pass\n namespace: default\n resourceVersion: \"12708504\"\n uid: 91becd59-78fa-4c85-823f-6d44436242ac\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using kubectl | To delete a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_completion_1112"} {"prompt": "Explain this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. 
If an error occurs while saving this file, it will be\n# reopened with the relevant failures.\n#\napiVersion: v1\ndata:\n password: UyFCXCpkJHpEc2I9\n username: YWRtaW4=\nkind: Secret\nmetadata:\n creationTimestamp: \"2022-06-28T17:44:13Z\"\n name: db-user-pass\n namespace: default\n resourceVersion: \"12708504\"\n uid: 91becd59-78fa-4c85-823f-6d44436242ac\ntype: Opaque", "completion": "This yaml code Managing Secrets using kubectl | To delete a Secret, run the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using kubectl | To delete a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1113"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic db-user-pass \\\n --from-literal=username=admin \\\n --from-literal=password='S!B\\*d$zDsb='", "completion": "This shell code Managing Secrets using kubectl | You must use single quotes '' to escape special characters such as $, \\, *, =, and ! in your strings. If you don't, your shell will interpret these characters.", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | You must use single quotes '' to escape special characters such as $, \\, *, =, and ! in your strings. If you don't, your shell will interpret these characters.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1114"} {"prompt": "Explain this shell code:\n\necho -n 'admin' > ./username.txt\necho -n 'S!B\\*d$zDsb=' > ./password.txt", "completion": "This shell code Managing Secrets using kubectl | The -n flag ensures that the generated files do not have an extra newline character at the end of the text. 
This is important because when kubectl reads a file and encodes the content into a base64 string", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | The -n flag ensures that the generated files do not have an extra newline character at the end of the text. This is important because when kubectl reads a file and encodes the content into a base64 string", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1115"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic db-user-pass \\\n --from-file=./username.txt \\\n --from-file=./password.txt", "completion": "This shell code Managing Secrets using kubectl | The default key name is the file name. You can optionally set the key name using --from-file=[key=]source. For example:", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | The default key name is the file name. You can optionally set the key name using --from-file=[key=]source.
For example:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1116"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic db-user-pass \\\n --from-file=username=./username.txt \\\n --from-file=password=./password.txt", "completion": "This shell code Managing Secrets using kubectl | With either method, the output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | With either method, the output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1117"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\ndb-user-pass Opaque 2 51s", "completion": "This text code Managing Secrets using kubectl | View the details of the Secret:", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | View the details of the Secret:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1118"} {"prompt": "Complete this text code:\n\nName: db-user-pass\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n", "completion": "Type: Opaque\n\nData\n====\npassword: 12 bytes\nusername: 5 bytes", "language": "text", "type": "code-example", "context": "Managing Secrets using kubectl | The commands kubectl get and kubectl describe avoid showing the contents of a Secret by default.
This is to protect the Secret from being exposed accidentally, or from being stored in a terminal log.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_completion_1119"} {"prompt": "Explain this text code:\n\nName: db-user-pass\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nType: Opaque\n\nData\n====\npassword: 12 bytes\nusername: 5 bytes", "completion": "This text code Managing Secrets using kubectl | The commands kubectl get and kubectl describe avoid showing the contents of a Secret by default. This is to protect the Secret from being exposed accidentally, or from being stored in a terminal log.", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | The commands kubectl get and kubectl describe avoid showing the contents of a Secret by default. This is to protect the Secret from being exposed accidentally, or from being stored in a terminal log.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1120"} {"prompt": "Explain this shell code:\n\nkubectl get secret db-user-pass -o jsonpath='{.data}'", "completion": "This shell code Managing Secrets using kubectl | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1121"} {"prompt": "Explain this json code:\n\n{ \"password\": \"UyFCXCpkJHpEc2I9\", \"username\": \"YWRtaW4=\" }", "completion": "This json code Managing Secrets using kubectl | Decode the password data:", "language": "json", "type": "api-response_explanation", "context": "Managing Secrets using kubectl | Decode the password data:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": 
"k8s_00050_code_explain_1122"} {"prompt": "Explain this shell code:\n\nkubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode", "completion": "This shell code Managing Secrets using kubectl | You can edit an existing Secret object unless it is immutable. To edit a Secret, run the following command:", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | You can edit an existing Secret object unless it is immutable. To edit a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1123"} {"prompt": "Complete this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. If an error occurs while saving this file, it will be\n# reopened with the relevant failures.\n#\napiVersion: v1\ndata:\n password: UyFCXCpkJHpEc2I9\n username: YWRtaW4=", "completion": "kind: Secret\nmetadata:\n creationTimestamp: \"2022-06-28T17:44:13Z\"\n name: db-user-pass\n namespace: default\n resourceVersion: \"12708504\"\n uid: 91becd59-78fa-4c85-823f-6d44436242ac\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using kubectl | To delete a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_completion_1124"} {"prompt": "Explain this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. 
If an error occurs while saving this file, it will be\n# reopened with the relevant failures.\n#\napiVersion: v1\ndata:\n password: UyFCXCpkJHpEc2I9\n username: YWRtaW4=\nkind: Secret\nmetadata:\n creationTimestamp: \"2022-06-28T17:44:13Z\"\n name: db-user-pass\n namespace: default\n resourceVersion: \"12708504\"\n uid: 91becd59-78fa-4c85-823f-6d44436242ac\ntype: Opaque", "completion": "This yaml code Managing Secrets using kubectl | To delete a Secret, run the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using kubectl | To delete a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1125"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic db-user-pass \\\n --from-literal=username=admin \\\n --from-literal=password='S!B\\*d$zDsb='", "completion": "This bash code Managing Secrets using kubectl | You must use single quotes '' to escape special characters such as $, \\, *, =, and ! in your strings. If you don't, your shell will interpret these characters.", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | You must use single quotes '' to escape special characters such as $, \\, *, =, and ! in your strings. If you don't, your shell will interpret these characters.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1126"} {"prompt": "Explain this text code:\n\necho -n 'admin' > ./username.txt\necho -n 'S!B\\*d$zDsb=' > ./password.txt", "completion": "This text code Managing Secrets using kubectl | The -n flag ensures that the generated files do not have an extra newline character at the end of the text. 
This is important because when kubectl reads a file and encodes the content into a base64 string", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using kubectl | The -n flag ensures that the generated files do not have an extra newline character at the end of the text. This is important because when kubectl reads a file and encodes the content into a base64 string", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1127"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic db-user-pass \\\n --from-file=./username.txt \\\n --from-file=./password.txt", "completion": "This bash code Managing Secrets using kubectl | The default key name is the file name. You can optionally set the key name using --from-file=[key=]source. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | The default key name is the file name. You can optionally set the key name using --from-file=[key=]source.
For example:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1128"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic db-user-pass \\\n --from-file=username=./username.txt \\\n --from-file=password=./password.txt", "completion": "This bash code Managing Secrets using kubectl | With either method, the output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | With either method, the output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1129"} {"prompt": "Explain this bash code:\n\nkubectl get secret db-user-pass -o jsonpath='{.data}'", "completion": "This bash code Managing Secrets using kubectl | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1130"} {"prompt": "Explain this json code:\n\n{ \"password\": \"UyFCXCpkJHpEc2I9\", \"username\": \"YWRtaW4=\" }", "completion": "This json code Managing Secrets using kubectl | Decode the password data:", "language": "json", "type": "api-response_explanation", "context": "Managing Secrets using kubectl | Decode the password data:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1131"} {"prompt": "Explain this bash code:\n\nkubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode", "completion": "This bash code Managing Secrets using kubectl | You can edit an existing Secret object unless it is immutable. 
To edit a Secret, run the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using kubectl | You can edit an existing Secret object unless it is immutable. To edit a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1132"} {"prompt": "Complete this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. If an error occurs while saving this file, it will be\n# reopened with the relevant failures.\n#\napiVersion: v1\ndata:\n password: UyFCXCpkJHpEc2I9\n username: YWRtaW4=", "completion": "kind: Secret\nmetadata:\n creationTimestamp: \"2022-06-28T17:44:13Z\"\n name: db-user-pass\n namespace: default\n resourceVersion: \"12708504\"\n uid: 91becd59-78fa-4c85-823f-6d44436242ac\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using kubectl | To delete a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_completion_1133"} {"prompt": "Explain this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. 
If an error occurs while saving this file, it will be\n# reopened with the relevant failures.\n#\napiVersion: v1\ndata:\n password: UyFCXCpkJHpEc2I9\n username: YWRtaW4=\nkind: Secret\nmetadata:\n creationTimestamp: \"2022-06-28T17:44:13Z\"\n name: db-user-pass\n namespace: default\n resourceVersion: \"12708504\"\n uid: 91becd59-78fa-4c85-823f-6d44436242ac\ntype: Opaque", "completion": "This yaml code Managing Secrets using kubectl | To delete a Secret, run the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using kubectl | To delete a Secret, run the following command:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/", "id": "k8s_00050_code_explain_1134"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods", "completion": " - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": 
"Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1135"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1136"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml", "completion": 
"This bash code Perform a Rolling Update on a DaemonSet | Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1137"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml", "completion": "This bash code Perform a Rolling Update on a DaemonSet | Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1138"} {"prompt": "Explain this bash code:\n\nkubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{\"\\n\"}}' -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1139"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o 
go-template='{{.spec.updateStrategy.type}}{{\"\\n\"}}'", "completion": "This bash code Perform a Rolling Update on a DaemonSet | The output from both commands should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | The output from both commands should be:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1140"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1141"} {"prompt": "Explain this 
yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1142"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml", "completion": "This bash code Perform a Rolling Update on a DaemonSet | If you update DaemonSets using imperative commands, use kubectl edit :", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a 
Rolling Update on a DaemonSet | If you update DaemonSets using imperative commands, use kubectl edit :", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1143"} {"prompt": "Explain this bash code:\n\nkubectl edit ds/fluentd-elasticsearch -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | If you only need to update the container image in the DaemonSet template, i.e. .spec.template.spec.containers[*].image, use kubectl set image:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you only need to update the container image in the DaemonSet template, i.e. .spec.template.spec.containers[*].image, use kubectl set image:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1144"} {"prompt": "Explain this bash code:\n\nkubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | Finally, watch the rollout status of the latest DaemonSet rolling update:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Finally, watch the rollout status of the latest DaemonSet rolling update:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1145"} {"prompt": "Explain this bash code:\n\nkubectl rollout status ds/fluentd-elasticsearch -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | When the rollout is complete, the output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | When the rollout is complete, the output is similar to this:", "source": 
"https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1146"} {"prompt": "Explain this text code:\n\ndaemonset \"fluentd-elasticsearch\" successfully rolled out", "completion": "This text code Perform a Rolling Update on a DaemonSet | Sometimes, a DaemonSet rolling update may be stuck. Here are some possible causes:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | Sometimes, a DaemonSet rolling update may be stuck. Here are some possible causes:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1147"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1148"} {"prompt": "Explain this bash code:\n\nkubectl delete ds fluentd-elasticsearch -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet |", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet |", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1149"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: 
RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods", "completion": " - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": "Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1150"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: 
quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1151"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml", "completion": "This shell code Perform a Rolling Update on a DaemonSet | Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1152"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml", "completion": "This shell code Perform a Rolling Update on a DaemonSet | Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:", "source": 
"https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1153"} {"prompt": "Explain this shell code:\n\nkubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{\"\\n\"}}' -n kube-system", "completion": "This shell code Perform a Rolling Update on a DaemonSet | If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1154"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{\"\\n\"}}'", "completion": "This shell code Perform a Rolling Update on a DaemonSet | The output from both commands should be:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | The output from both commands should be:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1155"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n 
effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1156"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n 
volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1157"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml", "completion": "This shell code Perform a Rolling Update on a DaemonSet | If you update DaemonSets using imperative commands, use kubectl edit:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using imperative commands, use kubectl edit:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1158"} {"prompt": "Explain this shell code:\n\nkubectl edit ds/fluentd-elasticsearch -n kube-system", "completion": "This shell code Perform a Rolling Update on a DaemonSet | If you only need to update the container image in the DaemonSet template, i.e. .spec.template.spec.containers[*].image, use kubectl set image:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you only need to update the container image in the DaemonSet template, i.e. 
.spec.template.spec.containers[*].image, use kubectl set image:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1159"} {"prompt": "Explain this shell code:\n\nkubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system", "completion": "This shell code Perform a Rolling Update on a DaemonSet | Finally, watch the rollout status of the latest DaemonSet rolling update:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | Finally, watch the rollout status of the latest DaemonSet rolling update:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1160"} {"prompt": "Explain this shell code:\n\nkubectl rollout status ds/fluentd-elasticsearch -n kube-system", "completion": "This shell code Perform a Rolling Update on a DaemonSet | When the rollout is complete, the output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | When the rollout is complete, the output is similar to this:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1161"} {"prompt": "Explain this shell code:\n\ndaemonset \"fluentd-elasticsearch\" successfully rolled out", "completion": "This shell code Perform a Rolling Update on a DaemonSet | Sometimes, a DaemonSet rolling update may be stuck. Here are some possible causes:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | Sometimes, a DaemonSet rolling update may be stuck. 
Here are some possible causes:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1162"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system", "completion": "This shell code Perform a Rolling Update on a DaemonSet | Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1163"} {"prompt": "Explain this shell code:\n\nkubectl delete ds fluentd-elasticsearch -n kube-system", "completion": "This shell code Perform a Rolling Update on a DaemonSet |", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet |", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1164"} {"prompt": "Complete this yaml code:\n\ncontrollers/fluentd-daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes", "completion": " # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: 
fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": "Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1165"} {"prompt": "Explain this yaml code:\n\ncontrollers/fluentd-daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the 
DaemonSet manifest, create the DaemonSet:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1166"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods", "completion": " - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": "Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1167"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: 
fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1168"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml", "completion": "This bash code Perform a Rolling Update on a DaemonSet | Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Alternatively, use kubectl apply to create the same DaemonSet if you plan to update the DaemonSet with kubectl apply.", 
"source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1169"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml", "completion": "This bash code Perform a Rolling Update on a DaemonSet | Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1170"} {"prompt": "Explain this bash code:\n\nkubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{\"\\n\"}}' -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1171"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{\"\\n\"}}'", "completion": "This bash code Perform a Rolling Update on a DaemonSet | The output from both commands should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | The output from both commands should be:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1172"} {"prompt": 
"Complete this yaml code:\n\ncontrollers/fluentd-daemonset-update.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists", "completion": " effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1173"} {"prompt": "Explain this yaml code:\n\ncontrollers/fluentd-daemonset-update.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n 
tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1174"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: 
NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: /var/lib/docker/containers", "language": "yaml", "type": "kubernetes-manifest", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_completion_1175"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: varlibdockercontainers\n mountPath: /var/lib/docker/containers\n readOnly: true\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log\n - name: varlibdockercontainers\n hostPath:\n path: 
/var/lib/docker/containers", "completion": "This yaml code Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using configuration files, use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1176"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml", "completion": "This bash code Perform a Rolling Update on a DaemonSet | If you update DaemonSets using imperative commands, use kubectl edit:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you update DaemonSets using imperative commands, use kubectl edit:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1177"} {"prompt": "Explain this bash code:\n\nkubectl edit ds/fluentd-elasticsearch -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | If you only need to update the container image in the DaemonSet template, i.e. .spec.template.spec.containers[*].image, use kubectl set image:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | If you only need to update the container image in the DaemonSet template, i.e. 
.spec.template.spec.containers[*].image, use kubectl set image:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1178"} {"prompt": "Explain this bash code:\n\nkubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | Finally, watch the rollout status of the latest DaemonSet rolling update:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Finally, watch the rollout status of the latest DaemonSet rolling update:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1179"} {"prompt": "Explain this bash code:\n\nkubectl rollout status ds/fluentd-elasticsearch -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | When the rollout is complete, the output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | When the rollout is complete, the output is similar to this:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1180"} {"prompt": "Explain this text code:\n\ndaemonset \"fluentd-elasticsearch\" successfully rolled out", "completion": "This text code Perform a Rolling Update on a DaemonSet | Sometimes, a DaemonSet rolling update may be stuck. Here are some possible causes:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rolling Update on a DaemonSet | Sometimes, a DaemonSet rolling update may be stuck. 
Here are some possible causes:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1181"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet | Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet | Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1182"} {"prompt": "Explain this bash code:\n\nkubectl delete ds fluentd-elasticsearch -n kube-system", "completion": "This bash code Perform a Rolling Update on a DaemonSet |", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rolling Update on a DaemonSet |", "source": "https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/", "id": "k8s_00052_code_explain_1183"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes/minikube/proxy/stats/summary\"", "completion": "This bash code Node metrics data | Here is the same API call using curl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Node metrics data | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/reference/instrumentation/node-metrics/", "id": "k8s_00054_code_explain_1184"} {"prompt": "Explain this text code:\n\n# You need to run \"kubectl proxy\" first\n# Change 8080 to the port that \"kubectl proxy\" assigns\ncurl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary", "completion": "This text code Node metrics data | By default, Kubernetes fetches node summary metrics data using an embedded cAdvisor that runs within 
the kubelet. If you enable the PodAndContainerStatsFromCRI feature gate in your cluster, and you us", "language": "text", "type": "code-example_explanation", "context": "Node metrics data | By default, Kubernetes fetches node summary metrics data using an embedded cAdvisor that runs within the kubelet. If you enable the PodAndContainerStatsFromCRI feature gate in your cluster, and you us", "source": "https://kubernetes.io/docs/reference/instrumentation/node-metrics/", "id": "k8s_00054_code_explain_1185"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/api/v1/nodes/minikube/proxy/stats/summary\"", "completion": "This shell code Node metrics data | Here is the same API call using curl:", "language": "shell", "type": "code-example_explanation", "context": "Node metrics data | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/reference/instrumentation/node-metrics/", "id": "k8s_00054_code_explain_1186"} {"prompt": "Explain this shell code:\n\n# You need to run \"kubectl proxy\" first\n# Change 8080 to the port that \"kubectl proxy\" assigns\ncurl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary", "completion": "This shell code Node metrics data | By default, Kubernetes fetches node summary metrics data using an embedded cAdvisor that runs within the kubelet. If you enable the PodAndContainerStatsFromCRI feature gate in your cluster, and you us", "language": "shell", "type": "code-example_explanation", "context": "Node metrics data | By default, Kubernetes fetches node summary metrics data using an embedded cAdvisor that runs within the kubelet. 
If you enable the PodAndContainerStatsFromCRI feature gate in your cluster, and you us", "source": "https://kubernetes.io/docs/reference/instrumentation/node-metrics/", "id": "k8s_00054_code_explain_1187"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes/minikube/proxy/stats/summary\"", "completion": "This bash code Node metrics data | Here is the same API call using curl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Node metrics data | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/reference/instrumentation/node-metrics/", "id": "k8s_00054_code_explain_1188"} {"prompt": "Explain this text code:\n\n# You need to run \"kubectl proxy\" first\n# Change 8080 to the port that \"kubectl proxy\" assigns\ncurl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary", "completion": "This text code Node metrics data | By default, Kubernetes fetches node summary metrics data using an embedded cAdvisor that runs within the kubelet. If you enable the PodAndContainerStatsFromCRI feature gate in your cluster, and you us", "language": "text", "type": "code-example_explanation", "context": "Node metrics data | By default, Kubernetes fetches node summary metrics data using an embedded cAdvisor that runs within the kubelet. If you enable the PodAndContainerStatsFromCRI feature gate in your cluster, and you us", "source": "https://kubernetes.io/docs/reference/instrumentation/node-metrics/", "id": "k8s_00054_code_explain_1189"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingWebhookConfiguration | Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "ValidatingWebhookConfiguration | Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/", "id": "k8s_00055_code_explain_1190"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingWebhookConfiguration | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "ValidatingWebhookConfiguration | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/", "id": "k8s_00055_code_explain_1191"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1192"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1193"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1194"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code 1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1195"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1196"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1197"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. 
You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1198"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code 1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1199"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1200"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code 1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1201"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1202"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code 1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1203"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "1 - Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1204"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code 1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1205"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1206"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This text code 1 - Service | sets the networking.gke.io/load-balancer-type annotation to \"Internal\" so that GKE provisions an internal load balancer for the Service", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": 
"k8s_00056_code_explain_1207"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1208"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1209"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1210"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service 
annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1211"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1212"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This text code 1 - Service | sets the service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type annotation to \"private\" so that IBM Cloud provisions the load balancer with a private IP", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1213"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1214"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This text code 1 - Service | sets the service.beta.kubernetes.io/openstack-internal-load-balancer annotation to \"true\" so that OpenStack provisions an internal load balancer for the Service", 
"language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1215"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1216"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This text code 1 - Service | sets the service.beta.kubernetes.io/cce-load-balancer-internal-vpc annotation to \"true\" so that Baidu Cloud CCE provisions an internal VPC load balancer", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1217"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This text code 1 - Service | sets the service.kubernetes.io/qcloud-loadbalancer-internal-subnetid annotation so that Tencent Cloud provisions an internal load balancer in the specified subnet", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1218"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1219"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example", "context": "1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1220"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This text code 1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "language": "text", "type": "code-example_explanation", "context": "1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. 
You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1221"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1222"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code 1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1223"} {"prompt": "Complete this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "text", "type": 
"code-example", "context": "1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1224"} {"prompt": "Explain this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This text code 1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "text", "type": "code-example_explanation", "context": "1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1225"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "1 - Service | Service is a top-level resource in the Kubernetes REST API. 
You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1226"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code 1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1227"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1228"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1229"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1230"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 2 - Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1231"} {"prompt": "Complete this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:", "completion": " Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: 
k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: \nEvents: ", "language": "text", "type": "code-example", "context": "2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1232"} {"prompt": "Explain this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:\n Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: \nEvents: ", "completion": "This text code 2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. 
There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1233"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1234"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1235"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1236"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1237"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # 
\"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1238"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code 2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1239"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1240"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code 2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1241"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1242"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1243"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1244"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1245"} {"prompt": "Explain this text code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\ntest-ingress external-lb * 203.0.113.123 80 59s", "completion": "This text code 2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1246"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: 
Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1247"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1248"} {"prompt": "Complete this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1249"} {"prompt": "Explain this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "completion": "This text code 2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. When it has done so, you can see the address", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1250"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1251"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1252"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1253"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1254"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1255"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code 2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1256"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1257"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1258"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1259"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "completion": "This text code 2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | This pops up an editor with the existing configuration in YAML format. 
Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1260"} {"prompt": "Complete this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "text", "type": "code-example", "context": "2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1261"} {"prompt": "Explain this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This text code 2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1262"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com", "completion": " /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 
(10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1263"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "completion": "This text code 2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1264"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. 
Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1265"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code 4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1266"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1267"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code 4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1268"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1269"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n 
parentRefs:\n - name: example-gateway\n hostnames:\n - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code 4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1270"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1271"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code 4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on 
port 50051 from the same namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1272"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1273"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code 4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. 
Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1274"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http", "completion": " protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1276"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1277"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1278"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1279"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1280"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1281"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "text", "type": "code-example_explanation", 
"context": "6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1282"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1283"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1284"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1285"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1286"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1287"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1288"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1289"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1290"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1291"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1292"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 
10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1293"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1294"} {"prompt": "Explain this bash code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This bash code 6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. 
For example:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1295"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1296"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code 6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1297"} {"prompt": "Complete this text code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | a hostNetwork pod is selected by a podSelector or 
namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1298"} {"prompt": "Explain this text code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1299"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1300"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1301"} {"prompt": "Explain this text code:\n\nnameserver 10.32.0.10\nsearch .svc.cluster.local svc.cluster.local cluster.local\noptions ndots:5", "completion": "This text code 7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1302"} {"prompt": "Explain this text code:\n\n..pod.", "completion": "This text code 7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1303"} {"prompt": "Explain this text code:\n\n...svc.", "completion": "This text code 7 - DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and 
the domain name for your cluster is cluster.local, then the Pod would have thi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1304"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1305"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: 
busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1306"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28", "completion": " command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1307"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code 7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS 
settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1308"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"", "completion": " dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1309"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code 7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1310"} {"prompt": "Explain this text code:\n\nnameserver 192.0.2.1\nsearch ns1.svc.cluster-domain.example my.dns.search.suffix\noptions ndots:2 edns0", "completion": "This text code 7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be 
set up like this:", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1311"} {"prompt": "Explain this bash code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This bash code 7 - DNS for Services and Pods | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "7 - DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1312"} {"prompt": "Explain this text code:\n\nnameserver 2001:db8:30::a\nsearch default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example\noptions ndots:5", "completion": "This text code 7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. 
This limit applies to the node's resolver", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1313"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1314"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1315"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1316"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1317"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1318"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1319"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1320"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1321"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are 
configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1322"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1323"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1324"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n 
app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1325"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1326"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1327"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP", "completion": " port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1328"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "completion": "This yaml code 9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. 
In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1329"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10", "completion": " ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "language": "yaml", "type": "service-definition", "context": "11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1330"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "completion": "This yaml code 11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "language": "yaml", "type": "service-definition_explanation", "context": "11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. 
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1331"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:", "completion": " app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "language": "yaml", "type": "service-definition", "context": "12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1332"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "completion": "This yaml code 12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "language": "yaml", "type": "service-definition_explanation", "context": "12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. 
When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1333"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1334"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1335"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1336"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code 1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for d", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1337"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1338"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1339"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1340"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code 1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. 
Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1341"} {"prompt": "Explain this bash code:\n\nkubectl port-forward service/ forwardedPort:servicePort", "completion": "This bash code 1 - Service | An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more information, see the ExternalName section.", "language": "bash", "type": "kubectl-command_explanation", "context": "1 - Service | An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more information, see the ExternalName section.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1342"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1343"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code 1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1344"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1345"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code 1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1346"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "1 - Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1347"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code 1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1348"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1349"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This yaml code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", 
"id": "k8s_00056_code_explain_1350"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1351"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This yaml code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1352"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1353"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This yaml code 1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - 
Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1354"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1355"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This yaml code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1356"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1357"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", 
"completion": "This yaml code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1358"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1359"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This yaml code 1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1360"} {"prompt": "Explain this yaml code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This yaml code 1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | metadata: annotations: 
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1361"} {"prompt": "Explain this yaml code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This yaml code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1362"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1363"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This yaml code 1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. 
You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1364"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1365"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code 1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1366"} {"prompt": "Complete this shell code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "shell", "type": 
"code-example", "context": "1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1367"} {"prompt": "Explain this shell code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This shell code 1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "shell", "type": "code-example_explanation", "context": "1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1368"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "1 - Service | Service is a top-level resource in the Kubernetes REST API. 
You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1369"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code 1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1370"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1371"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1372"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1373"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 2 - Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1374"} {"prompt": "Complete this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:", "completion": " Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: 
k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: \nEvents: ", "language": "text", "type": "code-example", "context": "2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1375"} {"prompt": "Explain this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:\n Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: \nEvents: ", "completion": "This text code 2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. 
There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1376"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1377"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1378"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1379"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1380"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # 
\"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1381"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code 2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1382"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1383"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code 2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1384"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1385"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1386"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1387"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1388"} {"prompt": "Explain this text code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\ntest-ingress external-lb * 203.0.113.123 80 59s", "completion": "This text code 2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1389"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: 
Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1390"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1391"} {"prompt": "Complete this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1392"} {"prompt": "Explain this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "completion": "This text code 2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. When it has done so, you can see the address", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1393"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1394"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1395"} {"prompt": "Explain this text code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml", "completion": "This text code 2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1396"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1397"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1398"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1399"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code 2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1400"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1401"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1402"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "2 - Ingress | This pops up an editor with the existing configuration in YAML format. 
Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1403"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "completion": "This text code 2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1404"} {"prompt": "Complete this yaml code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1405"} {"prompt": "Explain this yaml code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: 
service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This yaml code 2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1406"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com", "completion": " /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1407"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "completion": "This text code 2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f 
on a modified Ingress YAML file.", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1408"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1409"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code 4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. 
Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1410"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1411"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code 4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1412"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1413"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code 4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1414"} {"prompt": "Complete this yaml code:\n\napiVersion: 
gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1415"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code 4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1416"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this case, the GRPCRoute will 
match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1417"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code 4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1418"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http", "completion": " protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. 
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1419"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "completion": "This yaml code 5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. 
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1420"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1421"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1422"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1423"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1424"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1425"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1426"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-ingress.yaml", "completion": "This text code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1427"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1428"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1429"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-allow-all-ingress.yaml", "completion": "This text code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1430"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1431"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1432"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-egress.yaml", "completion": "This text code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1433"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1434"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1435"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-allow-all-egress.yaml", "completion": "This text code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1436"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1437"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1438"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-all.yaml", "completion": "This text code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1439"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1440"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1441"} {"prompt": "Explain this text code:\n\nservice/networking/networkpolicy-multiport-egress.yaml", "completion": "This text code 6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1442"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1443"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 6 - 
Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1444"} {"prompt": "Explain this shell code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This shell code 6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "shell", "type": "code-example_explanation", "context": "6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. 
For example:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1445"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1446"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code 6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1447"} {"prompt": "Complete this yaml code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | a hostNetwork pod is selected by a podSelector or 
namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1448"} {"prompt": "Explain this yaml code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1449"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1450"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1451"} {"prompt": "Explain this text code:\n\nnameserver 10.32.0.10\nsearch test.svc.cluster.local svc.cluster.local cluster.local\noptions ndots:5", "completion": "This text code 7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1452"} {"prompt": "Explain this text code:\n\n_port-name._port-protocol.my-svc.my-namespace.svc.cluster-domain.example", "completion": "This text code 7 - DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1453"} {"prompt": "Explain this text code:\n\nhostname.my-svc.my-namespace.svc.cluster-domain.example", "completion": "This text code 7 - DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": 
"k8s_00056_code_explain_1454"} {"prompt": "Explain this text code:\n\n..pod.", "completion": "This text code 7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1455"} {"prompt": "Explain this text code:\n\n...svc.", "completion": "This text code 7 - DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1456"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: 
busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1457"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1458"} {"prompt": "Explain 
this text code:\n\n\"busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example\"", "completion": "This text code 7 - DNS for Services and Pods | An EndpointSlice can specify the DNS hostname for any endpoint addresses, along with its IP.", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | An EndpointSlice can specify the DNS hostname for any endpoint addresses, along with its IP.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1459"} {"prompt": "Explain this text code:\n\nbusybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code 7 - DNS for Services and Pods | When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name busybox-1.busybox-subdomain", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name busybox-1.busybox-subdomain", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1460"} {"prompt": "Explain this text code:\n\nbusybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code 7 - DNS for Services and Pods | When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname --fqdn return the Pod's FQD", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. 
In this case, both hostname and hostname --fqdn return the Pod's FQD", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1461"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28", "completion": " command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1462"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code 7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1463"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"", "completion": " dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": 
"pod-definition", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1464"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code 7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1465"} {"prompt": "Explain this text code:\n\nnameserver 192.0.2.1\nsearch ns1.svc.cluster-domain.example my.dns.search.suffix\noptions ndots:2 edns0", "completion": "This text code 7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1466"} {"prompt": "Explain this shell code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This shell code 7 - DNS for Services and Pods | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": 
"7 - DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1467"} {"prompt": "Explain this text code:\n\nnameserver 2001:db8:30::a\nsearch default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example\noptions ndots:5", "completion": "This text code 7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "language": "text", "type": "code-example_explanation", "context": "7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1468"} {"prompt": "Explain this text code:\n\n--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6", "completion": "This text code 8 - IPv4/IPv6 dual-stack | An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range)", "language": "text", "type": "code-example_explanation", "context": "8 - IPv4/IPv6 dual-stack | An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range)", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1469"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in 
.spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1470"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1471"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1472"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1473"} {"prompt": "Explain this text code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml", "completion": "This text code 8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "text", "type": "code-example_explanation", "context": "8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1474"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1475"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1476"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1477"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1478"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are 
configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1479"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1480"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1481"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n 
app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1482"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1483"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1484"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP", "completion": " port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1485"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "completion": "This yaml code 9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. 
In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1486"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10", "completion": " ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "language": "yaml", "type": "service-definition", "context": "11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1487"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "completion": "This yaml code 11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "language": "yaml", "type": "service-definition_explanation", "context": "11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. 
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1488"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:", "completion": " app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "language": "yaml", "type": "service-definition", "context": "12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1489"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "completion": "This yaml code 12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "language": "yaml", "type": "service-definition_explanation", "context": "12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. 
When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1490"} {"prompt": "Complete this yaml code:\n\nservice/simple-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1491"} {"prompt": "Explain this yaml code:\n\nservice/simple-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1492"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1493"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1494"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1495"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code 1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for d", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1496"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1497"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1498"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1499"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code 1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. 
Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1500"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1501"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code 1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1502"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1503"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code 1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1504"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1505"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code 1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1506"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1507"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1508"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1509"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": 
"code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1510"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1511"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1512"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1513"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n 
service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1514"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1515"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1516"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example", "context": "1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1517"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This text code 1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1518"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This text code 1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1519"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This text code 1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example_explanation", "context": "1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1520"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n 
service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example", "context": "1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1521"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This text code 1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "language": "text", "type": "code-example_explanation", "context": "1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. 
You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1522"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1523"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code 1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1524"} {"prompt": "Complete this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "text", "type": 
"code-example", "context": "1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1525"} {"prompt": "Explain this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This text code 1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "text", "type": "code-example_explanation", "context": "1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1526"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "1 - Service | Service is a top-level resource in the Kubernetes REST API. 
You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1527"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code 1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1528"} {"prompt": "Complete this yaml code:\n\nservice/networking/minimal-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example", "completion": " rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1529"} {"prompt": "Explain this yaml code:\n\nservice/networking/minimal-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1530"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1531"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1532"} {"prompt": "Complete this yaml code:\n\nservice/networking/ingress-resource-backend.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket", "completion": " name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1533"} {"prompt": "Explain this yaml code:\n\nservice/networking/ingress-resource-backend.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 2 - Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1534"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: 
k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1535"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 2 - Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1536"} {"prompt": "Complete this yaml code:\n\nservice/networking/ingress-wildcard-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:", "completion": " service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different 
configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1537"} {"prompt": "Explain this yaml code:\n\nservice/networking/ingress-wildcard-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1538"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1539"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1540"} {"prompt": "Complete this yaml code:\n\nservice/networking/external-lb.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb", "completion": "spec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1541"} {"prompt": "Explain this yaml code:\n\nservice/networking/external-lb.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1542"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", 
"language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1543"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1544"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1545"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code 2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1546"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1547"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code 2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1548"} {"prompt": "Complete this yaml code:\n\nservice/networking/default-ingressclass.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:", "completion": " app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1549"} {"prompt": "Explain this yaml code:\n\nservice/networking/default-ingressclass.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1550"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1551"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1552"} {"prompt": "Complete this yaml code:\n\nservice/networking/test-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress", "completion": "spec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1553"} {"prompt": "Explain this yaml code:\n\nservice/networking/test-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1554"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1555"} {"prompt": "Explain this yaml code:\n\napiVersion: 
networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1556"} {"prompt": "Complete this yaml code:\n\nservice/networking/simple-fanout-example.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1557"} {"prompt": "Explain this yaml code:\n\nservice/networking/simple-fanout-example.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1558"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1559"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1560"} {"prompt": "Complete this yaml code:\n\nservice/networking/name-virtual-host-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:", "completion": " service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": 
"kubernetes-manifest", "context": "2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1561"} {"prompt": "Explain this yaml code:\n\nservice/networking/name-virtual-host-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1562"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - 
Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1563"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1564"} {"prompt": "Complete this yaml code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n 
name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1565"} {"prompt": "Explain this yaml code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1566"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1567"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1568"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1569"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code 2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1570"} {"prompt": "Complete this yaml code:\n\nservice/networking/tls-example-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls", "completion": " rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1571"} {"prompt": "Explain this yaml code:\n\nservice/networking/tls-example-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1572"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1573"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1574"} {"prompt": "Complete this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "text", "type": "code-example", "context": "2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1575"} {"prompt": "Explain this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This text code 2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "text", "type": "code-example_explanation", "context": "2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1576"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, a controller that 
has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1577"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code 4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1578"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1579"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code 4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1580"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1581"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n 
parentRefs:\n - name: example-gateway\n hostnames:\n - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code 4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1582"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1583"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code 4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on 
port 50051 from the same namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1584"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified,only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1585"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code 4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. 
Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1586"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http", "completion": " protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1587"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "completion": "This yaml code 5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. 
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1588"} {"prompt": "Complete this yaml code:\n\nservice/networking/networkpolicy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16", "completion": " except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1589"} {"prompt": "Explain this yaml code:\n\nservice/networking/networkpolicy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1590"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1591"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1592"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1593"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1594"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1595"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1596"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for 
ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1597"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1598"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1599"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1600"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-allow-all-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1601"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-allow-all-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1602"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1603"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1604"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1605"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1606"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1607"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1608"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-allow-all-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1609"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-allow-all-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1610"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1611"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1612"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-all.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1613"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-all.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any 
other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1614"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1615"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1616"} {"prompt": "Complete this yaml code:\n\nservice/networking/networkpolicy-multiport-egress.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n 
cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1617"} {"prompt": "Explain this yaml code:\n\nservice/networking/networkpolicy-multiport-egress.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1618"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | The above 
rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1619"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1620"} {"prompt": "Explain this bash code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This bash code 6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. 
For example:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1621"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1622"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code 6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1623"} {"prompt": "Complete this text code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | a hostNetwork pod is selected by a podSelector or 
namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1624"} {"prompt": "Explain this text code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1625"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1626"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "text", "type": "code-example_explanation", "context": "6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1627"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1628"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - 
\"3600\"\n name: busybox", "completion": "This yaml code 7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1629"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28", "completion": " command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1630"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code 7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": 
"https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1631"} {"prompt": "Complete this yaml code:\n\nservice/networking/custom-dns.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx", "completion": " dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1632"} {"prompt": "Explain this yaml code:\n\nservice/networking/custom-dns.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code 7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1633"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"", "completion": " dnsConfig:\n 
nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1634"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code 7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1635"} {"prompt": "Explain this bash code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This bash code 7 - DNS for Services and Pods | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "7 - DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1636"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " 
app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification does not explicitly define .spec.ipFamilyPolicy. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured service-cluster-ip-range and sets .spec.ipFamilyPolicy to SingleStack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1637"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification does not explicitly define .spec.ipFamilyPolicy. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured service-cluster-ip-range and sets .spec.ipFamilyPolicy to SingleStack.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification does not explicitly define .spec.ipFamilyPolicy. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured service-cluster-ip-range and sets .spec.ipFamilyPolicy to SingleStack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1638"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification does not explicitly define .spec.ipFamilyPolicy. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured service-cluster-ip-range and sets .spec.ipFamilyPolicy to SingleStack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1639"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification does not explicitly define .spec.ipFamilyPolicy. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured service-cluster-ip-range and sets .spec.ipFamilyPolicy to SingleStack.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification does not explicitly define .spec.ipFamilyPolicy. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured service-cluster-ip-range and sets .spec.ipFamilyPolicy to SingleStack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1640"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1641"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1642"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1643"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1644"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:", "completion": " ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1645"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1646"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1647"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1648"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1649"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n
- protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1650"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1651"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1652"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 
80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1653"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1654"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this 
behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1655"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1656"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1657"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - 
IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1658"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1659"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code 8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1660"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP", "completion": " port: 80\nendpoints:\n - 
addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1661"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "completion": "This yaml code 9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. 
In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1662"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10", "completion": " ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "language": "yaml", "type": "service-definition", "context": "11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1663"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "completion": "This yaml code 11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "language": "yaml", "type": "service-definition_explanation", "context": "11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. 
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1664"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:", "completion": " app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "language": "yaml", "type": "service-definition", "context": "12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_completion_1665"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "completion": "This yaml code 12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "language": "yaml", "type": "service-definition_explanation", "context": "12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. 
When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/_print/", "id": "k8s_00056_code_explain_1666"} {"prompt": "Explain this text code:\n\nIf you aren't sure where to add yourself, add yourself to `sig-docs-en-owners`.", "completion": "This text code Roles and responsibilities | Assign the PR to one or more current SIG Docs approvers.", "language": "text", "type": "code-example_explanation", "context": "Roles and responsibilities | Assign the PR to one or more current SIG Docs approvers.", "source": "https://kubernetes.io/docs/contribute/participate/roles-and-responsibilities/", "id": "k8s_00057_code_explain_1667"} {"prompt": "Explain this text code:\n\nIf you aren't sure where to add yourself, add yourself to `sig-docs-en-owners`.", "completion": "This text code Roles and responsibilities | Assign the PR to one or more current SIG Docs approvers.", "language": "text", "type": "code-example_explanation", "context": "Roles and responsibilities | Assign the PR to one or more current SIG Docs approvers.", "source": "https://kubernetes.io/docs/contribute/participate/roles-and-responsibilities/", "id": "k8s_00057_code_explain_1668"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c\n image: busybox:1.28", "completion": " env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: 
\"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "language": "yaml", "type": "pod-definition", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_completion_1669"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c\n image: busybox:1.28\n env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: \"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "completion": "This yaml code Define Dependent Environment Variables | Create a Pod based on that manifest:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1670"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dependent-envars.yaml", "completion": "This bash code Define Dependent Environment Variables | List the running Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Dependent Environment Variables | List the running Pods:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1671"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndependent-envars-demo 1/1 Running 0 9s", "completion": "This text code Define Dependent Environment Variables | Check the logs for the container running in your Pod:", "language": "text", "type": "code-example_explanation", "context": "Define Dependent Environment Variables | Check the logs for the container running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1672"} {"prompt": "Explain this text code:\n\nUNCHANGED_REFERENCE=$(PROTOCOL)://172.17.0.1:80\nSERVICE_ADDRESS=https://172.17.0.1:80\nESCAPED_REFERENCE=$(PROTOCOL)://172.17.0.1:80", "completion": "This text code Define Dependent Environment Variables | As shown above, you have defined the correct dependency reference of SERVICE_ADDRESS, bad dependency reference of UNCHANGED_REFERENCE and skip dependent references of ESCAPED_REFERENCE.", "language": "text", "type": "code-example_explanation", "context": "Define Dependent Environment Variables | As shown above, you have defined the correct dependency reference of SERVICE_ADDRESS, bad dependency reference of UNCHANGED_REFERENCE and skip dependent references of ESCAPED_REFERENCE.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1673"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c\n image: busybox:1.28", 
"completion": " env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: \"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "language": "yaml", "type": "pod-definition", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_completion_1674"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c\n image: busybox:1.28\n env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: \"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "completion": "This yaml code Define Dependent Environment Variables | Create a Pod based on that manifest:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1675"} {"prompt": "Explain this shell code:\n\nkubectl apply -f 
https://k8s.io/examples/pods/inject/dependent-envars.yaml", "completion": "This shell code Define Dependent Environment Variables | List the running Pods:", "language": "shell", "type": "code-example_explanation", "context": "Define Dependent Environment Variables | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1676"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndependent-envars-demo 1/1 Running 0 9s", "completion": "This text code Define Dependent Environment Variables | Check the logs for the container running in your Pod:", "language": "text", "type": "code-example_explanation", "context": "Define Dependent Environment Variables | Check the logs for the container running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1677"} {"prompt": "Explain this text code:\n\nUNCHANGED_REFERENCE=$(PROTOCOL)://172.17.0.1:80\nSERVICE_ADDRESS=https://172.17.0.1:80\nESCAPED_REFERENCE=$(PROTOCOL)://172.17.0.1:80", "completion": "This text code Define Dependent Environment Variables | As shown above, you have defined the correct dependency reference of SERVICE_ADDRESS, bad dependency reference of UNCHANGED_REFERENCE and skip dependent references of ESCAPED_REFERENCE.", "language": "text", "type": "code-example_explanation", "context": "Define Dependent Environment Variables | As shown above, you have defined the correct dependency reference of SERVICE_ADDRESS, bad dependency reference of UNCHANGED_REFERENCE and skip dependent references of ESCAPED_REFERENCE.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1678"} {"prompt": "Complete this yaml code:\n\npods/inject/dependent-envars.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n 
name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c", "completion": " image: busybox:1.28\n env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: \"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "language": "yaml", "type": "pod-definition", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_completion_1679"} {"prompt": "Explain this yaml code:\n\npods/inject/dependent-envars.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c\n image: busybox:1.28\n env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: \"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "completion": "This yaml code Define Dependent Environment Variables | Create a Pod based on that manifest:", "language": "yaml", 
"type": "pod-definition_explanation", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1680"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c\n image: busybox:1.28", "completion": " env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: \"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "language": "yaml", "type": "pod-definition", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_completion_1681"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dependent-envars-demo\nspec:\n containers:\n - name: dependent-envars-demo\n args:\n - while true; do echo -en '\\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\\n'; sleep 30; done;\n command:\n - sh\n - -c\n image: busybox:1.28\n env:\n - name: SERVICE_PORT\n value: \"80\"\n - name: SERVICE_IP\n value: \"172.17.0.1\"\n - name: UNCHANGED_REFERENCE\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: 
PROTOCOL\n value: \"https\"\n - name: SERVICE_ADDRESS\n value: \"$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"\n - name: ESCAPED_REFERENCE\n value: \"$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)\"", "completion": "This yaml code Define Dependent Environment Variables | Create a Pod based on that manifest:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Dependent Environment Variables | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1682"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dependent-envars.yaml", "completion": "This bash code Define Dependent Environment Variables | List the running Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Dependent Environment Variables | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/", "id": "k8s_00058_code_explain_1683"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1684"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1685"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1686"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1687"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1688"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. 
When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1689"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1690"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1691"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1692"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1693"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. 
When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1695"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1696"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like 
persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1697"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1698"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1699"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_completion_1700"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. 
When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/", "id": "k8s_00059_code_explain_1701"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply # note capitalization: \"Apply\" (or \"Update\")", "completion": " apiVersion: v1\n time: \"2010-10-10T0:00:00Z\"\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n f:data:\n f:key: {}\ndata:\n key: some value", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. The field management record consists of basic information about the managing entity itself, plus det", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1702"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply # note capitalization: \"Apply\" (or \"Update\")\n apiVersion: v1\n time: \"2010-10-10T0:00:00Z\"\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n f:data:\n f:key: {}\ndata:\n key: some value", "completion": "This yaml code Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. The field management record consists of basic information about the managing entity itself, plus det", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. 
The field management record consists of basic information about the managing entity itself, plus det", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1703"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply\n time: '2019-03-30T15:00:00.000Z'\n apiVersion: v1\n fieldsType: FieldsV1", "completion": " fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n - manager: kube-controller-manager\n operation: Update\n apiVersion: v1\n time: '2019-03-30T16:00:00.000Z'\n fieldsType: FieldsV1\n fieldsV1:\n f:data:\n f:key: {}\ndata:\n key: new value", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. The update request succeeded and changed a value in the data field, which caused that field's ma", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1704"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply\n time: '2019-03-30T15:00:00.000Z'\n apiVersion: v1\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n - manager: kube-controller-manager\n operation: Update\n apiVersion: v1\n time: '2019-03-30T16:00:00.000Z'\n fieldsType: FieldsV1\n fieldsV1:\n f:data:\n f:key: {}\ndata:\n key: new value", "completion": "This yaml code Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. 
The update request succeeded and changed a value in the data field, which caused that field's ma", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. The update request succeeded and changed a value in the data field, which caused that field's ma", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1705"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: example.com/v1\nkind: Foo\nmetadata:\n name: foo-sample\n managedFields:\n - manager: \"manager-one\"\n operation: Apply", "completion": " apiVersion: example.com/v1\n fieldsType: FieldsV1\n fieldsV1:\n f:spec:\n f:data: {}\nspec:\n data:\n key1: val1\n key2: val2", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). When the CRD gets changed to make spec.data granular, manager", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1706"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: example.com/v1\nkind: Foo\nmetadata:\n name: foo-sample\n managedFields:\n - manager: \"manager-one\"\n operation: Apply\n apiVersion: example.com/v1\n fieldsType: FieldsV1\n fieldsV1:\n f:spec:\n f:data: {}\nspec:\n data:\n key1: val1\n key2: val2", "completion": "This yaml code Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). 
When the CRD gets changed to make spec.data granular, manager", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). When the CRD gets changed to make spec.data granular, manager", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1707"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1708"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1709"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side", "completion": "This bash code Server-Side Apply | Then later, 
automatic scaling is enabled for the Deployment; for example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | Then later, automatic scaling is enabled for the Deployment; for example:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1710"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10", "completion": "This bash code Server-Side Apply | Now, the user would like to remove replicas from their configuration, so they don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller. However, there is a race: it might tak", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | Now, the user would like to remove replicas from their configuration, so they don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller. However, there is a race: it might tak", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1711"} {"prompt": "Complete this yaml code:\n\n# Save this file as 'nginx-deployment-replicas-only.yaml'.\napiVersion: apps/v1\nkind: Deployment", "completion": "metadata:\n name: nginx-deployment\nspec:\n replicas: 3", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | The user applies that manifest using a private field manager name. In this example, the user picked handover-to-hpa:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1712"} {"prompt": "Explain this yaml code:\n\n# Save this file as 'nginx-deployment-replicas-only.yaml'.\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n replicas: 3", "completion": "This yaml code Server-Side Apply | The user applies that manifest using a private field manager name. 
In this example, the user picked handover-to-hpa:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | The user applies that manifest using a private field manager name. In this example, the user picked handover-to-hpa:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1713"} {"prompt": "Explain this bash code:\n\nkubectl apply -f nginx-deployment-replicas-only.yaml \\\n --server-side --field-manager=handover-to-hpa \\\n --validate=false", "completion": "This bash code Server-Side Apply | If the apply results in a conflict with the HPA controller, then do nothing. The conflict indicates the controller has claimed the field earlier in the process than it sometimes does.", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | If the apply results in a conflict with the HPA controller, then do nothing. The conflict indicates the controller has claimed the field earlier in the process than it sometimes does.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1714"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. 
No further clean up is requir", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1715"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is requir", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is requir", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1716"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side --field-manager=my-manager [--dry-run=server]", "completion": "This bash code Server-Side Apply | If you manage a resource with kubectl apply --server-side, you can downgrade to client-side apply directly with kubectl apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | If you manage a resource with kubectl apply --server-side, you can downgrade to client-side apply directly with kubectl apply.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1717"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side --field-manager=my-manager [--dry-run=server]", "completion": "This bash code Server-Side Apply | The PATCH verb (for an object that supports Server-Side Apply) accepts the unofficial 
application/apply-patch+yaml content type. Users of Server-Side Apply can send partially specified objects as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | The PATCH verb (for an object that supports Server-Side Apply) accepts the unofficial application/apply-patch+yaml content type. Users of Server-Side Apply can send partially specified objects as YAML", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1718"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/merge-patch+json\n\n{", "completion": " \"metadata\": {\n \"managedFields\": [\n {}\n ]\n }\n}", "language": "text", "type": "code-example", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1719"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/merge-patch+json\n\n{\n \"metadata\": {\n \"managedFields\": [\n {}\n ]\n }\n}", "completion": "This text code Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "language": "text", "type": "code-example_explanation", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. 
Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1720"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/json-patch+json", "completion": "If-Match: 1234567890123456789\n\n[{\"op\": \"replace\", \"path\": \"/metadata/managedFields\", \"value\": [{}]}]", "language": "text", "type": "code-example", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1721"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/json-patch+json\nIf-Match: 1234567890123456789\n\n[{\"op\": \"replace\", \"path\": \"/metadata/managedFields\", \"value\": [{}]}]", "completion": "This text code Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "language": "text", "type": "code-example_explanation", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. 
Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1722"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply # note capitalization: \"Apply\" (or \"Update\")", "completion": " apiVersion: v1\n time: \"2010-10-10T0:00:00Z\"\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n f:data:\n f:key: {}\ndata:\n key: some value", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. The field management record consists of basic information about the managing entity itself, plus det", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1723"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply # note capitalization: \"Apply\" (or \"Update\")\n apiVersion: v1\n time: \"2010-10-10T0:00:00Z\"\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n f:data:\n f:key: {}\ndata:\n key: some value", "completion": "This yaml code Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. The field management record consists of basic information about the managing entity itself, plus det", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. 
The field management record consists of basic information about the managing entity itself, plus det", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1724"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply\n time: '2019-03-30T15:00:00.000Z'\n apiVersion: v1\n fieldsType: FieldsV1", "completion": " fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n - manager: kube-controller-manager\n operation: Update\n apiVersion: v1\n time: '2019-03-30T16:00:00.000Z'\n fieldsType: FieldsV1\n fieldsV1:\n f:data:\n f:key: {}\ndata:\n key: new value", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. The update request succeeded and changed a value in the data field, which caused that field's ma", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1725"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply\n time: '2019-03-30T15:00:00.000Z'\n apiVersion: v1\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n - manager: kube-controller-manager\n operation: Update\n apiVersion: v1\n time: '2019-03-30T16:00:00.000Z'\n fieldsType: FieldsV1\n fieldsV1:\n f:data:\n f:key: {}\ndata:\n key: new value", "completion": "This yaml code Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. 
The update request succeeded and changed a value in the data field, which caused that field's ma", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. The update request succeeded and changed a value in the data field, which caused that field's ma", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1726"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: example.com/v1\nkind: Foo\nmetadata:\n name: foo-sample\n managedFields:\n - manager: \"manager-one\"\n operation: Apply", "completion": " apiVersion: example.com/v1\n fieldsType: FieldsV1\n fieldsV1:\n f:spec:\n f:data: {}\nspec:\n data:\n key1: val1\n key2: val2", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). When the CRD gets changed to make spec.data granular, manager", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1727"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: example.com/v1\nkind: Foo\nmetadata:\n name: foo-sample\n managedFields:\n - manager: \"manager-one\"\n operation: Apply\n apiVersion: example.com/v1\n fieldsType: FieldsV1\n fieldsV1:\n f:spec:\n f:data: {}\nspec:\n data:\n key1: val1\n key2: val2", "completion": "This yaml code Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). 
When the CRD gets changed to make spec.data granular, manager", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). When the CRD gets changed to make spec.data granular, manager", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1728"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1729"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1730"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side", "completion": "This shell code Server-Side Apply | Then later, 
automatic scaling is enabled for the Deployment; for example:", "language": "shell", "type": "code-example_explanation", "context": "Server-Side Apply | Then later, automatic scaling is enabled for the Deployment; for example:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1731"} {"prompt": "Explain this shell code:\n\nkubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10", "completion": "This shell code Server-Side Apply | Now, the user would like to remove replicas from their configuration, so they don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller. However, there is a race: it might tak", "language": "shell", "type": "code-example_explanation", "context": "Server-Side Apply | Now, the user would like to remove replicas from their configuration, so they don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller. However, there is a race: it might tak", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1732"} {"prompt": "Complete this yaml code:\n\n# Save this file as 'nginx-deployment-replicas-only.yaml'.\napiVersion: apps/v1\nkind: Deployment", "completion": "metadata:\n name: nginx-deployment\nspec:\n replicas: 3", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | The user applies that manifest using a private field manager name. In this example, the user picked handover-to-hpa:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1733"} {"prompt": "Explain this yaml code:\n\n# Save this file as 'nginx-deployment-replicas-only.yaml'.\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n replicas: 3", "completion": "This yaml code Server-Side Apply | The user applies that manifest using a private field manager name. 
In this example, the user picked handover-to-hpa:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | The user applies that manifest using a private field manager name. In this example, the user picked handover-to-hpa:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1734"} {"prompt": "Explain this shell code:\n\nkubectl apply -f nginx-deployment-replicas-only.yaml \\\n --server-side --field-manager=handover-to-hpa \\\n --validate=false", "completion": "This shell code Server-Side Apply | If the apply results in a conflict with the HPA controller, then do nothing. The conflict indicates the controller has claimed the field earlier in the process than it sometimes does.", "language": "shell", "type": "code-example_explanation", "context": "Server-Side Apply | If the apply results in a conflict with the HPA controller, then do nothing. The conflict indicates the controller has claimed the field earlier in the process than it sometimes does.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1735"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. 
No further clean up is required.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1736"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is required.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is required.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1737"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side --field-manager=my-manager [--dry-run=server]", "completion": "This shell code Server-Side Apply | If you manage a resource with kubectl apply --server-side, you can downgrade to client-side apply directly with kubectl apply.", "language": "shell", "type": "code-example_explanation", "context": "Server-Side Apply | If you manage a resource with kubectl apply --server-side, you can downgrade to client-side apply directly with kubectl apply.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1738"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side --field-manager=my-manager [--dry-run=server]", "completion": "This shell code Server-Side Apply | The PATCH verb (for an object that supports Server-Side Apply) accepts the unofficial
application/apply-patch+yaml content type. Users of Server-Side Apply can send partially specified objects as YAML", "language": "shell", "type": "code-example_explanation", "context": "Server-Side Apply | The PATCH verb (for an object that supports Server-Side Apply) accepts the unofficial application/apply-patch+yaml content type. Users of Server-Side Apply can send partially specified objects as YAML", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1739"} {"prompt": "Complete this console code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/merge-patch+json\n\n{", "completion": " \"metadata\": {\n \"managedFields\": [\n {}\n ]\n }\n}", "language": "console", "type": "code-example", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1740"} {"prompt": "Explain this console code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/merge-patch+json\n\n{\n \"metadata\": {\n \"managedFields\": [\n {}\n ]\n }\n}", "completion": "This console code Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "language": "console", "type": "code-example_explanation", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. 
Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1741"} {"prompt": "Complete this console code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/json-patch+json", "completion": "If-Match: 1234567890123456789\n\n[{\"op\": \"replace\", \"path\": \"/metadata/managedFields\", \"value\": [{}]}]", "language": "console", "type": "code-example", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1742"} {"prompt": "Explain this console code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/json-patch+json\nIf-Match: 1234567890123456789\n\n[{\"op\": \"replace\", \"path\": \"/metadata/managedFields\", \"value\": [{}]}]", "completion": "This console code Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "language": "console", "type": "code-example_explanation", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. 
Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1743"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply # note capitalization: \"Apply\" (or \"Update\")", "completion": " apiVersion: v1\n time: \"2010-10-10T0:00:00Z\"\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n f:data:\n f:key: {}\ndata:\n key: some value", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. The field management record consists of basic information about the managing entity itself, plus det", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1744"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply # note capitalization: \"Apply\" (or \"Update\")\n apiVersion: v1\n time: \"2010-10-10T0:00:00Z\"\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n f:data:\n f:key: {}\ndata:\n key: some value", "completion": "This yaml code Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. The field management record consists of basic information about the managing entity itself, plus det", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | That example ConfigMap object contains a single field management record in .metadata.managedFields. 
The field management record consists of basic information about the managing entity itself, plus det", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1745"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply\n time: '2019-03-30T15:00:00.000Z'\n apiVersion: v1\n fieldsType: FieldsV1", "completion": " fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n - manager: kube-controller-manager\n operation: Update\n apiVersion: v1\n time: '2019-03-30T16:00:00.000Z'\n fieldsType: FieldsV1\n fieldsV1:\n f:data:\n f:key: {}\ndata:\n key: new value", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. The update request succeeded and changed a value in the data field, which caused that field's ma", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1746"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n namespace: default\n labels:\n test-label: test\n managedFields:\n - manager: kubectl\n operation: Apply\n time: '2019-03-30T15:00:00.000Z'\n apiVersion: v1\n fieldsType: FieldsV1\n fieldsV1:\n f:metadata:\n f:labels:\n f:test-label: {}\n - manager: kube-controller-manager\n operation: Update\n apiVersion: v1\n time: '2019-03-30T16:00:00.000Z'\n fieldsType: FieldsV1\n fieldsV1:\n f:data:\n f:key: {}\ndata:\n key: new value", "completion": "This yaml code Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. 
The update request succeeded and changed a value in the data field, which caused that field's ma", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | In this example, a second operation was run as an update by the manager called kube-controller-manager. The update request succeeded and changed a value in the data field, which caused that field's ma", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1747"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: example.com/v1\nkind: Foo\nmetadata:\n name: foo-sample\n managedFields:\n - manager: \"manager-one\"\n operation: Apply", "completion": " apiVersion: example.com/v1\n fieldsType: FieldsV1\n fieldsV1:\n f:spec:\n f:data: {}\nspec:\n data:\n key1: val1\n key2: val2", "language": "yaml", "type": "kubernetes-manifest", "context": "Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). When the CRD gets changed to make spec.data granular, manager", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1748"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: example.com/v1\nkind: Foo\nmetadata:\n name: foo-sample\n managedFields:\n - manager: \"manager-one\"\n operation: Apply\n apiVersion: example.com/v1\n fieldsType: FieldsV1\n fieldsV1:\n f:spec:\n f:data: {}\nspec:\n data:\n key1: val1\n key2: val2", "completion": "This yaml code Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). 
When the CRD gets changed to make spec.data granular, manager", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Server-Side Apply | Before spec.data gets changed from atomic to granular, manager-one owns the field spec.data, and all the fields within it (key1 and key2). When the CRD gets changed to make spec.data granular, manager", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1749"} {"prompt": "Complete this yaml code:\n\napplication/ssa/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1750"} {"prompt": "Explain this yaml code:\n\napplication/ssa/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1751"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n 
labels:\n app: nginx\nspec:\n replicas: 3\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1752"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | And the user has created the Deployment using Server-Side Apply, like so:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1753"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side", "completion": "This bash code Server-Side Apply | Then later, automatic scaling is enabled for the Deployment; for example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | Then later, automatic scaling is enabled for the Deployment; for example:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1754"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10", "completion": "This bash code Server-Side Apply | Now, the user would like to remove replicas from their configuration, so they 
don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller. However, there is a race: it might tak", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | Now, the user would like to remove replicas from their configuration, so they don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller. However, there is a race: it might tak", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1755"} {"prompt": "Complete this yaml code:\n\n# Save this file as 'nginx-deployment-replicas-only.yaml'.\napiVersion: apps/v1\nkind: Deployment", "completion": "metadata:\n name: nginx-deployment\nspec:\n replicas: 3", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | The user applies that manifest using a private field manager name. In this example, the user picked handover-to-hpa:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1756"} {"prompt": "Explain this yaml code:\n\n# Save this file as 'nginx-deployment-replicas-only.yaml'.\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n replicas: 3", "completion": "This yaml code Server-Side Apply | The user applies that manifest using a private field manager name. In this example, the user picked handover-to-hpa:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | The user applies that manifest using a private field manager name. 
In this example, the user picked handover-to-hpa:", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1757"} {"prompt": "Explain this bash code:\n\nkubectl apply -f nginx-deployment-replicas-only.yaml \\\n --server-side --field-manager=handover-to-hpa \\\n --validate=false", "completion": "This bash code Server-Side Apply | If the apply results in a conflict with the HPA controller, then do nothing. The conflict indicates the controller has claimed the field earlier in the process than it sometimes does.", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | If the apply results in a conflict with the HPA controller, then do nothing. The conflict indicates the controller has claimed the field earlier in the process than it sometimes does.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1758"} {"prompt": "Complete this yaml code:\n\napplication/ssa/nginx-deployment-no-replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. 
No further clean up is required.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1759"} {"prompt": "Explain this yaml code:\n\napplication/ssa/nginx-deployment-no-replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is required.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is required.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1760"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted.
No further clean up is required.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1761"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is required.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Server-Side Apply | Note that whenever the HPA controller sets the replicas field to a new value, the temporary field manager will no longer own any fields and will be automatically deleted. No further clean up is required.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1762"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side --field-manager=my-manager [--dry-run=server]", "completion": "This bash code Server-Side Apply | If you manage a resource with kubectl apply --server-side, you can downgrade to client-side apply directly with kubectl apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | If you manage a resource with kubectl apply --server-side, you can downgrade to client-side apply directly with kubectl apply.", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1763"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side --field-manager=my-manager [--dry-run=server]", "completion": "This bash code Server-Side Apply | The PATCH verb (for an object that supports Server-Side Apply) accepts the unofficial
application/apply-patch+yaml content type. Users of Server-Side Apply can send partially specified objects as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "Server-Side Apply | The PATCH verb (for an object that supports Server-Side Apply) accepts the unofficial application/apply-patch+yaml content type. Users of Server-Side Apply can send partially specified objects as YAML", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1764"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/merge-patch+json\n\n{", "completion": " \"metadata\": {\n \"managedFields\": [\n {}\n ]\n }\n}", "language": "text", "type": "code-example", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1765"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/merge-patch+json\n\n{\n \"metadata\": {\n \"managedFields\": [\n {}\n ]\n }\n}", "completion": "This text code Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "language": "text", "type": "code-example_explanation", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. 
Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1766"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/json-patch+json", "completion": "If-Match: 1234567890123456789\n\n[{\"op\": \"replace\", \"path\": \"/metadata/managedFields\", \"value\": [{}]}]", "language": "text", "type": "code-example", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_completion_1767"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/namespaces/default/configmaps/example-cm\nAccept: application/json\nContent-Type: application/json-patch+json\nIf-Match: 1234567890123456789\n\n[{\"op\": \"replace\", \"path\": \"/metadata/managedFields\", \"value\": [{}]}]", "completion": "This text code Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that setting the managedFields to", "language": "text", "type": "code-example_explanation", "context": "Server-Side Apply | This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. 
Note that setting the managedFields to", "source": "https://kubernetes.io/docs/reference/using-api/server-side-apply/", "id": "k8s_00060_code_explain_1768"} {"prompt": "Complete this text code:\n\ncat <\nAnnotations: \nCreationTimestamp: Tue, 01 Feb 2022 11:49:15 -0500\nRequesting User: yourname@example.com\nSigner: example.com/serving\nStatus: Pending\nSubject:", "completion": " Common Name: my-pod.my-namespace.pod.cluster.local\n Serial Number:\nSubject Alternative Names:\n DNS Names: my-pod.my-namespace.pod.cluster.local\n my-svc.my-namespace.svc.cluster.local\n IP Addresses: 192.0.2.24\n 10.0.34.2\nEvents: ", "language": "text", "type": "code-example", "context": "Manage TLS Certificates in a Cluster | Approving the certificate signing request is either done by an automated approval process or on a one off basis by a cluster administrator. If you're authorized to approve a certificate request, you c", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_completion_1775"} {"prompt": "Explain this text code:\n\nName: my-svc.my-namespace\nLabels: \nAnnotations: \nCreationTimestamp: Tue, 01 Feb 2022 11:49:15 -0500\nRequesting User: yourname@example.com\nSigner: example.com/serving\nStatus: Pending\nSubject:\n Common Name: my-pod.my-namespace.pod.cluster.local\n Serial Number:\nSubject Alternative Names:\n DNS Names: my-pod.my-namespace.pod.cluster.local\n my-svc.my-namespace.svc.cluster.local\n IP Addresses: 192.0.2.24\n 10.0.34.2\nEvents: ", "completion": "This text code Manage TLS Certificates in a Cluster | Approving the certificate signing request is either done by an automated approval process or on a one off basis by a cluster administrator. 
If you're authorized to approve a certificate request, you c", "language": "text", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | Approving the certificate signing request is either done by an automated approval process or on a one off basis by a cluster administrator. If you're authorized to approve a certificate request, you c", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1776"} {"prompt": "Explain this text code:\n\ncertificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved", "completion": "This text code Manage TLS Certificates in a Cluster | You should now see the following:", "language": "text", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | You should now see the following:", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1777"} {"prompt": "Explain this text code:\n\nNAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION\nmy-svc.my-namespace 10m example.com/serving yourname@example.com Approved", "completion": "This text code Manage TLS Certificates in a Cluster | This means the certificate request has been approved and is waiting for the requested signer to sign it.", "language": "text", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | This means the certificate request has been approved and is waiting for the requested signer to sign it.", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1778"} {"prompt": "Complete this text code:\n\ncat < Approved,Issued", "completion": "This text code Manage TLS Certificates in a Cluster | Now, as the requesting user, you can download the issued certificate and save it to a server.crt file by running the following:", "language": "text", "type": "code-example_explanation", "context": "Manage TLS Certificates 
in a Cluster | Now, as the requesting user, you can download the issued certificate and save it to a server.crt file by running the following:", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1789"} {"prompt": "Explain this bash code:\n\nkubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \\\n | base64 --decode > server.crt", "completion": "This bash code Manage TLS Certificates in a Cluster | Now you can populate server.crt and server-key.pem in a Secret that you could later mount into a Pod (for example, to use with a webserver that serves HTTPS).", "language": "bash", "type": "kubectl-command_explanation", "context": "Manage TLS Certificates in a Cluster | Now you can populate server.crt and server-key.pem in a Secret that you could later mount into a Pod (for example, to use with a webserver that serves HTTPS).", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1790"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls server --cert server.crt --key server-key.pem", "completion": "This bash code Manage TLS Certificates in a Cluster | Finally, you can populate ca.pem into a ConfigMap and use it as the trust root to verify the serving certificate:", "language": "bash", "type": "kubectl-command_explanation", "context": "Manage TLS Certificates in a Cluster | Finally, you can populate ca.pem into a ConfigMap and use it as the trust root to verify the serving certificate:", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1791"} {"prompt": "Explain this bash code:\n\nkubectl create configmap example-serving-ca --from-file ca.crt=ca.pem", "completion": "This bash code Manage TLS Certificates in a Cluster | A Kubernetes administrator (with appropriate permissions) can manually approve (or deny) CertificateSigningRequests by using the kubectl certificate approve 
and kubectl certificate deny commands. Howe", "language": "bash", "type": "kubectl-command_explanation", "context": "Manage TLS Certificates in a Cluster | A Kubernetes administrator (with appropriate permissions) can manually approve (or deny) CertificateSigningRequests by using the kubectl certificate approve and kubectl certificate deny commands. Howe", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1792"} {"prompt": "Complete this shell code:\n\ncat <\nAnnotations: \nCreationTimestamp: Tue, 01 Feb 2022 11:49:15 -0500\nRequesting User: yourname@example.com\nSigner: example.com/serving\nStatus: Pending\nSubject:", "completion": " Common Name: my-pod.my-namespace.pod.cluster.local\n Serial Number:\nSubject Alternative Names:\n DNS Names: my-pod.my-namespace.pod.cluster.local\n my-svc.my-namespace.svc.cluster.local\n IP Addresses: 192.0.2.24\n 10.0.34.2\nEvents: ", "language": "none", "type": "code-example", "context": "Manage TLS Certificates in a Cluster | Approving the certificate signing request is either done by an automated approval process or on a one off basis by a cluster administrator. 
If you're authorized to approve a certificate request, you c", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_completion_1799"} {"prompt": "Explain this none code:\n\nName: my-svc.my-namespace\nLabels: \nAnnotations: \nCreationTimestamp: Tue, 01 Feb 2022 11:49:15 -0500\nRequesting User: yourname@example.com\nSigner: example.com/serving\nStatus: Pending\nSubject:\n Common Name: my-pod.my-namespace.pod.cluster.local\n Serial Number:\nSubject Alternative Names:\n DNS Names: my-pod.my-namespace.pod.cluster.local\n my-svc.my-namespace.svc.cluster.local\n IP Addresses: 192.0.2.24\n 10.0.34.2\nEvents: ", "completion": "This none code Manage TLS Certificates in a Cluster | Approving the certificate signing request is either done by an automated approval process or on a one off basis by a cluster administrator. If you're authorized to approve a certificate request, you c", "language": "none", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | Approving the certificate signing request is either done by an automated approval process or on a one off basis by a cluster administrator. 
If you're authorized to approve a certificate request, you c", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1800"} {"prompt": "Explain this none code:\n\ncertificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved", "completion": "This none code Manage TLS Certificates in a Cluster | You should now see the following:", "language": "none", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | You should now see the following:", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1801"} {"prompt": "Explain this none code:\n\nNAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION\nmy-svc.my-namespace 10m example.com/serving yourname@example.com Approved", "completion": "This none code Manage TLS Certificates in a Cluster | This means the certificate request has been approved and is waiting for the requested signer to sign it.", "language": "none", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | This means the certificate request has been approved and is waiting for the requested signer to sign it.", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1802"} {"prompt": "Complete this shell code:\n\ncat < Approved,Issued", "completion": "This none code Manage TLS Certificates in a Cluster | Now, as the requesting user, you can download the issued certificate and save it to a server.crt file by running the following:", "language": "none", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | Now, as the requesting user, you can download the issued certificate and save it to a server.crt file by running the following:", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1813"} {"prompt": "Explain this shell code:\n\nkubectl get 
csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \\\n | base64 --decode > server.crt", "completion": "This shell code Manage TLS Certificates in a Cluster | Now you can populate server.crt and server-key.pem in a Secret that you could later mount into a Pod (for example, to use with a webserver that serves HTTPS).", "language": "shell", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | Now you can populate server.crt and server-key.pem in a Secret that you could later mount into a Pod (for example, to use with a webserver that serves HTTPS).", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1814"} {"prompt": "Explain this shell code:\n\nkubectl create secret tls server --cert server.crt --key server-key.pem", "completion": "This shell code Manage TLS Certificates in a Cluster | Finally, you can populate ca.pem into a ConfigMap and use it as the trust root to verify the serving certificate:", "language": "shell", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | Finally, you can populate ca.pem into a ConfigMap and use it as the trust root to verify the serving certificate:", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1815"} {"prompt": "Explain this shell code:\n\nkubectl create configmap example-serving-ca --from-file ca.crt=ca.pem", "completion": "This shell code Manage TLS Certificates in a Cluster | A Kubernetes administrator (with appropriate permissions) can manually approve (or deny) CertificateSigningRequests by using the kubectl certificate approve and kubectl certificate deny commands. 
Howe", "language": "shell", "type": "code-example_explanation", "context": "Manage TLS Certificates in a Cluster | A Kubernetes administrator (with appropriate permissions) can manually approve (or deny) CertificateSigningRequests by using the kubectl certificate approve and kubectl certificate deny commands. Howe", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1816"} {"prompt": "Complete this text code:\n\ncat < server.crt", "completion": "This bash code Manage TLS Certificates in a Cluster | Now you can populate server.crt and server-key.pem in a Secret that you could later mount into a Pod (for example, to use with a webserver that serves HTTPS).", "language": "bash", "type": "kubectl-command_explanation", "context": "Manage TLS Certificates in a Cluster | Now you can populate server.crt and server-key.pem in a Secret that you could later mount into a Pod (for example, to use with a webserver that serves HTTPS).", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1830"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls server --cert server.crt --key server-key.pem", "completion": "This bash code Manage TLS Certificates in a Cluster | Finally, you can populate ca.pem into a ConfigMap and use it as the trust root to verify the serving certificate:", "language": "bash", "type": "kubectl-command_explanation", "context": "Manage TLS Certificates in a Cluster | Finally, you can populate ca.pem into a ConfigMap and use it as the trust root to verify the serving certificate:", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1831"} {"prompt": "Explain this bash code:\n\nkubectl create configmap example-serving-ca --from-file ca.crt=ca.pem", "completion": "This bash code Manage TLS Certificates in a Cluster | A Kubernetes administrator (with appropriate permissions) can manually 
approve (or deny) CertificateSigningRequests by using the kubectl certificate approve and kubectl certificate deny commands. Howe", "language": "bash", "type": "kubectl-command_explanation", "context": "Manage TLS Certificates in a Cluster | A Kubernetes administrator (with appropriate permissions) can manually approve (or deny) CertificateSigningRequests by using the kubectl certificate approve and kubectl certificate deny commands. Howe", "source": "https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/", "id": "k8s_00062_code_explain_1832"} {"prompt": "Explain this text code:\n\nnode.kubernetes.io/exclude-from-external-load-balancers", "completion": "This text code Feature Gates (removed) | ServiceNodePortStaticSubrange: Enables the use of different port allocation strategies for NodePort Services. For more details, see reserve NodePort ranges to avoid collisions.", "language": "text", "type": "code-example_explanation", "context": "Feature Gates (removed) | ServiceNodePortStaticSubrange: Enables the use of different port allocation strategies for NodePort Services. 
For more details, see reserve NodePort ranges to avoid collisions.", "source": "https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates-removed/", "id": "k8s_00065_code_explain_1833"} {"prompt": "Complete this text code:\n\nfor macOS:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#enable-shell-autocompletion\n \n for linux:", "completion": " https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-autocompletion\n \n for windows:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#enable-shell-autocompletion", "language": "text", "type": "code-example", "context": "kubectl completion | Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_completion_1834"} {"prompt": "Explain this text code:\n\nfor macOS:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#enable-shell-autocompletion\n \n for linux:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-autocompletion\n \n for windows:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#enable-shell-autocompletion", "completion": "This text code kubectl completion | Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2.", "language": "text", "type": "code-example_explanation", "context": "kubectl completion | Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_explain_1835"} {"prompt": "Complete this go code:\n\n# Installing bash completion on macOS using homebrew\n ## If running Bash 3.2 included with macOS\n brew install bash-completion\n ## or, if running Bash 4.1+\n brew install bash-completion@2\n ## If kubectl is installed via homebrew, this should start working immediately\n 
## If you've installed via other means, you may need add the completion to your completion directory\n kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl\n \n \n # Installing bash completion on Linux\n ## If bash-completion is not installed on Linux, install the 'bash-completion' package\n ## via your distribution's package manager.\n ## Load the kubectl completion code for bash into the current shell\n source <(kubectl completion bash)\n ## Write bash completion code to a file and source it from .bash_profile\n kubectl completion bash > ~/.kube/completion.bash.inc\n printf \"\n # kubectl shell completion\n source '$HOME/.kube/completion.bash.inc'\n \" >> $HOME/.bash_profile\n source $HOME/.bash_profile\n ", "completion": " # Load the kubectl completion code for zsh[1] into the current shell\n source <(kubectl completion zsh)\n # Set the kubectl completion code for zsh[1] to autoload on startup\n kubectl completion zsh > \"${fpath[1]}/_kubectl\"\n \n \n # Load the kubectl completion code for fish[2] into the current shell\n kubectl completion fish | source\n # To load completions for each session, execute once:\n kubectl completion fish > ~/.config/fish/completions/kubectl.fish\n \n # Load the kubectl completion code for powershell into the current shell\n kubectl completion powershell | Out-String | Invoke-Expression\n # Set kubectl completion code for powershell to run on startup\n ## Save completion code to a script and execute in the profile\n kubectl completion powershell > $HOME\\.kube\\completion.ps1\n Add-Content $PROFILE \"$HOME\\.kube\\completion.ps1\"\n ## Execute completion code in the profile\n Add-Content $PROFILE \"if (Get-Command kubectl -ErrorAction SilentlyContinue) {\n kubectl completion powershell | Out-String | Invoke-Expression\n }\"\n ## Add completion code directly to the $PROFILE script\n kubectl completion powershell >> $PROFILE", "language": "go", "type": "code-example", "context": "kubectl completion | help for 
completion", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_completion_1836"} {"prompt": "Explain this go code:\n\n# Installing bash completion on macOS using homebrew\n ## If running Bash 3.2 included with macOS\n brew install bash-completion\n ## or, if running Bash 4.1+\n brew install bash-completion@2\n ## If kubectl is installed via homebrew, this should start working immediately\n ## If you've installed via other means, you may need add the completion to your completion directory\n kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl\n \n \n # Installing bash completion on Linux\n ## If bash-completion is not installed on Linux, install the 'bash-completion' package\n ## via your distribution's package manager.\n ## Load the kubectl completion code for bash into the current shell\n source <(kubectl completion bash)\n ## Write bash completion code to a file and source it from .bash_profile\n kubectl completion bash > ~/.kube/completion.bash.inc\n printf \"\n # kubectl shell completion\n source '$HOME/.kube/completion.bash.inc'\n \" >> $HOME/.bash_profile\n source $HOME/.bash_profile\n \n # Load the kubectl completion code for zsh[1] into the current shell\n source <(kubectl completion zsh)\n # Set the kubectl completion code for zsh[1] to autoload on startup\n kubectl completion zsh > \"${fpath[1]}/_kubectl\"\n \n \n # Load the kubectl completion code for fish[2] into the current shell\n kubectl completion fish | source\n # To load completions for each session, execute once:\n kubectl completion fish > ~/.config/fish/completions/kubectl.fish\n \n # Load the kubectl completion code for powershell into the current shell\n kubectl completion powershell | Out-String | Invoke-Expression\n # Set kubectl completion code for powershell to run on startup\n ## Save completion code to a script and execute in the profile\n kubectl completion powershell > $HOME\\.kube\\completion.ps1\n 
Add-Content $PROFILE \"$HOME\\.kube\\completion.ps1\"\n ## Execute completion code in the profile\n Add-Content $PROFILE \"if (Get-Command kubectl -ErrorAction SilentlyContinue) {\n kubectl completion powershell | Out-String | Invoke-Expression\n }\"\n ## Add completion code directly to the $PROFILE script\n kubectl completion powershell >> $PROFILE", "completion": "This go code kubectl completion | help for completion", "language": "go", "type": "code-example_explanation", "context": "kubectl completion | help for completion", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_explain_1837"} {"prompt": "Complete this text code:\n\nfor macOS:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#enable-shell-autocompletion\n \n for linux:", "completion": " https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-autocompletion\n \n for windows:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#enable-shell-autocompletion", "language": "text", "type": "code-example", "context": "kubectl completion | Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_completion_1838"} {"prompt": "Explain this text code:\n\nfor macOS:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#enable-shell-autocompletion\n \n for linux:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-autocompletion\n \n for windows:\n https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#enable-shell-autocompletion", "completion": "This text code kubectl completion | Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2.", "language": "text", "type": "code-example_explanation", "context": "kubectl completion | Note for zsh users: [1] zsh completions are only supported in versions of 
zsh >= 5.2.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_explain_1839"} {"prompt": "Complete this go code:\n\n# Installing bash completion on macOS using homebrew\n ## If running Bash 3.2 included with macOS\n brew install bash-completion\n ## or, if running Bash 4.1+\n brew install bash-completion@2\n ## If kubectl is installed via homebrew, this should start working immediately\n ## If you've installed via other means, you may need add the completion to your completion directory\n kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl\n \n \n # Installing bash completion on Linux\n ## If bash-completion is not installed on Linux, install the 'bash-completion' package\n ## via your distribution's package manager.\n ## Load the kubectl completion code for bash into the current shell\n source <(kubectl completion bash)\n ## Write bash completion code to a file and source it from .bash_profile\n kubectl completion bash > ~/.kube/completion.bash.inc\n printf \"\n # kubectl shell completion\n source '$HOME/.kube/completion.bash.inc'\n \" >> $HOME/.bash_profile\n source $HOME/.bash_profile\n ", "completion": " # Load the kubectl completion code for zsh[1] into the current shell\n source <(kubectl completion zsh)\n # Set the kubectl completion code for zsh[1] to autoload on startup\n kubectl completion zsh > \"${fpath[1]}/_kubectl\"\n \n \n # Load the kubectl completion code for fish[2] into the current shell\n kubectl completion fish | source\n # To load completions for each session, execute once:\n kubectl completion fish > ~/.config/fish/completions/kubectl.fish\n \n # Load the kubectl completion code for powershell into the current shell\n kubectl completion powershell | Out-String | Invoke-Expression\n # Set kubectl completion code for powershell to run on startup\n ## Save completion code to a script and execute in the profile\n kubectl completion powershell > 
$HOME\\.kube\\completion.ps1\n Add-Content $PROFILE \"$HOME\\.kube\\completion.ps1\"\n ## Execute completion code in the profile\n Add-Content $PROFILE \"if (Get-Command kubectl -ErrorAction SilentlyContinue) {\n kubectl completion powershell | Out-String | Invoke-Expression\n }\"\n ## Add completion code directly to the $PROFILE script\n kubectl completion powershell >> $PROFILE", "language": "go", "type": "code-example", "context": "kubectl completion | help for completion", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_completion_1840"} {"prompt": "Explain this go code:\n\n# Installing bash completion on macOS using homebrew\n ## If running Bash 3.2 included with macOS\n brew install bash-completion\n ## or, if running Bash 4.1+\n brew install bash-completion@2\n ## If kubectl is installed via homebrew, this should start working immediately\n ## If you've installed via other means, you may need add the completion to your completion directory\n kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl\n \n \n # Installing bash completion on Linux\n ## If bash-completion is not installed on Linux, install the 'bash-completion' package\n ## via your distribution's package manager.\n ## Load the kubectl completion code for bash into the current shell\n source <(kubectl completion bash)\n ## Write bash completion code to a file and source it from .bash_profile\n kubectl completion bash > ~/.kube/completion.bash.inc\n printf \"\n # kubectl shell completion\n source '$HOME/.kube/completion.bash.inc'\n \" >> $HOME/.bash_profile\n source $HOME/.bash_profile\n \n # Load the kubectl completion code for zsh[1] into the current shell\n source <(kubectl completion zsh)\n # Set the kubectl completion code for zsh[1] to autoload on startup\n kubectl completion zsh > \"${fpath[1]}/_kubectl\"\n \n \n # Load the kubectl completion code for fish[2] into the current shell\n kubectl completion fish | 
source\n # To load completions for each session, execute once:\n kubectl completion fish > ~/.config/fish/completions/kubectl.fish\n \n # Load the kubectl completion code for powershell into the current shell\n kubectl completion powershell | Out-String | Invoke-Expression\n # Set kubectl completion code for powershell to run on startup\n ## Save completion code to a script and execute in the profile\n kubectl completion powershell > $HOME\\.kube\\completion.ps1\n Add-Content $PROFILE \"$HOME\\.kube\\completion.ps1\"\n ## Execute completion code in the profile\n Add-Content $PROFILE \"if (Get-Command kubectl -ErrorAction SilentlyContinue) {\n kubectl completion powershell | Out-String | Invoke-Expression\n }\"\n ## Add completion code directly to the $PROFILE script\n kubectl completion powershell >> $PROFILE", "completion": "This go code kubectl completion | help for completion", "language": "go", "type": "code-example_explanation", "context": "kubectl completion | help for completion", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/", "id": "k8s_00066_code_explain_1841"} {"prompt": "Complete this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_completion_1842"} {"prompt": "Explain this yaml code:\n\n# You need to change this example to match the actual runtime 
name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "completion": "This yaml code Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1843"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true", "completion": " tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "language": "yaml", "type": "pod-definition", "context": "Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_completion_1844"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true\n tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "completion": "This yaml code Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1845"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.overhead}'", "completion": "This bash code Pod Overhead | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Overhead | The output is:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1846"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'", "completion": "This bash code Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1847"} {"prompt": "Explain this text code:\n\nmap[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]", "completion": "This text code Pod Overhead | Check this against what is observed by the node:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | Check this against what is observed by the node:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1848"} {"prompt": "Explain this text code:\n\nNamespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m", "completion": "This text code Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. 
In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an a", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an a", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1849"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nPOD_ID=\"$(sudo crictl pods --name test-pod -q)\"", "completion": "This text code Pod Overhead | From this, you can determine the cgroup path for the Pod:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | From this, you can determine the cgroup path for the Pod:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1850"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nsudo crictl inspectp -o=json $POD_ID | grep cgroupsPath", "completion": "This text code Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1851"} {"prompt": "Explain this text code:\n\n\"cgroupsPath\": \"/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a\"", "completion": "This text code Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. 
Verify the Pod level cgroup setting for memory:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. Verify the Pod level cgroup setting for memory:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1852"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled.\n# Also, change the name of the cgroup to match the cgroup allocated for your pod.\n cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes", "completion": "This text code Pod Overhead | This is 320 MiB, as expected:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | This is 320 MiB, as expected:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1853"} {"prompt": "Complete this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_completion_1854"} {"prompt": "Explain this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: 
RuntimeClass\nmetadata:\n name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "completion": "This yaml code Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1855"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true", "completion": " tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "language": "yaml", "type": "pod-definition", "context": "Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_completion_1856"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true\n tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "completion": "This yaml code Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1857"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.overhead}'", "completion": "This bash code Pod Overhead | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Overhead | The output is:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1858"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'", "completion": "This bash code Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1859"} {"prompt": "Explain this text code:\n\nmap[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]", "completion": "This text code Pod Overhead | Check this against what is observed by the node:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | Check this against what is observed by the node:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1860"} {"prompt": "Explain this text code:\n\nNamespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m", "completion": "This text code Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. 
In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an a", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an a", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1861"} {"prompt": "Explain this bash code:\n\n# Run this on the node where the Pod is scheduled\nPOD_ID=\"$(sudo crictl pods --name test-pod -q)\"", "completion": "This bash code Pod Overhead | From this, you can determine the cgroup path for the Pod:", "language": "bash", "type": "shell-command_explanation", "context": "Pod Overhead | From this, you can determine the cgroup path for the Pod:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1862"} {"prompt": "Explain this bash code:\n\n# Run this on the node where the Pod is scheduled\nsudo crictl inspectp -o=json $POD_ID | grep cgroupsPath", "completion": "This bash code Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "language": "bash", "type": "shell-command_explanation", "context": "Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1863"} {"prompt": "Explain this text code:\n\n\"cgroupsPath\": \"/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a\"", "completion": "This text code Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. 
Verify the Pod level cgroup setting for memory:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. Verify the Pod level cgroup setting for memory:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1864"} {"prompt": "Explain this bash code:\n\n# Run this on the node where the Pod is scheduled.\n# Also, change the name of the cgroup to match the cgroup allocated for your pod.\n cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes", "completion": "This bash code Pod Overhead | This is 320 MiB, as expected:", "language": "bash", "type": "shell-command_explanation", "context": "Pod Overhead | This is 320 MiB, as expected:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1865"} {"prompt": "Complete this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_completion_1866"} {"prompt": "Explain this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: 
RuntimeClass\nmetadata:\n name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "completion": "This yaml code Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1867"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true", "completion": " tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "language": "yaml", "type": "pod-definition", "context": "Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_completion_1868"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true\n tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "completion": "This yaml code Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1869"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.overhead}'", "completion": "This bash code Pod Overhead | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Overhead | The output is:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1870"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'", "completion": "This bash code Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1871"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nPOD_ID=\"$(sudo crictl pods --name test-pod -q)\"", "completion": "This text code Pod Overhead | From this, you can determine the cgroup path for the Pod:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | From this, you can determine the cgroup path for the Pod:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1872"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nsudo crictl inspectp -o=json $POD_ID | grep cgroupsPath", "completion": "This text code Pod Overhead | The resulting cgroup path includes the Pod's pause container. 
The Pod level cgroup is one directory above.", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1873"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled.\n# Also, change the name of the cgroup to match the cgroup allocated for your pod.\n cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes", "completion": "This text code Pod Overhead | This is 320 MiB, as expected:", "language": "text", "type": "code-example_explanation", "context": "Pod Overhead | This is 320 MiB, as expected:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-overhead/", "id": "k8s_00067_code_explain_1874"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n \"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n \"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",", "completion": " \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n \"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n \"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n 
\"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1875"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n \"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n \"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",\n \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n \"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n \"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n \"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "completion": "This json code Restrict a Container's Syscalls with seccomp | Run these commands:", "language": "json", "type": "api-response_explanation", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1876"} {"prompt": "Complete this text code:\n\nmkdir ./profiles\ncurl -L -o profiles/audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json", "completion": "curl -L -o profiles/violation.json 
https://k8s.io/examples/pods/security/seccomp/profiles/violation.json\ncurl -L -o profiles/fine-grained.json https://k8s.io/examples/pods/security/seccomp/profiles/fine-grained.json\nls profiles", "language": "text", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1877"} {"prompt": "Explain this text code:\n\nmkdir ./profiles\ncurl -L -o profiles/audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json\ncurl -L -o profiles/violation.json https://k8s.io/examples/pods/security/seccomp/profiles/violation.json\ncurl -L -o profiles/fine-grained.json https://k8s.io/examples/pods/security/seccomp/profiles/fine-grained.json\nls profiles", "completion": "This text code Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1878"} {"prompt": "Complete this yaml code:\n\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:", "completion": "- role: control-plane\n extraMounts:\n - hostPath: \"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1879"} {"prompt": "Explain this yaml code:\n\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:\n- role: control-plane\n extraMounts:\n - hostPath: 
\"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1880"} {"prompt": "Explain this text code:\n\ncurl -L -O https://k8s.io/examples/pods/security/seccomp/kind.yaml", "completion": "This text code Restrict a Container's Syscalls with seccomp | You can set a specific Kubernetes version by setting the node's container image. See Nodes within the kind documentation about configuration for more details on this. This tutorial assumes you are usi", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can set a specific Kubernetes version by setting the node's container image. See Nodes within the kind documentation about configuration for more details on this. This tutorial assumes you are usi", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1881"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n6a96207fed4b kindest/node:v1.18.2 \"/usr/local/bin/entr…\" 27 seconds ago Up 24 seconds 127.0.0.1:42223->6443/tcp kind-control-plane", "completion": "This text code Restrict a Container's Syscalls with seccomp | If observing the filesystem of that container, you should see that the profiles/ directory has been successfully loaded into the default seccomp path of the kubelet. 
Use docker exec to run a command i", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | If observing the filesystem of that container, you should see that the profiles/ directory has been successfully loaded into the default seccomp path of the kubelet. Use docker exec to run a command i", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1882"} {"prompt": "Explain this text code:\n\n# Change 6a96207fed4b to the container ID you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b ls /var/lib/kubelet/seccomp/profiles", "completion": "This text code Restrict a Container's Syscalls with seccomp | You have verified that these seccomp profiles are available to the kubelet running within kind.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You have verified that these seccomp profiles are available to the kubelet running within kind.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1883"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:", "completion": " seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1884"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: 
hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create that Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1885"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1886"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndefault-pod 1/1 Running 0 20s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Delete the Pod before moving to the next section:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Delete the Pod before moving to the next section:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1887"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a 
Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1888"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1889"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/audit-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | This profile does not restrict any syscalls, so the Pod should start successfully.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | This profile does not restrict any syscalls, so the Pod should start successfully.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1890"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\naudit-pod 1/1 Running 0 30s", "completion": "This text code Restrict a Container's Syscalls with seccomp | In order to be able to interact with this endpoint exposed by this container, create a NodePort Service that allows access to the endpoint from inside the kind control plane container.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls 
with seccomp | In order to be able to interact with this endpoint exposed by this container, create a NodePort Service that allows access to the endpoint from inside the kind control plane container.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1891"} {"prompt": "Explain this bash code:\n\nkubectl expose pod audit-pod --type NodePort --port 5678", "completion": "This bash code Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1892"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\naudit-pod NodePort 10.111.36.142 5678:32373/TCP 72s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Now you can use curl to access that endpoint from inside the kind control plane container, at the port exposed by this Service. Use docker exec to run the curl command within the container belonging t", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Now you can use curl to access that endpoint from inside the kind control plane container, at the port exposed by this Service. 
Use docker exec to run the curl command within the container belonging t", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1893"} {"prompt": "Explain this text code:\n\n# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b curl localhost:32373", "completion": "This text code Restrict a Container's Syscalls with seccomp | You can see that the process is running, but what syscalls did it actually make? Because this Pod is running in a local cluster, you should be able to see those in /var/log/syslog on your local system", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can see that the process is running, but what syscalls did it actually make? Because this Pod is running in a local cluster, you should be able to see those in /var/log/syslog on your local system", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1894"} {"prompt": "Explain this text code:\n\n# The log path on your computer might be different from \"/var/log/syslog\"\ntail -f /var/log/syslog | grep 'http-echo'", "completion": "This text code Restrict a Container's Syscalls with seccomp | You should already see some logs of syscalls made by http-echo, and if you run curl again inside the control plane container you will see more output written to the log.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should already see some logs of syscalls made by http-echo, and if you run curl again inside the control plane container you will see more output written to the log.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1895"} {"prompt": "Complete this text code:\n\nJul 6 15:37:40 my-machine kernel: [369128.669452] audit: type=1326 
audit(1594067860.484:14536): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=51 compat=0 ip=0x46fe1f code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669453] audit: type=1326 audit(1594067860.484:14537): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=54 compat=0 ip=0x46fdba code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669455] audit: type=1326 audit(1594067860.484:14538): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669456] audit: type=1326 audit(1594067860.484:14539): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=288 compat=0 ip=0x46fdba code=0x7ffc0000", "completion": "Jul 6 15:37:40 my-machine kernel: [369128.669517] audit: type=1326 audit(1594067860.484:14540): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=0 compat=0 ip=0x46fd44 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669519] audit: type=1326 audit(1594067860.484:14541): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671648] audit: type=1326 audit(1594067920.488:14559): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671726] audit: type=1326 audit(1594067920.488:14560): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 
code=0x7ffc0000", "language": "text", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | You can begin to understand the syscalls required by the http-echo process by looking at the syscall= entry on each line. While these are unlikely to encompass all syscalls it uses, it can serve as a", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1896"} {"prompt": "Explain this text code:\n\nJul 6 15:37:40 my-machine kernel: [369128.669452] audit: type=1326 audit(1594067860.484:14536): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=51 compat=0 ip=0x46fe1f code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669453] audit: type=1326 audit(1594067860.484:14537): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=54 compat=0 ip=0x46fdba code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669455] audit: type=1326 audit(1594067860.484:14538): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669456] audit: type=1326 audit(1594067860.484:14539): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=288 compat=0 ip=0x46fdba code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669517] audit: type=1326 audit(1594067860.484:14540): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=0 compat=0 ip=0x46fd44 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669519] audit: type=1326 audit(1594067860.484:14541): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 
code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671648] audit: type=1326 audit(1594067920.488:14559): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671726] audit: type=1326 audit(1594067920.488:14560): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000", "completion": "This text code Restrict a Container's Syscalls with seccomp | You can begin to understand the syscalls required by the http-echo process by looking at the syscall= entry on each line. While these are unlikely to encompass all syscalls it uses, it can serve as a", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can begin to understand the syscalls required by the http-echo process by looking at the syscall= entry on each line. 
While these are unlikely to encompass all syscalls it uses, it can serve as a", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1897"} {"prompt": "Explain this bash code:\n\nkubectl delete service audit-pod --wait\nkubectl delete pod audit-pod --wait --now", "completion": "This bash code Restrict a Container's Syscalls with seccomp | For demonstration, apply a profile to the Pod that does not allow for any syscalls.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | For demonstration, apply a profile to the Pod that does not allow for any syscalls.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1898"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: violation-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1899"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: violation-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", 
"language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1900"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/violation-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | The Pod creates, but there is an issue. If you check the status of the Pod, you should see that it failed to start.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod creates, but there is an issue. If you check the status of the Pod, you should see that it failed to start.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1901"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nviolation-pod 0/1 CrashLoopBackOff 1 6s", "completion": "This text code Restrict a Container's Syscalls with seccomp | As seen in the previous example, the http-echo process requires quite a few syscalls. Here seccomp has been instructed to error on any syscall by setting \"defaultAction\": \"SCMP_ACT_ERRNO\". This is ext", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | As seen in the previous example, the http-echo process requires quite a few syscalls. Here seccomp has been instructed to error on any syscall by setting \"defaultAction\": \"SCMP_ACT_ERRNO\". 
This is ext", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1902"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1903"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1904"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/fine-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "source": 
"https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1905"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfine-pod 1/1 Running 0 30s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Open up a new terminal window and use tail to monitor for log entries that mention calls from http-echo:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Open up a new terminal window and use tail to monitor for log entries that mention calls from http-echo:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1906"} {"prompt": "Explain this text code:\n\n# The log path on your computer might be different from \"/var/log/syslog\"\ntail -f /var/log/syslog | grep 'http-echo'", "completion": "This text code Restrict a Container's Syscalls with seccomp | Next, expose the Pod with a NodePort Service:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Next, expose the Pod with a NodePort Service:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1907"} {"prompt": "Explain this bash code:\n\nkubectl expose pod fine-pod --type NodePort --port 5678", "completion": "This bash code Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1908"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfine-pod NodePort 10.111.36.142 5678:32373/TCP 72s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Use 
curl to access that endpoint from inside the kind control plane container:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Use curl to access that endpoint from inside the kind control plane container:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1909"} {"prompt": "Explain this text code:\n\n# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b curl localhost:32373", "completion": "This text code Restrict a Container's Syscalls with seccomp | You should see no output in the syslog. This is because the profile allowed all necessary syscalls and specified that an error should occur if one outside of the list is invoked. This is an ideal situ", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should see no output in the syslog. This is because the profile allowed all necessary syscalls and specified that an error should occur if one outside of the list is invoked. 
This is an ideal situ", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1910"} {"prompt": "Explain this bash code:\n\nkubectl delete service fine-pod --wait\nkubectl delete pod fine-pod --wait --now", "completion": "This bash code Restrict a Container's Syscalls with seccomp | To use seccomp profile defaulting, you must run the kubelet with the --seccomp-default command line flag enabled for each node where you want to use it.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | To use seccomp profile defaulting, you must run the kubelet with the --seccomp-default command line flag enabled for each node where you want to use it.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1911"} {"prompt": "Complete this yaml code:\n\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n - role: control-plane\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:", "completion": " kubeletExtraArgs:\n seccomp-default: \"true\" \n - role: worker\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1912"} {"prompt": "Explain this yaml code:\n\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n - role: control-plane\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: 
JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\" \n - role: worker\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\"", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1913"} {"prompt": "Explain this bash code:\n\nkubectl run --rm -it --restart=Never --image=alpine alpine -- sh", "completion": "This bash code Restrict a Container's Syscalls with seccomp | Should now have the default seccomp profile attached. This can be verified by using docker exec to run crictl inspect for the container on the kind worker:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | Should now have the default seccomp profile attached. 
This can be verified by using docker exec to run crictl inspect for the container on the kind worker:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1914"} {"prompt": "Explain this bash code:\n\ndocker exec -it kind-worker bash -c \\\n 'crictl inspect $(crictl ps --name=alpine -q) | jq .info.runtimeSpec.linux.seccomp'", "completion": "This bash code Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "language": "bash", "type": "shell-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1915"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\"],\n \"syscalls\": [", "completion": " {\n \"names\": [\"...\"]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1916"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\"],\n \"syscalls\": [\n {\n \"names\": [\"...\"]\n }\n ]\n}", "completion": "This json code Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "language": "json", "type": "api-response_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1917"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n 
\"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n \"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",", "completion": " \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n \"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n \"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n \"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1918"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n \"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n \"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",\n \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n 
\"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n \"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n \"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "completion": "This json code Restrict a Container's Syscalls with seccomp | Run these commands:", "language": "json", "type": "api-response_explanation", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1919"} {"prompt": "Complete this shell code:\n\nmkdir ./profiles\ncurl -L -o profiles/audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json", "completion": "curl -L -o profiles/violation.json https://k8s.io/examples/pods/security/seccomp/profiles/violation.json\ncurl -L -o profiles/fine-grained.json https://k8s.io/examples/pods/security/seccomp/profiles/fine-grained.json\nls profiles", "language": "shell", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1920"} {"prompt": "Explain this shell code:\n\nmkdir ./profiles\ncurl -L -o profiles/audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json\ncurl -L -o profiles/violation.json https://k8s.io/examples/pods/security/seccomp/profiles/violation.json\ncurl -L -o profiles/fine-grained.json https://k8s.io/examples/pods/security/seccomp/profiles/fine-grained.json\nls profiles", "completion": "This shell code Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a 
Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1921"} {"prompt": "Complete this yaml code:\n\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:", "completion": "- role: control-plane\n extraMounts:\n - hostPath: \"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1922"} {"prompt": "Explain this yaml code:\n\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:\n- role: control-plane\n extraMounts:\n - hostPath: \"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1923"} {"prompt": "Explain this shell code:\n\ncurl -L -O https://k8s.io/examples/pods/security/seccomp/kind.yaml", "completion": "This shell code Restrict a Container's Syscalls with seccomp | You can set a specific Kubernetes version by setting the node's container image. See Nodes within the kind documentation about configuration for more details on this. 
This tutorial assumes you are usi", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can set a specific Kubernetes version by setting the node's container image. See Nodes within the kind documentation about configuration for more details on this. This tutorial assumes you are usi", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1924"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n6a96207fed4b kindest/node:v1.18.2 \"/usr/local/bin/entr…\" 27 seconds ago Up 24 seconds 127.0.0.1:42223->6443/tcp kind-control-plane", "completion": "This text code Restrict a Container's Syscalls with seccomp | If observing the filesystem of that container, you should see that the profiles/ directory has been successfully loaded into the default seccomp path of the kubelet. Use docker exec to run a command i", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | If observing the filesystem of that container, you should see that the profiles/ directory has been successfully loaded into the default seccomp path of the kubelet. 
Use docker exec to run a command i", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1925"} {"prompt": "Explain this shell code:\n\n# Change 6a96207fed4b to the container ID you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b ls /var/lib/kubelet/seccomp/profiles", "completion": "This shell code Restrict a Container's Syscalls with seccomp | You have verified that these seccomp profiles are available to the kubelet running within kind.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You have verified that these seccomp profiles are available to the kubelet running within kind.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1926"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:", "completion": " seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1927"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create that Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's 
Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1928"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml", "completion": "This shell code Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1929"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndefault-pod 1/1 Running 0 20s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Delete the Pod before moving to the next section:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Delete the Pod before moving to the next section:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1930"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1931"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n 
securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1932"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/audit-pod.yaml", "completion": "This shell code Restrict a Container's Syscalls with seccomp | This profile does not restrict any syscalls, so the Pod should start successfully.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | This profile does not restrict any syscalls, so the Pod should start successfully.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1933"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\naudit-pod 1/1 Running 0 30s", "completion": "This text code Restrict a Container's Syscalls with seccomp | In order to be able to interact with this endpoint exposed by this container, create a NodePort Service that allows access to the endpoint from inside the kind control plane container.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | In order to be able to interact with this endpoint exposed by this container, create a NodePort Service that allows access to the endpoint from inside the kind control plane container.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1934"} 
{"prompt": "Explain this shell code:\n\nkubectl expose pod audit-pod --type NodePort --port 5678", "completion": "This shell code Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1935"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\naudit-pod NodePort 10.111.36.142 5678:32373/TCP 72s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Now you can use curl to access that endpoint from inside the kind control plane container, at the port exposed by this Service. Use docker exec to run the curl command within the container belonging t", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Now you can use curl to access that endpoint from inside the kind control plane container, at the port exposed by this Service. Use docker exec to run the curl command within the container belonging t", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1936"} {"prompt": "Explain this shell code:\n\n# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b curl localhost:32373", "completion": "This shell code Restrict a Container's Syscalls with seccomp | You can see that the process is running, but what syscalls did it actually make? 
Because this Pod is running in a local cluster, you should be able to see those in /var/log/syslog on your local system", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can see that the process is running, but what syscalls did it actually make? Because this Pod is running in a local cluster, you should be able to see those in /var/log/syslog on your local system", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1937"} {"prompt": "Explain this shell code:\n\n# The log path on your computer might be different from \"/var/log/syslog\"\ntail -f /var/log/syslog | grep 'http-echo'", "completion": "This shell code Restrict a Container's Syscalls with seccomp | You should already see some logs of syscalls made by http-echo, and if you run curl again inside the control plane container you will see more output written to the log.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should already see some logs of syscalls made by http-echo, and if you run curl again inside the control plane container you will see more output written to the log.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1938"} {"prompt": "Complete this text code:\n\nJul 6 15:37:40 my-machine kernel: [369128.669452] audit: type=1326 audit(1594067860.484:14536): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=51 compat=0 ip=0x46fe1f code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669453] audit: type=1326 audit(1594067860.484:14537): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=54 compat=0 ip=0x46fdba code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669455] audit: type=1326 audit(1594067860.484:14538): 
auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669456] audit: type=1326 audit(1594067860.484:14539): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=288 compat=0 ip=0x46fdba code=0x7ffc0000", "completion": "Jul 6 15:37:40 my-machine kernel: [369128.669517] audit: type=1326 audit(1594067860.484:14540): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=0 compat=0 ip=0x46fd44 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669519] audit: type=1326 audit(1594067860.484:14541): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671648] audit: type=1326 audit(1594067920.488:14559): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671726] audit: type=1326 audit(1594067920.488:14560): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000", "language": "text", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | You can begin to understand the syscalls required by the http-echo process by looking at the syscall= entry on each line. 
While these are unlikely to encompass all syscalls it uses, it can serve as a", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1939"} {"prompt": "Explain this text code:\n\nJul 6 15:37:40 my-machine kernel: [369128.669452] audit: type=1326 audit(1594067860.484:14536): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=51 compat=0 ip=0x46fe1f code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669453] audit: type=1326 audit(1594067860.484:14537): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=54 compat=0 ip=0x46fdba code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669455] audit: type=1326 audit(1594067860.484:14538): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669456] audit: type=1326 audit(1594067860.484:14539): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=288 compat=0 ip=0x46fdba code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669517] audit: type=1326 audit(1594067860.484:14540): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=0 compat=0 ip=0x46fd44 code=0x7ffc0000\nJul 6 15:37:40 my-machine kernel: [369128.669519] audit: type=1326 audit(1594067860.484:14541): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671648] audit: type=1326 audit(1594067920.488:14559): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=270 compat=0 
ip=0x4559b1 code=0x7ffc0000\nJul 6 15:38:40 my-machine kernel: [369188.671726] audit: type=1326 audit(1594067920.488:14560): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm=\"http-echo\" exe=\"/http-echo\" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000", "completion": "This text code Restrict a Container's Syscalls with seccomp | You can begin to understand the syscalls required by the http-echo process by looking at the syscall= entry on each line. While these are unlikely to encompass all syscalls it uses, it can serve as a", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can begin to understand the syscalls required by the http-echo process by looking at the syscall= entry on each line. While these are unlikely to encompass all syscalls it uses, it can serve as a", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1940"} {"prompt": "Explain this shell code:\n\nkubectl delete service audit-pod --wait\nkubectl delete pod audit-pod --wait --now", "completion": "This shell code Restrict a Container's Syscalls with seccomp | For demonstration, apply a profile to the Pod that does not allow for any syscalls.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | For demonstration, apply a profile to the Pod that does not allow for any syscalls.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1941"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: violation-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n 
allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1942"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: violation-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1943"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/violation-pod.yaml", "completion": "This shell code Restrict a Container's Syscalls with seccomp | The Pod creates, but there is an issue. If you check the status of the Pod, you should see that it failed to start.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod creates, but there is an issue. 
If you check the status of the Pod, you should see that it failed to start.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1944"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nviolation-pod 0/1 CrashLoopBackOff 1 6s", "completion": "This text code Restrict a Container's Syscalls with seccomp | As seen in the previous example, the http-echo process requires quite a few syscalls. Here seccomp has been instructed to error on any syscall by setting \"defaultAction\": \"SCMP_ACT_ERRNO\". This is ext", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | As seen in the previous example, the http-echo process requires quite a few syscalls. Here seccomp has been instructed to error on any syscall by setting \"defaultAction\": \"SCMP_ACT_ERRNO\". This is ext", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1945"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1946"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just 
made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1947"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/fine-pod.yaml", "completion": "This shell code Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1948"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfine-pod 1/1 Running 0 30s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Open up a new terminal window and use tail to monitor for log entries that mention calls from http-echo:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Open up a new terminal window and use tail to monitor for log entries that mention calls from http-echo:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1949"} {"prompt": "Explain this shell code:\n\n# The log path on your computer might be different from \"/var/log/syslog\"\ntail -f /var/log/syslog | grep 'http-echo'", "completion": "This shell code Restrict a Container's Syscalls with seccomp | Next, expose the Pod with a NodePort Service:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | 
Next, expose the Pod with a NodePort Service:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1950"} {"prompt": "Explain this shell code:\n\nkubectl expose pod fine-pod --type NodePort --port 5678", "completion": "This shell code Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1951"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfine-pod NodePort 10.111.36.142 5678:32373/TCP 72s", "completion": "This text code Restrict a Container's Syscalls with seccomp | Use curl to access that endpoint from inside the kind control plane container:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Use curl to access that endpoint from inside the kind control plane container:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1952"} {"prompt": "Explain this shell code:\n\n# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b curl localhost:32373", "completion": "This shell code Restrict a Container's Syscalls with seccomp | You should see no output in the syslog. This is because the profile allowed all necessary syscalls and specified that an error should occur if one outside of the list is invoked. This is an ideal situ", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should see no output in the syslog. 
This is because the profile allowed all necessary syscalls and specified that an error should occur if one outside of the list is invoked. This is an ideal situ", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1953"} {"prompt": "Explain this shell code:\n\nkubectl delete service fine-pod --wait\nkubectl delete pod fine-pod --wait --now", "completion": "This shell code Restrict a Container's Syscalls with seccomp | To use seccomp profile defaulting, you must run the kubelet with the --seccomp-default command line flag enabled for each node where you want to use it.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | To use seccomp profile defaulting, you must run the kubelet with the --seccomp-default command line flag enabled for each node where you want to use it.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1954"} {"prompt": "Complete this yaml code:\n\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n - role: control-plane\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:", "completion": " kubeletExtraArgs:\n seccomp-default: \"true\" \n - role: worker\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1955"} {"prompt": "Explain this yaml code:\n\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n - role: control-plane\n 
image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\" \n - role: worker\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\"", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1956"} {"prompt": "Explain this shell code:\n\nkubectl run --rm -it --restart=Never --image=alpine alpine -- sh", "completion": "This shell code Restrict a Container's Syscalls with seccomp | Should now have the default seccomp profile attached. This can be verified by using docker exec to run crictl inspect for the container on the kind worker:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Should now have the default seccomp profile attached. 
This can be verified by using docker exec to run crictl inspect for the container on the kind worker:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1957"} {"prompt": "Explain this shell code:\n\ndocker exec -it kind-worker bash -c \\\n 'crictl inspect $(crictl ps --name=alpine -q) | jq .info.runtimeSpec.linux.seccomp'", "completion": "This shell code Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1958"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\"],\n \"syscalls\": [", "completion": " {\n \"names\": [\"...\"]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1959"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\"],\n \"syscalls\": [\n {\n \"names\": [\"...\"]\n }\n ]\n}", "completion": "This json code Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "language": "json", "type": "api-response_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1960"} {"prompt": "Complete this text code:\n\npods/security/seccomp/profiles/audit.json\n{", "completion": " \"defaultAction\": \"SCMP_ACT_LOG\"\n}", 
"language": "text", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | pods/security/seccomp/profiles/violation.json { \"defaultAction\": \"SCMP_ACT_ERRNO\" }", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1961"} {"prompt": "Explain this text code:\n\npods/security/seccomp/profiles/audit.json\n{\n \"defaultAction\": \"SCMP_ACT_LOG\"\n}", "completion": "This text code Restrict a Container's Syscalls with seccomp | pods/security/seccomp/profiles/violation.json { \"defaultAction\": \"SCMP_ACT_ERRNO\" }", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | pods/security/seccomp/profiles/violation.json { \"defaultAction\": \"SCMP_ACT_ERRNO\" }", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1962"} {"prompt": "Complete this text code:\n\npods/security/seccomp/profiles/violation.json\n{", "completion": " \"defaultAction\": \"SCMP_ACT_ERRNO\"\n}", "language": "text", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | pods/security/seccomp/profiles/fine-grained.json { \"defaultAction\": \"SCMP_ACT_ERRNO\", \"architectures\": [ \"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\" ],", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1963"} {"prompt": "Explain this text code:\n\npods/security/seccomp/profiles/violation.json\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\"\n}", "completion": "This text code Restrict a Container's Syscalls with seccomp | pods/security/seccomp/profiles/fine-grained.json { \"defaultAction\": \"SCMP_ACT_ERRNO\", \"architectures\": [ \"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\" ],", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | pods/security/seccomp/profiles/fine-grained.json { 
\"defaultAction\": \"SCMP_ACT_ERRNO\", \"architectures\": [ \"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\" ],", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1964"} {"prompt": "Complete this text code:\n\npods/security/seccomp/profiles/fine-grained.json\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n \"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n \"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",", "completion": " \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n \"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n \"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n \"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "language": "text", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1965"} {"prompt": "Explain this text code:\n\npods/security/seccomp/profiles/fine-grained.json\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n \"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n 
\"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",\n \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n \"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n \"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n \"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "completion": "This text code Restrict a Container's Syscalls with seccomp | Run these commands:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1966"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n \"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n \"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",", "completion": " \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n \"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n 
\"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n \"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1967"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\n \"SCMP_ARCH_X86_64\",\n \"SCMP_ARCH_X86\",\n \"SCMP_ARCH_X32\"\n ],\n \"syscalls\": [\n {\n \"names\": [\n \"accept4\",\n \"epoll_wait\",\n \"pselect6\",\n \"futex\",\n \"madvise\",\n \"epoll_ctl\",\n \"getsockname\",\n \"setsockopt\",\n \"vfork\",\n \"mmap\",\n \"read\",\n \"write\",\n \"close\",\n \"arch_prctl\",\n \"sched_getaffinity\",\n \"munmap\",\n \"brk\",\n \"rt_sigaction\",\n \"rt_sigprocmask\",\n \"sigaltstack\",\n \"gettid\",\n \"clone\",\n \"bind\",\n \"socket\",\n \"openat\",\n \"readlinkat\",\n \"exit_group\",\n \"epoll_create1\",\n \"listen\",\n \"rt_sigreturn\",\n \"sched_yield\",\n \"clock_gettime\",\n \"connect\",\n \"dup2\",\n \"epoll_pwait\",\n \"execve\",\n \"exit\",\n \"fcntl\",\n \"getpid\",\n \"getuid\",\n \"ioctl\",\n \"mprotect\",\n \"nanosleep\",\n \"open\",\n \"poll\",\n \"recvfrom\",\n \"sendto\",\n \"set_tid_address\",\n \"setitimer\",\n \"writev\",\n \"fstatfs\",\n \"getdents64\",\n \"pipe2\",\n \"getrlimit\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "completion": "This json code Restrict a Container's Syscalls with seccomp | Run these commands:", "language": "json", "type": "api-response_explanation", "context": "Restrict a Container's Syscalls with seccomp | Run these commands:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1968"} {"prompt": "Complete this text 
code:\n\nmkdir ./profiles\ncurl -L -o profiles/audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json", "completion": "curl -L -o profiles/violation.json https://k8s.io/examples/pods/security/seccomp/profiles/violation.json\ncurl -L -o profiles/fine-grained.json https://k8s.io/examples/pods/security/seccomp/profiles/fine-grained.json\nls profiles", "language": "text", "type": "code-example", "context": "Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1969"} {"prompt": "Explain this text code:\n\nmkdir ./profiles\ncurl -L -o profiles/audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json\ncurl -L -o profiles/violation.json https://k8s.io/examples/pods/security/seccomp/profiles/violation.json\ncurl -L -o profiles/fine-grained.json https://k8s.io/examples/pods/security/seccomp/profiles/fine-grained.json\nls profiles", "completion": "This text code Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should see three profiles listed at the end of the final step:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1970"} {"prompt": "Complete this yaml code:\n\npods/security/seccomp/kind.yaml\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:", "completion": "- role: control-plane\n extraMounts:\n - hostPath: \"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": 
"https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1971"} {"prompt": "Explain this yaml code:\n\npods/security/seccomp/kind.yaml\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:\n- role: control-plane\n extraMounts:\n - hostPath: \"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1972"} {"prompt": "Complete this yaml code:\n\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:", "completion": "- role: control-plane\n extraMounts:\n - hostPath: \"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1973"} {"prompt": "Explain this yaml code:\n\napiVersion: kind.x-k8s.io/v1alpha4\nkind: Cluster\nnodes:\n- role: control-plane\n extraMounts:\n - hostPath: \"./profiles\"\n containerPath: \"/var/lib/kubelet/seccomp/profiles\"", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Syscalls with seccomp | Download that example kind configuration, and save it to a file named kind.yaml:", "source": 
"https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1974"} {"prompt": "Explain this text code:\n\ncurl -L -O https://k8s.io/examples/pods/security/seccomp/kind.yaml", "completion": "This text code Restrict a Container's Syscalls with seccomp | You can set a specific Kubernetes version by setting the node's container image. See Nodes within the kind documentation about configuration for more details on this. This tutorial assumes you are usi", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can set a specific Kubernetes version by setting the node's container image. See Nodes within the kind documentation about configuration for more details on this. This tutorial assumes you are usi", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1975"} {"prompt": "Explain this text code:\n\n# Change 6a96207fed4b to the container ID you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b ls /var/lib/kubelet/seccomp/profiles", "completion": "This text code Restrict a Container's Syscalls with seccomp | You have verified that these seccomp profiles are available to the kubelet running within kind.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You have verified that these seccomp profiles are available to the kubelet running within kind.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1976"} {"prompt": "Complete this yaml code:\n\npods/security/seccomp/ga/default-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:", "completion": " seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: 
false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1977"} {"prompt": "Explain this yaml code:\n\npods/security/seccomp/ga/default-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create that Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1978"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:", "completion": " seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1979"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-pod\n labels:\n app: default-pod\nspec:\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some more syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": 
"This yaml code Restrict a Container's Syscalls with seccomp | Create that Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create that Pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1980"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1981"} {"prompt": "Complete this yaml code:\n\npods/security/seccomp/ga/audit-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n securityContext:", "completion": " seccompProfile:\n type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1982"} {"prompt": "Explain this yaml code:\n\npods/security/seccomp/ga/audit-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: 
false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1983"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1984"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: audit-pod\n labels:\n app: audit-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/audit.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1985"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/audit-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | This 
profile does not restrict any syscalls, so the Pod should start successfully.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | This profile does not restrict any syscalls, so the Pod should start successfully.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1986"} {"prompt": "Explain this bash code:\n\nkubectl expose pod audit-pod --type NodePort --port 5678", "completion": "This bash code Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1987"} {"prompt": "Explain this text code:\n\n# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b curl localhost:32373", "completion": "This text code Restrict a Container's Syscalls with seccomp | You can see that the process is running, but what syscalls did it actually make? Because this Pod is running in a local cluster, you should be able to see those in /var/log/syslog on your local system", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can see that the process is running, but what syscalls did it actually make? 
Because this Pod is running in a local cluster, you should be able to see those in /var/log/syslog on your local system", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1988"} {"prompt": "Explain this text code:\n\n# The log path on your computer might be different from \"/var/log/syslog\"\ntail -f /var/log/syslog | grep 'http-echo'", "completion": "This text code Restrict a Container's Syscalls with seccomp | You should already see some logs of syscalls made by http-echo, and if you run curl again inside the control plane container you will see more output written to the log.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should already see some logs of syscalls made by http-echo, and if you run curl again inside the control plane container you will see more output written to the log.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1989"} {"prompt": "Explain this bash code:\n\nkubectl delete service audit-pod --wait\nkubectl delete pod audit-pod --wait --now", "completion": "This bash code Restrict a Container's Syscalls with seccomp | For demonstration, apply a profile to the Pod that does not allow for any syscalls.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | For demonstration, apply a profile to the Pod that does not allow for any syscalls.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1990"} {"prompt": "Complete this yaml code:\n\npods/security/seccomp/ga/violation-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: violation-pod\nspec:\n securityContext:", "completion": " seccompProfile:\n type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n 
args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1991"} {"prompt": "Explain this yaml code:\n\npods/security/seccomp/ga/violation-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: violation-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1992"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: violation-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1993"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: violation-pod\n labels:\n app: 
violation-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/violation.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Attempt to create the Pod in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1994"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/violation-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | The Pod creates, but there is an issue. If you check the status of the Pod, you should see that it failed to start.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod creates, but there is an issue. 
If you check the status of the Pod, you should see that it failed to start.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1995"} {"prompt": "Complete this yaml code:\n\npods/security/seccomp/ga/fine-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:", "completion": " seccompProfile:\n type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1996"} {"prompt": "Explain this yaml code:\n\npods/security/seccomp/ga/fine-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1997"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:\n seccompProfile:", "completion": " type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: 
hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_1998"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fine-pod\n labels:\n app: fine-pod\nspec:\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: profiles/fine-grained.json\n containers:\n - name: test-container\n image: hashicorp/http-echo:1.0\n args:\n - \"-text=just made some syscalls!\"\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Syscalls with seccomp | Create the Pod in your cluster:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_1999"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/fine-pod.yaml", "completion": "This bash code Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | The Pod should be showing as having started successfully:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2000"} {"prompt": "Explain this text code:\n\n# The log path on your computer might be different from \"/var/log/syslog\"\ntail -f /var/log/syslog | grep 'http-echo'", "completion": "This text code Restrict a Container's Syscalls with seccomp | Next, expose the Pod with a NodePort Service:", "language": "text", "type": 
"code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | Next, expose the Pod with a NodePort Service:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2001"} {"prompt": "Explain this bash code:\n\nkubectl expose pod fine-pod --type NodePort --port 5678", "completion": "This bash code Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | Check what port the Service has been assigned on the node:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2002"} {"prompt": "Explain this text code:\n\n# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from \"docker ps\"\ndocker exec -it 6a96207fed4b curl localhost:32373", "completion": "This text code Restrict a Container's Syscalls with seccomp | You should see no output in the syslog. This is because the profile allowed all necessary syscalls and specified that an error should occur if one outside of the list is invoked. This is an ideal situ", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Syscalls with seccomp | You should see no output in the syslog. This is because the profile allowed all necessary syscalls and specified that an error should occur if one outside of the list is invoked. 
This is an ideal situ", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2003"} {"prompt": "Explain this bash code:\n\nkubectl delete service fine-pod --wait\nkubectl delete pod fine-pod --wait --now", "completion": "This bash code Restrict a Container's Syscalls with seccomp | To use seccomp profile defaulting, you must run the kubelet with the --seccomp-default command line flag enabled for each node where you want to use it.", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | To use seccomp profile defaulting, you must run the kubelet with the --seccomp-default command line flag enabled for each node where you want to use it.", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2004"} {"prompt": "Complete this yaml code:\n\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n - role: control-plane\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:", "completion": " kubeletExtraArgs:\n seccomp-default: \"true\" \n - role: worker\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_2005"} {"prompt": "Explain this yaml code:\n\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n - role: control-plane\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: 
JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\" \n - role: worker\n image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c\n kubeadmConfigPatches:\n - |\n kind: JoinConfiguration\n nodeRegistration:\n kubeletExtraArgs:\n seccomp-default: \"true\"", "completion": "This yaml code Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Syscalls with seccomp | If the cluster is ready, then running a pod:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2006"} {"prompt": "Explain this bash code:\n\nkubectl run --rm -it --restart=Never --image=alpine alpine -- sh", "completion": "This bash code Restrict a Container's Syscalls with seccomp | Should now have the default seccomp profile attached. This can be verified by using docker exec to run crictl inspect for the container on the kind worker:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | Should now have the default seccomp profile attached. 
This can be verified by using docker exec to run crictl inspect for the container on the kind worker:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2007"} {"prompt": "Explain this bash code:\n\ndocker exec -it kind-worker bash -c \\\n 'crictl inspect $(crictl ps --name=alpine -q) | jq .info.runtimeSpec.linux.seccomp'", "completion": "This bash code Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "language": "bash", "type": "shell-command_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2008"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\"],\n \"syscalls\": [", "completion": " {\n \"names\": [\"...\"]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_completion_2009"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"architectures\": [\"SCMP_ARCH_X86_64\", \"SCMP_ARCH_X86\", \"SCMP_ARCH_X32\"],\n \"syscalls\": [\n {\n \"names\": [\"...\"]\n }\n ]\n}", "completion": "This json code Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "language": "json", "type": "api-response_explanation", "context": "Restrict a Container's Syscalls with seccomp | You can learn more about Linux seccomp:", "source": "https://kubernetes.io/docs/tutorials/security/seccomp/", "id": "k8s_00068_code_explain_2010"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - 
secrets", "completion": " providers:\n - aescbc:\n keys:\n # Do not use this (invalid) example key for encryption\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_completion_2011"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - aescbc:\n keys:\n # Do not use this (invalid) example key for encryption\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "completion": "This yaml code Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2012"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets", "completion": " providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_completion_2013"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: example\n secret: 
2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "completion": "This yaml code Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2014"} {"prompt": "Explain this text code:\n\n# If you are decrypting a different kind of object, change \"secrets\" to match.\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This text code Decrypt Confidential Data that is Already Encrypted at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "language": "text", "type": "code-example_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2015"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets", "completion": " providers:\n - aescbc:\n keys:\n # Do not use this (invalid) example key for encryption\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_completion_2016"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: 
apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - aescbc:\n keys:\n # Do not use this (invalid) example key for encryption\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "completion": "This yaml code Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2017"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets", "completion": " providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_completion_2018"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "completion": "This yaml code Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2019"} {"prompt": "Explain this shell 
code:\n\n# If you are decrypting a different kind of object, change \"secrets\" to match.\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This shell code Decrypt Confidential Data that is Already Encrypted at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "language": "shell", "type": "code-example_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2020"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets", "completion": " providers:\n - aescbc:\n keys:\n # Do not use this (invalid) example key for encryption\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_completion_2021"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - aescbc:\n keys:\n # Do not use this (invalid) example key for encryption\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "completion": "This yaml code Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | then change it to:", 
"source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2022"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets", "completion": " providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_completion_2023"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: example\n secret: 2KfZgdiq2K0g2YrYpyDYs9mF2LPZhQ==", "completion": "This yaml code Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Decrypt Confidential Data that is Already Encrypted at Rest | and restart the kube-apiserver Pod on this node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2024"} {"prompt": "Explain this text code:\n\n# If you are decrypting a different kind of object, change \"secrets\" to match.\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This text code Decrypt Confidential Data that is Already Encrypted at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "language": "text", "type": "code-example_explanation", "context": "Decrypt Confidential Data that is Already 
Encrypted at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/decrypt-data/", "id": "k8s_00069_code_explain_2025"} {"prompt": "Explain this text code:\n\nkubelet --allowed-unsafe-sysctls \\\n 'kernel.msg*,net.core.somaxconn' ...", "completion": "This text code Using sysctls in a Kubernetes Cluster | For Minikube, this can be done via the extra-config flag:", "language": "text", "type": "code-example_explanation", "context": "Using sysctls in a Kubernetes Cluster | For Minikube, this can be done via the extra-config flag:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2026"} {"prompt": "Explain this text code:\n\nminikube start --extra-config=\"kubelet.allowed-unsafe-sysctls=kernel.msg*,net.core.somaxconn\"...", "completion": "This text code Using sysctls in a Kubernetes Cluster | Only namespaced sysctls can be enabled this way.", "language": "text", "type": "code-example_explanation", "context": "Using sysctls in a Kubernetes Cluster | Only namespaced sysctls can be enabled this way.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2027"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sysctl-example\nspec:\n securityContext:\n sysctls:", "completion": " - name: kernel.shm_rmid_forced\n value: \"0\"\n - name: net.core.somaxconn\n value: \"1024\"\n - name: kernel.msgmax\n value: \"65536\"\n ...", "language": "yaml", "type": "pod-definition", "context": "Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. 
It is suggested to use the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_completion_2028"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sysctl-example\nspec:\n securityContext:\n sysctls:\n - name: kernel.shm_rmid_forced\n value: \"0\"\n - name: net.core.somaxconn\n value: \"1024\"\n - name: kernel.msgmax\n value: \"65536\"\n ...", "completion": "This yaml code Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. It is suggested to use the Kubernetes", "language": "yaml", "type": "pod-definition_explanation", "context": "Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. It is suggested to use the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2029"} {"prompt": "Explain this shell code:\n\nkubelet --allowed-unsafe-sysctls \\\n 'kernel.msg*,net.core.somaxconn' ...", "completion": "This shell code Using sysctls in a Kubernetes Cluster | For Minikube, this can be done via the extra-config flag:", "language": "shell", "type": "code-example_explanation", "context": "Using sysctls in a Kubernetes Cluster | For Minikube, this can be done via the extra-config flag:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2030"} {"prompt": "Explain this shell code:\n\nminikube start --extra-config=\"kubelet.allowed-unsafe-sysctls=kernel.msg*,net.core.somaxconn\"...", "completion": "This shell code Using sysctls in a Kubernetes Cluster | Only namespaced sysctls can be enabled this way.", "language": "shell", "type": 
"code-example_explanation", "context": "Using sysctls in a Kubernetes Cluster | Only namespaced sysctls can be enabled this way.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2031"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sysctl-example\nspec:\n securityContext:\n sysctls:", "completion": " - name: kernel.shm_rmid_forced\n value: \"0\"\n - name: net.core.somaxconn\n value: \"1024\"\n - name: kernel.msgmax\n value: \"65536\"\n ...", "language": "yaml", "type": "pod-definition", "context": "Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. It is suggested to use the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_completion_2032"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sysctl-example\nspec:\n securityContext:\n sysctls:\n - name: kernel.shm_rmid_forced\n value: \"0\"\n - name: net.core.somaxconn\n value: \"1024\"\n - name: kernel.msgmax\n value: \"65536\"\n ...", "completion": "This yaml code Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. It is suggested to use the Kubernetes", "language": "yaml", "type": "pod-definition_explanation", "context": "Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. 
It is suggested to use the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2033"} {"prompt": "Explain this text code:\n\nkubelet --allowed-unsafe-sysctls \\\n 'kernel.msg*,net.core.somaxconn' ...", "completion": "This text code Using sysctls in a Kubernetes Cluster | For Minikube, this can be done via the extra-config flag:", "language": "text", "type": "code-example_explanation", "context": "Using sysctls in a Kubernetes Cluster | For Minikube, this can be done via the extra-config flag:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2034"} {"prompt": "Explain this text code:\n\nminikube start --extra-config=\"kubelet.allowed-unsafe-sysctls=kernel.msg*,net.core.somaxconn\"...", "completion": "This text code Using sysctls in a Kubernetes Cluster | Only namespaced sysctls can be enabled this way.", "language": "text", "type": "code-example_explanation", "context": "Using sysctls in a Kubernetes Cluster | Only namespaced sysctls can be enabled this way.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2035"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sysctl-example\nspec:\n securityContext:\n sysctls:", "completion": " - name: kernel.shm_rmid_forced\n value: \"0\"\n - name: net.core.somaxconn\n value: \"1024\"\n - name: kernel.msgmax\n value: \"65536\"\n ...", "language": "yaml", "type": "pod-definition", "context": "Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. 
It is suggested to use the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_completion_2036"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sysctl-example\nspec:\n securityContext:\n sysctls:\n - name: kernel.shm_rmid_forced\n value: \"0\"\n - name: net.core.somaxconn\n value: \"1024\"\n - name: kernel.msgmax\n value: \"65536\"\n ...", "completion": "This yaml code Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. It is suggested to use the Kubernetes", "language": "yaml", "type": "pod-definition_explanation", "context": "Using sysctls in a Kubernetes Cluster | It is good practice to consider nodes with special sysctl settings as tainted within a cluster, and only schedule pods onto them which need those sysctl settings. It is suggested to use the Kubernetes", "source": "https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/", "id": "k8s_00072_code_explain_2037"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo\nspec:", "completion": " containers:\n - name: termination-demo-container\n image: debian\n command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && echo Sleep expired > /dev/termination-log\"]", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2038"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo\nspec:\n containers:\n - name: termination-demo-container\n image: debian\n command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && echo Sleep expired > 
/dev/termination-log\"]", "completion": "This yaml code Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2039"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/termination.yaml", "completion": "This bash code Determine the Reason for Pod Failure | In the YAML file, in the command and args fields, you can see that the container sleeps for 10 seconds and then writes \"Sleep expired\" to the /dev/termination-log file. After the container writes the", "language": "bash", "type": "kubectl-command_explanation", "context": "Determine the Reason for Pod Failure | In the YAML file, in the command and args fields, you can see that the container sleeps for 10 seconds and then writes \"Sleep expired\" to the /dev/termination-log file. 
After the container writes the", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2040"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\n...\n lastState:\n terminated:", "completion": " containerID: ...\n exitCode: 0\n finishedAt: ...\n message: |\n Sleep expired \n ...", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2041"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\n...\n lastState:\n terminated:\n containerID: ...\n exitCode: 0\n finishedAt: ...\n message: |\n Sleep expired \n ...", "completion": "This yaml code Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2042"} {"prompt": "Explain this bash code:\n\nkubectl get pod termination-demo -o go-template=\"{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}\"", "completion": "This bash code Determine the Reason for Pod Failure | If you are running a multi-container Pod, you can use a Go template to include the container's name. 
By doing so, you can discover which of the containers is failing:", "language": "bash", "type": "kubectl-command_explanation", "context": "Determine the Reason for Pod Failure | If you are running a multi-container Pod, you can use a Go template to include the container's name. By doing so, you can discover which of the containers is failing:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2043"} {"prompt": "Explain this bash code:\n\nkubectl get pod multi-container-pod -o go-template='{{range .status.containerStatuses}}{{printf \"%s:\\n%s\\n\\n\" .name .lastState.terminated.message}}{{end}}'", "completion": "This bash code Determine the Reason for Pod Failure | Kubernetes retrieves termination messages from the termination message file specified in the terminationMessagePath field of a Container, which has a default value of /dev/termination-log. By customiz", "language": "bash", "type": "kubectl-command_explanation", "context": "Determine the Reason for Pod Failure | Kubernetes retrieves termination messages from the termination message file specified in the terminationMessagePath field of a Container, which has a default value of /dev/termination-log. By customiz", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2044"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: msg-path-demo", "completion": "spec:\n containers:\n - name: msg-path-demo-container\n image: debian\n terminationMessagePath: \"/tmp/my-log\"", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. 
This field defaults to \"File\" which means the termination messages are retrieved only from the term", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2045"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: msg-path-demo\nspec:\n containers:\n - name: msg-path-demo-container\n image: debian\n terminationMessagePath: \"/tmp/my-log\"", "completion": "This yaml code Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. This field defaults to \"File\" which means the termination messages are retrieved only from the term", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. This field defaults to \"File\" which means the termination messages are retrieved only from the term", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2046"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo\nspec:", "completion": " containers:\n - name: termination-demo-container\n image: debian\n command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && echo Sleep expired > /dev/termination-log\"]", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2047"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo\nspec:\n containers:\n - name: termination-demo-container\n image: debian\n command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && 
echo Sleep expired > /dev/termination-log\"]", "completion": "This yaml code Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2048"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/debug/termination.yaml", "completion": "This shell code Determine the Reason for Pod Failure | In the YAML file, in the command and args fields, you can see that the container sleeps for 10 seconds and then writes \"Sleep expired\" to the /dev/termination-log file. After the container writes the", "language": "shell", "type": "code-example_explanation", "context": "Determine the Reason for Pod Failure | In the YAML file, in the command and args fields, you can see that the container sleeps for 10 seconds and then writes \"Sleep expired\" to the /dev/termination-log file. 
After the container writes the", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2049"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\n...\n lastState:\n terminated:", "completion": " containerID: ...\n exitCode: 0\n finishedAt: ...\n message: |\n Sleep expired \n ...", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2050"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\n...\n lastState:\n terminated:\n containerID: ...\n exitCode: 0\n finishedAt: ...\n message: |\n Sleep expired \n ...", "completion": "This yaml code Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2051"} {"prompt": "Explain this shell code:\n\nkubectl get pod termination-demo -o go-template=\"{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}\"", "completion": "This shell code Determine the Reason for Pod Failure | If you are running a multi-container Pod, you can use a Go template to include the container's name. 
By doing so, you can discover which of the containers is failing:", "language": "shell", "type": "code-example_explanation", "context": "Determine the Reason for Pod Failure | If you are running a multi-container Pod, you can use a Go template to include the container's name. By doing so, you can discover which of the containers is failing:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2052"} {"prompt": "Explain this shell code:\n\nkubectl get pod multi-container-pod -o go-template='{{range .status.containerStatuses}}{{printf \"%s:\\n%s\\n\\n\" .name .lastState.terminated.message}}{{end}}'", "completion": "This shell code Determine the Reason for Pod Failure | Kubernetes retrieves termination messages from the termination message file specified in the terminationMessagePath field of a Container, which has a default value of /dev/termination-log. By customiz", "language": "shell", "type": "code-example_explanation", "context": "Determine the Reason for Pod Failure | Kubernetes retrieves termination messages from the termination message file specified in the terminationMessagePath field of a Container, which has a default value of /dev/termination-log. By customiz", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2053"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: msg-path-demo", "completion": "spec:\n containers:\n - name: msg-path-demo-container\n image: debian\n terminationMessagePath: \"/tmp/my-log\"", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. 
This field defaults to \"File\" which means the termination messages are retrieved only from the term", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2054"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: msg-path-demo\nspec:\n containers:\n - name: msg-path-demo-container\n image: debian\n terminationMessagePath: \"/tmp/my-log\"", "completion": "This yaml code Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. This field defaults to \"File\" which means the termination messages are retrieved only from the term", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. This field defaults to \"File\" which means the termination messages are retrieved only from the term", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2055"} {"prompt": "Complete this yaml code:\n\ndebug/termination.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo", "completion": "spec:\n containers:\n - name: termination-demo-container\n image: debian\n command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && echo Sleep expired > /dev/termination-log\"]", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2056"} {"prompt": "Explain this yaml code:\n\ndebug/termination.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo\nspec:\n containers:\n - name: termination-demo-container\n image: debian\n 
command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && echo Sleep expired > /dev/termination-log\"]", "completion": "This yaml code Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2057"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo\nspec:", "completion": " containers:\n - name: termination-demo-container\n image: debian\n command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && echo Sleep expired > /dev/termination-log\"]", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2058"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: termination-demo\nspec:\n containers:\n - name: termination-demo-container\n image: debian\n command: [\"/bin/sh\"]\n args: [\"-c\", \"sleep 10 && echo Sleep expired > /dev/termination-log\"]", "completion": "This yaml code Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2059"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/termination.yaml", "completion": "This bash code Determine the Reason for Pod Failure | In the YAML file, 
in the command and args fields, you can see that the container sleeps for 10 seconds and then writes \"Sleep expired\" to the /dev/termination-log file. After the container writes the", "language": "bash", "type": "kubectl-command_explanation", "context": "Determine the Reason for Pod Failure | In the YAML file, in the command and args fields, you can see that the container sleeps for 10 seconds and then writes \"Sleep expired\" to the /dev/termination-log file. After the container writes the", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2060"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\n...\n lastState:\n terminated:", "completion": " containerID: ...\n exitCode: 0\n finishedAt: ...\n message: |\n Sleep expired \n ...", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2061"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\n...\n lastState:\n terminated:\n containerID: ...\n exitCode: 0\n finishedAt: ...\n message: |\n Sleep expired \n ...", "completion": "This yaml code Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Use a Go template to filter the output so that it includes only the termination message:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2062"} {"prompt": "Explain this bash code:\n\nkubectl get pod termination-demo -o go-template=\"{{range 
.status.containerStatuses}}{{.lastState.terminated.message}}{{end}}\"", "completion": "This bash code Determine the Reason for Pod Failure | If you are running a multi-container Pod, you can use a Go template to include the container's name. By doing so, you can discover which of the containers is failing:", "language": "bash", "type": "kubectl-command_explanation", "context": "Determine the Reason for Pod Failure | If you are running a multi-container Pod, you can use a Go template to include the container's name. By doing so, you can discover which of the containers is failing:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2063"} {"prompt": "Explain this bash code:\n\nkubectl get pod multi-container-pod -o go-template='{{range .status.containerStatuses}}{{printf \"%s:\\n%s\\n\\n\" .name .lastState.terminated.message}}{{end}}'", "completion": "This bash code Determine the Reason for Pod Failure | Kubernetes retrieves termination messages from the termination message file specified in the terminationMessagePath field of a Container, which has a default value of /dev/termination-log. By customiz", "language": "bash", "type": "kubectl-command_explanation", "context": "Determine the Reason for Pod Failure | Kubernetes retrieves termination messages from the termination message file specified in the terminationMessagePath field of a Container, which has a default value of /dev/termination-log. 
By customiz", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2064"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: msg-path-demo", "completion": "spec:\n containers:\n - name: msg-path-demo-container\n image: debian\n terminationMessagePath: \"/tmp/my-log\"", "language": "yaml", "type": "pod-definition", "context": "Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. This field defaults to \"File\" which means the termination messages are retrieved only from the term", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_completion_2065"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: msg-path-demo\nspec:\n containers:\n - name: msg-path-demo-container\n image: debian\n terminationMessagePath: \"/tmp/my-log\"", "completion": "This yaml code Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. This field defaults to \"File\" which means the termination messages are retrieved only from the term", "language": "yaml", "type": "pod-definition_explanation", "context": "Determine the Reason for Pod Failure | Moreover, users can set the terminationMessagePolicy field of a Container for further customization. 
This field defaults to \"File\" which means the termination messages are retrieved only from the term", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/", "id": "k8s_00073_code_explain_2066"} {"prompt": "Explain this text code:\n\nsystemctl stop kubelet\nsystemctl disable docker.service --now", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Follow the guide for detailed steps to install containerd.", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Follow the guide for detailed steps to install containerd.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2067"} {"prompt": "Explain this text code:\n\nsudo mkdir -p /etc/containerd\ncontainerd config default | sudo tee /etc/containerd/config.toml", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Restart containerd:", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Restart containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2068"} {"prompt": "Explain this text code:\n\ncurl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz\ntar.exe xvf .\\containerd-windows-amd64.tar.gz", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Extract and configure:", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Extract and 
configure:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2069"} {"prompt": "Complete this text code:\n\nCopy-Item -Path \".\\bin\\\" -Destination \"$Env:ProgramFiles\\containerd\" -Recurse -Force\ncd $Env:ProgramFiles\\containerd\\\n.\\containerd.exe config default | Out-File config.toml -Encoding ascii\n\n# Review the configuration. Depending on setup you may want to adjust:", "completion": "# - the sandbox_image (Kubernetes pause image)\n# - cni bin_dir and conf_dir locations\nGet-Content config.toml\n\n# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans\nAdd-MpPreference -ExclusionProcess \"$Env:ProgramFiles\\containerd\\containerd.exe\"", "language": "text", "type": "code-example", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_completion_2070"} {"prompt": "Explain this text code:\n\nCopy-Item -Path \".\\bin\\\" -Destination \"$Env:ProgramFiles\\containerd\" -Recurse -Force\ncd $Env:ProgramFiles\\containerd\\\n.\\containerd.exe config default | Out-File config.toml -Encoding ascii\n\n# Review the configuration. 
Depending on setup you may want to adjust:\n# - the sandbox_image (Kubernetes pause image)\n# - cni bin_dir and conf_dir locations\nGet-Content config.toml\n\n# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans\nAdd-MpPreference -ExclusionProcess \"$Env:ProgramFiles\\containerd\\containerd.exe\"", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2071"} {"prompt": "Explain this text code:\n\n.\\containerd.exe --register-service\nStart-Service containerd", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Edit the file /var/lib/kubelet/kubeadm-flags.env and add the containerd runtime to the flags; --container-runtime-endpoint=unix:///run/containerd/containerd.sock.", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Edit the file /var/lib/kubelet/kubeadm-flags.env and add the containerd runtime to the flags; --container-runtime-endpoint=unix:///run/containerd/containerd.sock.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2072"} {"prompt": "Explain this shell code:\n\nsystemctl stop kubelet\nsystemctl disable docker.service --now", "completion": "This shell code Changing the Container Runtime on a Node from Docker Engine to containerd | Follow the guide for detailed steps to install containerd.", "language": "shell", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node 
from Docker Engine to containerd | Follow the guide for detailed steps to install containerd.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2073"} {"prompt": "Explain this shell code:\n\nsudo mkdir -p /etc/containerd\ncontainerd config default | sudo tee /etc/containerd/config.toml", "completion": "This shell code Changing the Container Runtime on a Node from Docker Engine to containerd | Restart containerd:", "language": "shell", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Restart containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2074"} {"prompt": "Explain this powershell code:\n\ncurl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz\ntar.exe xvf .\\containerd-windows-amd64.tar.gz", "completion": "This powershell code Changing the Container Runtime on a Node from Docker Engine to containerd | Extract and configure:", "language": "powershell", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Extract and configure:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2075"} {"prompt": "Complete this powershell code:\n\nCopy-Item -Path \".\\bin\\\" -Destination \"$Env:ProgramFiles\\containerd\" -Recurse -Force\ncd $Env:ProgramFiles\\containerd\\\n.\\containerd.exe config default | Out-File config.toml -Encoding ascii\n\n# Review the configuration. 
Depending on setup you may want to adjust:", "completion": "# - the sandbox_image (Kubernetes pause image)\n# - cni bin_dir and conf_dir locations\nGet-Content config.toml\n\n# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans\nAdd-MpPreference -ExclusionProcess \"$Env:ProgramFiles\\containerd\\containerd.exe\"", "language": "powershell", "type": "code-example", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_completion_2076"} {"prompt": "Explain this powershell code:\n\nCopy-Item -Path \".\\bin\\\" -Destination \"$Env:ProgramFiles\\containerd\" -Recurse -Force\ncd $Env:ProgramFiles\\containerd\\\n.\\containerd.exe config default | Out-File config.toml -Encoding ascii\n\n# Review the configuration. Depending on setup you may want to adjust:\n# - the sandbox_image (Kubernetes pause image)\n# - cni bin_dir and conf_dir locations\nGet-Content config.toml\n\n# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans\nAdd-MpPreference -ExclusionProcess \"$Env:ProgramFiles\\containerd\\containerd.exe\"", "completion": "This powershell code Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "language": "powershell", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2077"} {"prompt": "Explain this powershell code:\n\n.\\containerd.exe --register-service\nStart-Service containerd", "completion": "This powershell code Changing the Container Runtime on a Node from Docker Engine to containerd | Edit the file 
/var/lib/kubelet/kubeadm-flags.env and add the containerd runtime to the flags; --container-runtime-endpoint=unix:///run/containerd/containerd.sock.", "language": "powershell", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Edit the file /var/lib/kubelet/kubeadm-flags.env and add the containerd runtime to the flags; --container-runtime-endpoint=unix:///run/containerd/containerd.sock.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2078"} {"prompt": "Explain this text code:\n\n--container-runtime-endpoint=unix:///run/containerd/containerd.sock", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Users using kubeadm should be aware that the kubeadm tool stores the host's CRI socket in the", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Users using kubeadm should be aware that the kubeadm tool stores the host's CRI socket in the", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2079"} {"prompt": "Explain this text code:\n\nsystemctl stop kubelet\nsystemctl disable docker.service --now", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Follow the guide for detailed steps to install containerd.", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Follow the guide for detailed steps to install containerd.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2080"} {"prompt": "Explain this text 
code:\n\nsudo mkdir -p /etc/containerd\ncontainerd config default | sudo tee /etc/containerd/config.toml", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Restart containerd:", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Restart containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2081"} {"prompt": "Explain this text code:\n\ncurl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz\ntar.exe xvf .\\containerd-windows-amd64.tar.gz", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Extract and configure:", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Extract and configure:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2082"} {"prompt": "Complete this text code:\n\nCopy-Item -Path \".\\bin\\\" -Destination \"$Env:ProgramFiles\\containerd\" -Recurse -Force\ncd $Env:ProgramFiles\\containerd\\\n.\\containerd.exe config default | Out-File config.toml -Encoding ascii\n\n# Review the configuration. 
Depending on setup you may want to adjust:", "completion": "# - the sandbox_image (Kubernetes pause image)\n# - cni bin_dir and conf_dir locations\nGet-Content config.toml\n\n# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans\nAdd-MpPreference -ExclusionProcess \"$Env:ProgramFiles\\containerd\\containerd.exe\"", "language": "text", "type": "code-example", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_completion_2083"} {"prompt": "Explain this text code:\n\nCopy-Item -Path \".\\bin\\\" -Destination \"$Env:ProgramFiles\\containerd\" -Recurse -Force\ncd $Env:ProgramFiles\\containerd\\\n.\\containerd.exe config default | Out-File config.toml -Encoding ascii\n\n# Review the configuration. Depending on setup you may want to adjust:\n# - the sandbox_image (Kubernetes pause image)\n# - cni bin_dir and conf_dir locations\nGet-Content config.toml\n\n# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans\nAdd-MpPreference -ExclusionProcess \"$Env:ProgramFiles\\containerd\\containerd.exe\"", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Start containerd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2084"} {"prompt": "Explain this text code:\n\n.\\containerd.exe --register-service\nStart-Service containerd", "completion": "This text code Changing the Container Runtime on a Node from Docker Engine to containerd | Edit the file /var/lib/kubelet/kubeadm-flags.env and add the containerd runtime to 
the flags; --container-runtime-endpoint=unix:///run/containerd/containerd.sock.", "language": "text", "type": "code-example_explanation", "context": "Changing the Container Runtime on a Node from Docker Engine to containerd | Edit the file /var/lib/kubelet/kubeadm-flags.env and add the containerd runtime to the flags; --container-runtime-endpoint=unix:///run/containerd/containerd.sock.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/", "id": "k8s_00074_code_explain_2085"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:", "completion": " name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n cachesize: 100\n timeout: 3s\n - kms:\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n cachesize: 100\n timeout: 3s", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2086"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n cachesize: 100\n timeout: 3s\n - kms:\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n cachesize: 100\n timeout: 3s", "completion": "This yaml code Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2087"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:", "completion": " apiVersion: v2\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n timeout: 3s\n - kms:\n apiVersion: v2\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n timeout: 3s", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2088"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:\n apiVersion: v2\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n timeout: 3s\n - kms:\n apiVersion: v2\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n timeout: 3s", "completion": "This yaml code Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2089"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic secret1 -n default --from-literal=mykey=mydata", "completion": "This bash code Using a KMS provider for data encryption | Using the etcdctl command line, read that secret out of etcd:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a KMS provider for data encryption | Using the etcdctl command line, read that secret out of etcd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2090"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/secret1 [...] | hexdump -C", "completion": "This text code Using a KMS provider for data encryption | where [...] contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Using a KMS provider for data encryption | where [...] 
contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2091"} {"prompt": "Explain this bash code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This bash code Using a KMS provider for data encryption | To switch from a local encryption provider to the kms provider and re-encrypt all of the secrets:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a KMS provider for data encryption | To switch from a local encryption provider to the kms provider and re-encrypt all of the secrets:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2092"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - kms:", "completion": " apiVersion: v2\n name : myKmsPlugin\n endpoint: unix:///tmp/socketfile.sock\n - aescbc:\n keys:\n - name: key1\n secret: ", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2093"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - kms:\n apiVersion: v2\n name : myKmsPlugin\n endpoint: unix:///tmp/socketfile.sock\n - aescbc:\n keys:\n - name: key1\n secret: ", "completion": "This yaml code Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2094"} {"prompt": "Explain this bash code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This bash code Using a KMS provider for data encryption | If you no longer want to use encryption for data persisted in the Kubernetes API, read decrypt data that are already stored at rest.", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a KMS provider for data encryption | If you no longer want to use encryption for data persisted in the Kubernetes API, read decrypt data that are already stored at rest.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2095"} {"prompt": "Explain this json code:\n\n{\"kms.example.io/remote-kms-auditid\":\"\"}", "completion": "This json code Using a KMS provider for data encryption | The API server does not perform the EncryptRequest procedure call at a high rate. Plugin implementations should still aim to keep each request's latency at under 100 milliseconds.", "language": "json", "type": "api-response_explanation", "context": "Using a KMS provider for data encryption | The API server does not perform the EncryptRequest procedure call at a high rate. 
Plugin implementations should still aim to keep each request's latency at under 100 milliseconds.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2096"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:", "completion": " name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n cachesize: 100\n timeout: 3s\n - kms:\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n cachesize: 100\n timeout: 3s", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2097"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n cachesize: 100\n timeout: 3s\n - kms:\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n cachesize: 100\n timeout: 3s", "completion": "This yaml code Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2098"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:", "completion": " apiVersion: v2\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n timeout: 3s\n - kms:\n apiVersion: v2\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n timeout: 3s", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2099"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:\n apiVersion: v2\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n timeout: 3s\n - kms:\n apiVersion: v2\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n timeout: 3s", "completion": "This yaml code Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2100"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic secret1 -n default --from-literal=mykey=mydata", "completion": "This shell code Using a KMS provider for data encryption | Using the etcdctl command line, read that secret out of etcd:", "language": "shell", "type": "code-example_explanation", "context": "Using a KMS provider for data encryption | Using the etcdctl command line, read that secret out of etcd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2101"} {"prompt": "Explain this shell code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/secret1 [...] 
| hexdump -C", "completion": "This shell code Using a KMS provider for data encryption | where [...] contains the additional arguments for connecting to the etcd server.", "language": "shell", "type": "code-example_explanation", "context": "Using a KMS provider for data encryption | where [...] contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2102"} {"prompt": "Explain this shell code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This shell code Using a KMS provider for data encryption | To switch from a local encryption provider to the kms provider and re-encrypt all of the secrets:", "language": "shell", "type": "code-example_explanation", "context": "Using a KMS provider for data encryption | To switch from a local encryption provider to the kms provider and re-encrypt all of the secrets:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2103"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - kms:", "completion": " apiVersion: v2\n name : myKmsPlugin\n endpoint: unix:///tmp/socketfile.sock\n - aescbc:\n keys:\n - name: key1\n secret: ", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2104"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - kms:\n apiVersion: v2\n name : myKmsPlugin\n endpoint: unix:///tmp/socketfile.sock\n - aescbc:\n keys:\n - name: key1\n secret: ", "completion": "This yaml code 
Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2105"} {"prompt": "Explain this shell code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This shell code Using a KMS provider for data encryption | If you no longer want to use encryption for data persisted in the Kubernetes API, read decrypt data that are already stored at rest.", "language": "shell", "type": "code-example_explanation", "context": "Using a KMS provider for data encryption | If you no longer want to use encryption for data persisted in the Kubernetes API, read decrypt data that are already stored at rest.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2106"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:", "completion": " name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n cachesize: 100\n timeout: 3s\n - kms:\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n cachesize: 100\n timeout: 3s", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2107"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n cachesize: 100\n timeout: 3s\n - kms:\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n cachesize: 100\n timeout: 3s", "completion": "This yaml code Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2108"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:", "completion": " apiVersion: v2\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n timeout: 3s\n - kms:\n apiVersion: v2\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n timeout: 3s", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2109"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - kms:\n apiVersion: v2\n name: myKmsPluginFoo\n endpoint: unix:///tmp/socketfile-foo.sock\n timeout: 3s\n - kms:\n apiVersion: v2\n name: myKmsPluginBar\n endpoint: unix:///tmp/socketfile-bar.sock\n timeout: 3s", "completion": "This yaml code Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. 
Individual health checks are only available when KMS v1 providers are in us", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Setting --encryption-provider-config-automatic-reload to true collapses all health checks to a single health check endpoint. Individual health checks are only available when KMS v1 providers are in us", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2110"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic secret1 -n default --from-literal=mykey=mydata", "completion": "This bash code Using a KMS provider for data encryption | Using the etcdctl command line, read that secret out of etcd:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a KMS provider for data encryption | Using the etcdctl command line, read that secret out of etcd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2111"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/secret1 [...] | hexdump -C", "completion": "This text code Using a KMS provider for data encryption | where [...] contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Using a KMS provider for data encryption | where [...] 
contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2112"} {"prompt": "Explain this bash code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This bash code Using a KMS provider for data encryption | To switch from a local encryption provider to the kms provider and re-encrypt all of the secrets:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a KMS provider for data encryption | To switch from a local encryption provider to the kms provider and re-encrypt all of the secrets:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2113"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - kms:", "completion": " apiVersion: v2\n name : myKmsPlugin\n endpoint: unix:///tmp/socketfile.sock\n - aescbc:\n keys:\n - name: key1\n secret: ", "language": "yaml", "type": "kubernetes-manifest", "context": "Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_completion_2114"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - kms:\n apiVersion: v2\n name : myKmsPlugin\n endpoint: unix:///tmp/socketfile.sock\n - aescbc:\n keys:\n - name: key1\n secret: ", "completion": "This yaml code Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using a KMS provider for data encryption | Restart all kube-apiserver processes.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2115"} {"prompt": "Explain this bash code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This bash code Using a KMS provider for data encryption | If you no longer want to use encryption for data persisted in the Kubernetes API, read decrypt data that are already stored at rest.", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a KMS provider for data encryption | If you no longer want to use encryption for data persisted in the Kubernetes API, read decrypt data that are already stored at rest.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/", "id": "k8s_00077_code_explain_2116"} {"prompt": "Complete this text code:\n\n# Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it\n kubectl drain foo --force", "completion": " \n # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes\n kubectl drain foo --grace-period=900", "language": "text", "type": "code-example", "context": "kubectl drain | Return large lists in chunks rather than all at once. Pass 0 to disable. 
This flag is beta and may change in the future.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/", "id": "k8s_00078_code_completion_2117"} {"prompt": "Explain this text code:\n\n# Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it\n kubectl drain foo --force\n \n # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes\n kubectl drain foo --grace-period=900", "completion": "This text code kubectl drain | Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.", "language": "text", "type": "code-example_explanation", "context": "kubectl drain | Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/", "id": "k8s_00078_code_explain_2118"} {"prompt": "Complete this text code:\n\n# Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it\n kubectl drain foo --force", "completion": " \n # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes\n kubectl drain foo --grace-period=900", "language": "text", "type": "code-example", "context": "kubectl drain | Return large lists in chunks rather than all at once. Pass 0 to disable. 
This flag is beta and may change in the future.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/", "id": "k8s_00078_code_completion_2119"} {"prompt": "Explain this text code:\n\n# Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it\n kubectl drain foo --force\n \n # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes\n kubectl drain foo --grace-period=900", "completion": "This text code kubectl drain | Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.", "language": "text", "type": "code-example_explanation", "context": "kubectl drain | Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/", "id": "k8s_00078_code_explain_2120"} {"prompt": "Complete this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: mysql-pass", "completion": " literals:\n - password=YOUR_PASSWORD\nEOF", "language": "text", "type": "code-example", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. 
The MYSQL_ROOT_PASSWORD environment variable sets the database password", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2121"} {"prompt": "Explain this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: mysql-pass\n literals:\n - password=YOUR_PASSWORD\nEOF", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2122"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress", "completion": " tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - 
name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2123"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: 
/var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2124"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:", "completion": " selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: 
/var/www/html\n volumes:\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2125"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n volumes:\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2126"} {"prompt": "Explain this text code:\n\ncurl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Download the WordPress configuration file.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the WordPress configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2127"} {"prompt": "Explain this text code:\n\ncurl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Add them to kustomization.yaml file.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Add them to kustomization.yaml file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2128"} {"prompt": "Complete this text code:\n\ncat <<EOF >>./kustomization.yaml\nresources:", "completion": " - mysql-deployment.yaml\n - wordpress-deployment.yaml\nEOF", "language": "text", "type": "code-example", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. 
You can apply the directory by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2129"} {"prompt": "Explain this text code:\n\ncat <<EOF >>./kustomization.yaml\nresources:\n - mysql-deployment.yaml\n - wordpress-deployment.yaml\nEOF", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. You can apply the directory by", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. You can apply the directory by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2130"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nmysql-pass-c57bb4t7mf Opaque 1 9s", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that a PersistentVolume got dynamically provisioned.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that a PersistentVolume got dynamically provisioned.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2131"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\nmysql-pv-claim Bound pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s\nwp-pv-claim Bound pvc-8cd0df54-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Pod is running by running the following command:", "language": 
"text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Pod is running by running the following command:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2132"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nwordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Service is running by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Service is running by running the following command:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2133"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nwordpress LoadBalancer 10.0.0.89 80:32406/TCP 4m", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Run the following command to get the IP Address for the WordPress Service:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Run the following command to get the IP Address for the WordPress Service:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2134"} {"prompt": "Complete this shell code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: mysql-pass", "completion": " literals:\n - password=YOUR_PASSWORD\nEOF", "language": "shell", "type": "code-example", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. 
The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2135"} {"prompt": "Explain this shell code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: mysql-pass\n literals:\n - password=YOUR_PASSWORD\nEOF", "completion": "This shell code Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2136"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress", "completion": " tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: mysql\n env:\n - name: 
MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2137"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - 
containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2138"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:", "completion": " selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n 
name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n volumes:\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2139"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n volumes:\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes 
| Download the MySQL deployment configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2140"} {"prompt": "Explain this shell code:\n\ncurl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml", "completion": "This shell code Example: Deploying WordPress and MySQL with Persistent Volumes | Download the WordPress configuration file.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the WordPress configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2141"} {"prompt": "Explain this shell code:\n\ncurl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml", "completion": "This shell code Example: Deploying WordPress and MySQL with Persistent Volumes | Add them to kustomization.yaml file.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Add them to kustomization.yaml file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2142"} {"prompt": "Complete this shell code:\n\ncat <<EOF >>./kustomization.yaml\nresources:", "completion": " - mysql-deployment.yaml\n - wordpress-deployment.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. 
You can apply the directory by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2143"} {"prompt": "Explain this shell code:\n\ncat <<EOF >>./kustomization.yaml\nresources:\n - mysql-deployment.yaml\n - wordpress-deployment.yaml\nEOF", "completion": "This shell code Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. You can apply the directory by", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. You can apply the directory by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2144"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nmysql-pass-c57bb4t7mf Opaque 1 9s", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that a PersistentVolume got dynamically provisioned.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that a PersistentVolume got dynamically provisioned.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2145"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\nmysql-pv-claim Bound pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s\nwp-pv-claim Bound pvc-8cd0df54-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Pod is running by running the following command:", "language": 
"text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Pod is running by running the following command:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2146"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nwordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Service is running by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Verify that the Service is running by running the following command:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2147"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nwordpress LoadBalancer 10.0.0.89 80:32406/TCP 4m", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Run the following command to get the IP Address for the WordPress Service:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Run the following command to get the IP Address for the WordPress Service:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2148"} {"prompt": "Complete this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: mysql-pass", "completion": " literals:\n - password=YOUR_PASSWORD\nEOF", "language": "text", "type": "code-example", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. 
The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2149"} {"prompt": "Explain this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: mysql-pass\n literals:\n - password=YOUR_PASSWORD\nEOF", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2150"} {"prompt": "Complete this yaml code:\n\napplication/wordpress/mysql-deployment.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:", "completion": " app: wordpress\n tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: 
mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2151"} {"prompt": "Explain this yaml code:\n\napplication/wordpress/mysql-deployment.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n 
secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2152"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress", "completion": " tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: 
wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2153"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n ports:\n - port: 3306\n selector:\n app: wordpress\n tier: mysql\n clusterIP: None\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress-mysql\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: mysql\n spec:\n containers:\n - image: mysql:8.0\n name: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: MYSQL_DATABASE\n value: wordpress\n - name: MYSQL_USER\n value: wordpress\n - name: MYSQL_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n 
persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2154"} {"prompt": "Complete this yaml code:\n\napplication/wordpress/wordpress-deployment.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:", "completion": " selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n volumes:\n 
- name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2155"} {"prompt": "Explain this yaml code:\n\napplication/wordpress/wordpress-deployment.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n volumes:\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", 
"source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2156"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:", "completion": " selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n volumes:\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2157"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n ports:\n - port: 80\n selector:\n app: wordpress\n tier: frontend\n type: LoadBalancer\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: 
wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: wordpress\n labels:\n app: wordpress\nspec:\n selector:\n matchLabels:\n app: wordpress\n tier: frontend\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: wordpress\n tier: frontend\n spec:\n containers:\n - image: wordpress:6.2.1-apache\n name: wordpress\n env:\n - name: WORDPRESS_DB_HOST\n value: wordpress-mysql\n - name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-pass\n key: password\n - name: WORDPRESS_DB_USER\n value: wordpress\n ports:\n - containerPort: 80\n name: wordpress\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n volumes:\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim", "completion": "This yaml code Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the MySQL deployment configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2158"} {"prompt": "Explain this text code:\n\ncurl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Download the WordPress configuration file.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Download the WordPress configuration file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2159"} {"prompt": "Explain this text code:\n\ncurl -LO 
https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | Add them to kustomization.yaml file.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | Add them to kustomization.yaml file.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2160"} {"prompt": "Complete this text code:\n\ncat <<EOF >>./kustomization.yaml\nresources:", "completion": " - mysql-deployment.yaml\n - wordpress-deployment.yaml\nEOF", "language": "text", "type": "code-example", "context": "Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. You can apply the directory by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_completion_2161"} {"prompt": "Explain this text code:\n\ncat <<EOF >>./kustomization.yaml\nresources:\n - mysql-deployment.yaml\n - wordpress-deployment.yaml\nEOF", "completion": "This text code Example: Deploying WordPress and MySQL with Persistent Volumes | The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. 
You can apply the directory by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/", "id": "k8s_00079_code_explain_2162"} {"prompt": "Complete this text code:\n\n# Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace\n kubectl create token myapp\n \n # Request a token for a service account in a custom namespace\n kubectl create token myapp --namespace myns\n \n # Request a token with a custom expiration\n kubectl create token myapp --duration 10m", "completion": " \n # Request a token with a custom audience\n kubectl create token myapp --audience https://example.com\n \n # Request a token bound to an instance of a Secret object\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret\n \n # Request a token bound to an instance of a Secret object with a specific UID\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "language": "text", "type": "code-example", "context": "kubectl create token | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/", "id": "k8s_00080_code_completion_2163"} {"prompt": "Explain this text code:\n\n# Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace\n kubectl create token myapp\n \n # Request a token for a service account in a custom namespace\n kubectl create token myapp --namespace myns\n \n # Request a token with a custom expiration\n kubectl create token myapp --duration 10m\n \n # Request a token with a custom audience\n kubectl create token myapp --audience https://example.com\n \n # Request a token bound to an instance of a Secret object\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret\n \n # Request a token bound to an instance of a Secret object with a specific UID\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "completion": "This text code kubectl create token | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create token | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/", "id": "k8s_00080_code_explain_2164"} {"prompt": "Complete this text code:\n\n# Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace\n kubectl create token myapp\n \n # Request a token for a service account in a custom namespace\n kubectl create token myapp --namespace myns\n \n # Request a token with a custom expiration\n kubectl create token myapp --duration 10m", "completion": " \n # Request a token with a custom audience\n kubectl create token myapp --audience https://example.com\n \n # Request a token bound to an instance of a Secret object\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret\n \n # Request a token bound to an instance of a Secret object with a specific UID\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "language": "text", "type": "code-example", "context": "kubectl create token | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/", "id": "k8s_00080_code_completion_2165"} {"prompt": "Explain this text code:\n\n# Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace\n kubectl create token myapp\n \n # Request a token for a service account in a custom namespace\n kubectl create token myapp --namespace myns\n \n # Request a token with a custom expiration\n kubectl create token myapp --duration 10m\n \n # Request a token with a custom audience\n kubectl create token myapp --audience https://example.com\n \n # Request a token bound to an instance of a Secret object\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret\n \n # Request a token bound to an instance of a Secret object with a specific UID\n kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "completion": "This text code kubectl create token | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create token | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/", "id": "k8s_00080_code_explain_2166"} {"prompt": "Explain this bash code:\n\nkubectl patch (-f FILENAME | TYPE NAME) [-p PATCH|--patch-file FILE]", "completion": "This bash code kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_patch/", "id": "k8s_00081_code_explain_2167"} {"prompt": "Complete this text code:\n\n# Partially update a node using a strategic merge patch, specifying the patch as JSON\n kubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'\n \n # Partially update a node using a strategic merge patch, specifying the patch as YAML\n kubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'\n \n # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch\n kubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": " \n # Update a container's image; spec.containers[*].name is required because it's a merge key\n kubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'\n \n # Update a container's image using a JSON patch with positional arrays\n kubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'\n \n # Update a deployment's replicas through the 'scale' subresource using a merge patch\n kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "language": "text", "type": "code-example", "context": "kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_patch/", "id": "k8s_00081_code_completion_2168"} {"prompt": "Explain this text code:\n\n# Partially update a node using a strategic merge patch, specifying the patch as JSON\n kubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'\n \n # Partially update a node using a strategic merge patch, specifying the patch as YAML\n kubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'\n \n # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch\n kubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'\n \n # Update a container's image; spec.containers[*].name is required because it's a merge key\n kubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'\n \n # Update a container's image using a JSON patch with positional arrays\n kubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'\n \n # Update a deployment's replicas through the 'scale' subresource using a merge patch\n kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "completion": "This text code kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_patch/", "id": "k8s_00081_code_explain_2169"} {"prompt": "Explain this bash code:\n\nkubectl patch (-f FILENAME | TYPE NAME) [-p PATCH|--patch-file FILE]", "completion": "This bash code kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_patch/", "id": "k8s_00081_code_explain_2170"} {"prompt": "Complete this text code:\n\n# Partially update a node using a strategic merge patch, specifying the patch as JSON\n kubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'\n \n # Partially update a node using a strategic merge patch, specifying the patch as YAML\n kubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'\n \n # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch\n kubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": " \n # Update a container's image; spec.containers[*].name is required because it's a merge key\n kubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'\n \n # Update a container's image using a JSON patch with positional arrays\n kubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'\n \n # Update a deployment's replicas through the 'scale' subresource using a merge patch\n kubectl patch deployment nginx-deployment --subresource='scale' 
--type='merge' -p '{\"spec\":{\"replicas\":2}}'", "language": "text", "type": "code-example", "context": "kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_patch/", "id": "k8s_00081_code_completion_2171"} {"prompt": "Explain this text code:\n\n# Partially update a node using a strategic merge patch, specifying the patch as JSON\n kubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'\n \n # Partially update a node using a strategic merge patch, specifying the patch as YAML\n kubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'\n \n # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch\n kubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'\n \n # Update a container's image; spec.containers[*].name is required because it's a merge key\n kubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'\n \n # Update a container's image using a JSON patch with positional arrays\n kubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'\n \n # Update a deployment's replicas through the 'scale' subresource using a merge patch\n kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "completion": "This text code kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl patch | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_patch/", "id": "k8s_00081_code_explain_2172"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"", "completion": " spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2173"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: 
true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "completion": "This yaml code Monitor Node Health | Start node problem detector with kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2174"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml", "completion": "This bash code Monitor Node Health | If you are using a custom cluster bootstrap solution and don't need to overwrite the default configuration, you can leverage the Addon pod to further automate the deployment.", "language": "bash", "type": "kubectl-command_explanation", "context": "Monitor Node Health | If you are using a custom cluster bootstrap solution and don't need to overwrite the default configuration, you can leverage the Addon pod to further automate the deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2175"} {"prompt": "Explain this bash code:\n\nkubectl create configmap node-problem-detector-config --from-file=config/", "completion": "This bash code Monitor Node Health | Change the node-problem-detector.yaml to use the ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Monitor Node Health | Change the node-problem-detector.yaml to use the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2176"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n 
k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:", "completion": " - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2177"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: 
\"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "completion": "This yaml code Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2178"} {"prompt": "Explain this text code:\n\n# If you have a node-problem-detector running, delete before recreating\nkubectl delete -f https://k8s.io/examples/debug/node-problem-detector.yaml\nkubectl apply -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml", "completion": "This text code Monitor Node Health | Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. The Addon manager does not support ConfigMap.", "language": "text", "type": "code-example_explanation", "context": "Monitor Node Health | Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. 
The Addon manager does not support ConfigMap.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2179"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"", "completion": " spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2180"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n 
resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "completion": "This yaml code Monitor Node Health | Start node problem detector with kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2181"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml", "completion": "This shell code Monitor Node Health | If you are using a custom cluster bootstrap solution and don't need to overwrite the default configuration, you can leverage the Addon pod to further automate the deployment.", "language": "shell", "type": "code-example_explanation", "context": "Monitor Node Health | If you are using a custom cluster bootstrap solution and don't need to overwrite the default configuration, you can leverage the Addon pod to further automate the deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2182"} {"prompt": "Explain this shell code:\n\nkubectl create configmap node-problem-detector-config --from-file=config/", "completion": "This shell code Monitor Node Health | Change the node-problem-detector.yaml to use the ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Monitor Node Health | Change the node-problem-detector.yaml to use the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2183"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n 
k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:", "completion": " - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2184"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: 
\"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "completion": "This yaml code Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2185"} {"prompt": "Explain this shell code:\n\n# If you have a node-problem-detector running, delete before recreating\nkubectl delete -f https://k8s.io/examples/debug/node-problem-detector.yaml\nkubectl apply -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml", "completion": "This shell code Monitor Node Health | Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. The Addon manager does not support ConfigMap.", "language": "shell", "type": "code-example_explanation", "context": "Monitor Node Health | Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. 
The Addon manager does not support ConfigMap.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2186"} {"prompt": "Complete this yaml code:\n\ndebug/node-problem-detector.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"", "completion": " spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2187"} {"prompt": "Explain this yaml code:\n\ndebug/node-problem-detector.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: 
registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "completion": "This yaml code Monitor Node Health | Start node problem detector with kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2188"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"", "completion": " spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2189"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: 
node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/", "completion": "This yaml code Monitor Node Health | Start node problem detector with kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Monitor Node Health | Start node problem detector with kubectl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2190"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml", "completion": "This bash code Monitor Node Health | If you are using a custom cluster bootstrap solution and don't need to overwrite the default configuration, you can leverage the Addon pod to further automate the deployment.", "language": "bash", "type": "kubectl-command_explanation", "context": "Monitor Node Health | If you are using a custom cluster bootstrap solution and don't need to overwrite the default configuration, you can leverage the Addon pod to further automate the deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2191"} {"prompt": "Explain this bash code:\n\nkubectl create configmap node-problem-detector-config --from-file=config/", "completion": "This bash code Monitor Node Health | Change the 
node-problem-detector.yaml to use the ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Monitor Node Health | Change the node-problem-detector.yaml to use the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2192"} {"prompt": "Complete this yaml code:\n\ndebug/node-problem-detector-configmap.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:", "completion": " - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2193"} {"prompt": "Explain this yaml code:\n\ndebug/node-problem-detector-configmap.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: 
node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "completion": "This yaml code Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2194"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:", "completion": " - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n 
securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "language": "yaml", "type": "kubernetes-manifest", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_completion_2195"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: node-problem-detector-v0.1\n namespace: kube-system\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\nspec:\n selector:\n matchLabels:\n k8s-app: node-problem-detector \n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n template:\n metadata:\n labels:\n k8s-app: node-problem-detector\n version: v0.1\n kubernetes.io/cluster-service: \"true\"\n spec:\n hostNetwork: true\n containers:\n - name: node-problem-detector\n image: registry.k8s.io/node-problem-detector:v0.1\n securityContext:\n privileged: true\n resources:\n limits:\n cpu: \"200m\"\n memory: \"100Mi\"\n requests:\n cpu: \"20m\"\n memory: \"20Mi\"\n volumeMounts:\n - name: log\n mountPath: /log\n readOnly: true\n - name: config # Overwrite the config/ directory with ConfigMap volume\n mountPath: /config\n readOnly: true\n volumes:\n - name: log\n hostPath:\n path: /var/log/\n - name: config # Define ConfigMap volume\n configMap:\n name: node-problem-detector-config", "completion": "This yaml code Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Monitor Node Health | Recreate the Node Problem Detector with the new configuration file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2196"} {"prompt": "Explain this text code:\n\n# If you have a node-problem-detector running, delete before recreating\nkubectl delete -f https://k8s.io/examples/debug/node-problem-detector.yaml\nkubectl apply -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml", "completion": "This text code Monitor Node Health | Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. The Addon manager does not support ConfigMap.", "language": "text", "type": "code-example_explanation", "context": "Monitor Node Health | Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. The Addon manager does not support ConfigMap.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/monitor-node-health/", "id": "k8s_00082_code_explain_2197"} {"prompt": "Complete this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "bash", "type": "kubectl-command", "context": "Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2198"} {"prompt": "Explain this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: 
\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This bash code Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "language": "bash", "type": "kubectl-command_explanation", "context": "Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2199"} {"prompt": "Complete this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "language": "bash", "type": "kubectl-command", "context": "Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2200"} {"prompt": "Explain this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "completion": "This bash code Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "bash", "type": "kubectl-command_explanation", "context": "Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2201"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2202"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular 
path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2203"} {"prompt": "Complete this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "language": "bash", "type": "kubectl-command", "context": "Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2204"} {"prompt": "Explain this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "completion": "This bash code Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2205"} {"prompt": "Complete this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "language": "text", "type": "code-example", "context": "Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2206"} {"prompt": "Explain this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: 
storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "completion": "This text code Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2207"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2208"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2209"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2210"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2211"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": 
"parameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2212"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. 
A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2213"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2214"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2215"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2216"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2217"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": 
"yaml", "type": "pod-definition", "context": "Persistent Volumes | PersistentVolume binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2218"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code Persistent Volumes | PersistentVolume binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "Persistent Volumes | PersistentVolume binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2219"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": 
"k8s_00083_code_completion_2220"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2221"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2222"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the 
PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2223"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2224"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": 
"Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2225"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2226"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2227"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2228"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2229"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2230"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2231"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2232"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: 
allow-ns1-pvc\n namespace: default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2233"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce", "completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2234"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": 
"This yaml code Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2235"} {"prompt": "Complete this shell code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "shell", "type": "code-example", "context": "Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2236"} {"prompt": "Explain this shell code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This shell code Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "language": "shell", "type": "code-example_explanation", "context": "Persistent Volumes | You can see that a PV is 
protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2237"} {"prompt": "Complete this shell code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "language": "shell", "type": "code-example", "context": "Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2238"} {"prompt": "Explain this shell code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "completion": "This shell code Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "shell", "type": "code-example_explanation", "context": "Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2239"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2240"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular 
path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2241"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code Persistent Volumes | The finalizer kubernetes.io/pv-controller (introduced in v1.31) is added to dynamically provisioned in-tree plugin volumes and skipped for statically provisioned in-tree plugin volumes.", "language": "text", "type": "code-example_explanation", "context": "Persistent Volumes | The finalizer kubernetes.io/pv-controller (introduced in v1.31) is added to dynamically provisioned in-tree plugin volumes and skipped for statically provisioned in-tree plugin volumes.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2242"} {"prompt": "Complete this shell code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "language": "shell", "type": "code-example", "context": "Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2243"} {"prompt": "Explain this shell code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "completion": "This shell code Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "shell", "type": "code-example_explanation", "context": "Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2244"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2245"} {"prompt": "Complete this shell code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "language": "shell", "type": "code-example", "context": "Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": 
"https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2246"} {"prompt": "Explain this shell code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "completion": "This shell code Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "shell", "type": "code-example_explanation", "context": "Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2247"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code Persistent Volumes | The finalizers ensure that the PV object is removed only after the volume is deleted from the storage backend provided the reclaim policy of the PV is Delete. 
This also ensures that the volume is dele", "language": "text", "type": "code-example_explanation", "context": "Persistent Volumes | The finalizers ensure that the PV object is removed only after the volume is deleted from the storage backend provided the reclaim policy of the PV is Delete. This also ensures that the volume is dele", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2248"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2249"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2250"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2251"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2252"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": "parameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger 
size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2253"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2254"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2255"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2256"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2257"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n 
matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2258"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": "yaml", "type": "pod-definition", "context": "Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2259"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is 
only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2260"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2261"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2262"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " 
accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2263"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2264"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly 
from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2265"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2266"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2267"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: 
new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2268"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2269"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Kubernetes supports custom volume populators. 
To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2270"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2271"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2272"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2273"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2274"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: 
example\n accessModes:\n - ReadWriteOnce", "completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2275"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": "This yaml code Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2276"} {"prompt": "Complete this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: 
[kubernetes.io/pvc-protection]\n...", "language": "bash", "type": "kubectl-command", "context": "Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2277"} {"prompt": "Explain this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This bash code Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "language": "bash", "type": "kubectl-command_explanation", "context": "Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2278"} {"prompt": "Complete this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "language": "bash", "type": "kubectl-command", "context": "Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2279"} {"prompt": "Explain this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "completion": "This bash code Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "bash", "type": "kubectl-command_explanation", "context": "Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2280"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2281"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular 
path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2282"} {"prompt": "Complete this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "language": "bash", "type": "kubectl-command", "context": "Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2283"} {"prompt": "Explain this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "completion": "This bash code Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2284"} {"prompt": "Complete this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "language": "text", "type": "code-example", "context": "Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2285"} {"prompt": "Explain this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: 
storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "completion": "This text code Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2286"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2287"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2288"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2289"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2290"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": 
"parameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2291"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. 
A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2292"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2293"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2294"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2295"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2296"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": 
"yaml", "type": "pod-definition", "context": "Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2297"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2298"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": 
"k8s_00083_code_completion_2299"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2300"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2301"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the 
PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2302"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2303"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": 
"Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2304"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2305"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2306"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2307"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2308"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2309"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2310"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2311"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: 
allow-ns1-pvc\n namespace: default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2312"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce", "completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_completion_2313"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": 
"This yaml code Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/persistent-volumes/", "id": "k8s_00083_code_explain_2314"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz", "completion": " name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "Organizing Cluster Access Using kubeconfig Files |", "source": "https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/", "id": "k8s_00086_code_completion_2315"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz\n name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "completion": "This yaml code Organizing Cluster Access Using kubeconfig Files |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Organizing Cluster Access Using kubeconfig Files |", "source": "https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/", "id": "k8s_00086_code_explain_2316"}
{"prompt": "Explain this bash code:\n\nkubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace", "completion": "This bash code Garbage Collection | Kubernetes checks for and deletes objects that no longer have owner references, like the pods left behind when you delete a ReplicaSet. When you delete an object, you can control whether Kubernetes de", "language": "bash", "type": "kubectl-command_explanation", "context": "Garbage Collection | Kubernetes checks for and deletes objects that no longer have owner references, like the pods left behind when you delete a ReplicaSet. When you delete an object, you can control whether Kubernetes de", "source": "https://kubernetes.io/docs/concepts/architecture/garbage-collection/", "id": "k8s_00089_code_explain_2321"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:", "completion": " name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. 
T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2322"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "completion": "This yaml code Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2323"} {"prompt": "Explain this text code:\n\n# Use curl to download job-tmpl.yaml\ncurl -L -s -O https://k8s.io/examples/application/job/job-tmpl.yaml", "completion": "This text code Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. 
T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2324"} {"prompt": "Complete this text code:\n\n# Expand the template into multiple files, one for each item to be processed.\nmkdir ./jobs\nfor i in apple banana cherry", "completion": "do\n cat job-tmpl.yaml | sed \"s/\\$ITEM/$i/\" > ./jobs/job-$i.yaml\ndone", "language": "text", "type": "code-example", "context": "Parallel Processing using Expansions | Check if it worked:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2325"} {"prompt": "Explain this text code:\n\n# Expand the template into multiple files, one for each item to be processed.\nmkdir ./jobs\nfor i in apple banana cherry\ndo\n cat job-tmpl.yaml | sed \"s/\\$ITEM/$i/\" > ./jobs/job-$i.yaml\ndone", "completion": "This text code Parallel Processing using Expansions | Check if it worked:", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Check if it worked:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2326"} {"prompt": "Explain this text code:\n\njob.batch/process-item-apple created\njob.batch/process-item-banana created\njob.batch/process-item-cherry created", "completion": "This text code Parallel Processing using Expansions | Now, check on the jobs:", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Now, check on the jobs:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2327"} {"prompt": "Complete this text code:\n\nNAME COMPLETIONS DURATION AGE\nprocess-item-apple 1/1 14s 22s", "completion": "process-item-banana 1/1 12s 21s\nprocess-item-cherry 1/1 12s 20s", "language": "text", "type": "code-example", "context": "Parallel Processing using Expansions | Using the -l option to 
kubectl selects only the Jobs that are part of this group of jobs (there might be other unrelated jobs in the system).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2328"} {"prompt": "Explain this text code:\n\nNAME COMPLETIONS DURATION AGE\nprocess-item-apple 1/1 14s 22s\nprocess-item-banana 1/1 12s 21s\nprocess-item-cherry 1/1 12s 20s", "completion": "This text code Parallel Processing using Expansions | Using the -l option to kubectl selects only the Jobs that are part of this group of jobs (there might be other unrelated jobs in the system).", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Using the -l option to kubectl selects only the Jobs that are part of this group of jobs (there might be other unrelated jobs in the system).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2329"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nprocess-item-apple-kixwv 0/1 Completed 0 4m", "completion": "process-item-banana-wrsf7 0/1 Completed 0 4m\nprocess-item-cherry-dnfu9 0/1 Completed 0 4m", "language": "text", "type": "code-example", "context": "Parallel Processing using Expansions | We can use this single command to check on the output of all jobs at once:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2330"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nprocess-item-apple-kixwv 0/1 Completed 0 4m\nprocess-item-banana-wrsf7 0/1 Completed 0 4m\nprocess-item-cherry-dnfu9 0/1 Completed 0 4m", "completion": "This text code Parallel Processing using Expansions | We can use this single command to check on the output of all jobs at once:", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | We can use this single 
command to check on the output of all jobs at once:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2331"} {"prompt": "Explain this text code:\n\nProcessing item apple\nProcessing item banana\nProcessing item cherry", "completion": "This text code Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. However, names are restricted to contain only certain characters.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. However, names are restricted to contain only certain characters.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2332"} {"prompt": "Explain this text code:\n\n# Remove the Jobs you created\n# Your cluster automatically cleans up their Pods\nkubectl delete job -l jobgroup=jobexample", "completion": "This text code Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. However, names are restricted to contain only certain characters.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. 
However, names are restricted to contain only certain characters.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2333"} {"prompt": "Complete this yaml code:\n\n{% set params = [{ \"name\": \"apple\", \"url\": \"http://dbpedia.org/resource/Apple\", },\n { \"name\": \"banana\", \"url\": \"http://dbpedia.org/resource/Banana\", },\n { \"name\": \"cherry\", \"url\": \"http://dbpedia.org/resource/Cherry\" }]\n%}\n{% for p in params %}\n{% set name = p[\"name\"] %}\n{% set url = p[\"url\"] %}\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: jobexample-{{ name }}\n labels:", "completion": " jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing URL {{ url }} && sleep 5\"]\n restartPolicy: Never\n{% endfor %}", "language": "yaml", "type": "kubernetes-manifest", "context": "Parallel Processing using Expansions | The above template defines two parameters for each Job object using a list of python dicts (lines 1-4). 
A for loop emits one Job manifest for each set of parameters (remaining lines).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2334"} {"prompt": "Explain this yaml code:\n\n{% set params = [{ \"name\": \"apple\", \"url\": \"http://dbpedia.org/resource/Apple\", },\n { \"name\": \"banana\", \"url\": \"http://dbpedia.org/resource/Banana\", },\n { \"name\": \"cherry\", \"url\": \"http://dbpedia.org/resource/Cherry\" }]\n%}\n{% for p in params %}\n{% set name = p[\"name\"] %}\n{% set url = p[\"url\"] %}\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: jobexample-{{ name }}\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing URL {{ url }} && sleep 5\"]\n restartPolicy: Never\n{% endfor %}", "completion": "This yaml code Parallel Processing using Expansions | The above template defines two parameters for each Job object using a list of python dicts (lines 1-4). A for loop emits one Job manifest for each set of parameters (remaining lines).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Parallel Processing using Expansions | The above template defines two parameters for each Job object using a list of python dicts (lines 1-4). 
A for loop emits one Job manifest for each set of parameters (remaining lines).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2335"} {"prompt": "Explain this python code:\n\nalias render_template='python -c \"from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());\"'", "completion": "This python code Parallel Processing using Expansions | Use render_template to convert the parameters and template into a single YAML file containing Kubernetes manifests:", "language": "python", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Use render_template to convert the parameters and template into a single YAML file containing Kubernetes manifests:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2336"} {"prompt": "Explain this text code:\n\n# This requires the alias you defined earlier\ncat job.yaml.jinja2 | render_template > jobs.yaml", "completion": "This text code Parallel Processing using Expansions | You can view jobs.yaml to verify that the render_template script worked correctly.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | You can view jobs.yaml to verify that the render_template script worked correctly.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2337"} {"prompt": "Explain this text code:\n\ncat job.yaml.jinja2 | render_template | kubectl apply -f -", "completion": "This text code Parallel Processing using Expansions | Kubernetes accepts and runs the Jobs you created.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Kubernetes accepts and runs the Jobs you created.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2338"} 
{"prompt": "Explain this text code:\n\n# Remove the Jobs you created\n# Your cluster automatically cleans up their Pods\nkubectl delete job -l jobgroup=jobexample", "completion": "This text code Parallel Processing using Expansions | In a real use case, each Job performs some substantial computation, such as rendering a frame of a movie, or processing a range of rows in a database. If you were rendering a movie you would set $ITEM", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In a real use case, each Job performs some substantial computation, such as rendering a frame of a movie, or processing a range of rows in a database. If you were rendering a movie you would set $ITEM", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2339"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:", "completion": " name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. 
T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2340"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "completion": "This yaml code Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2341"} {"prompt": "Explain this shell code:\n\n# Use curl to download job-tmpl.yaml\ncurl -L -s -O https://k8s.io/examples/application/job/job-tmpl.yaml", "completion": "This shell code Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "language": "shell", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. 
T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2342"} {"prompt": "Complete this shell code:\n\n# Expand the template into multiple files, one for each item to be processed.\nmkdir ./jobs\nfor i in apple banana cherry", "completion": "do\n cat job-tmpl.yaml | sed \"s/\\$ITEM/$i/\" > ./jobs/job-$i.yaml\ndone", "language": "shell", "type": "code-example", "context": "Parallel Processing using Expansions | Check if it worked:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2343"} {"prompt": "Explain this shell code:\n\n# Expand the template into multiple files, one for each item to be processed.\nmkdir ./jobs\nfor i in apple banana cherry\ndo\n cat job-tmpl.yaml | sed \"s/\\$ITEM/$i/\" > ./jobs/job-$i.yaml\ndone", "completion": "This shell code Parallel Processing using Expansions | Check if it worked:", "language": "shell", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Check if it worked:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2344"} {"prompt": "Explain this text code:\n\njob.batch/process-item-apple created\njob.batch/process-item-banana created\njob.batch/process-item-cherry created", "completion": "This text code Parallel Processing using Expansions | Now, check on the jobs:", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Now, check on the jobs:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2345"} {"prompt": "Complete this text code:\n\nNAME COMPLETIONS DURATION AGE\nprocess-item-apple 1/1 14s 22s", "completion": "process-item-banana 1/1 12s 21s\nprocess-item-cherry 1/1 12s 20s", "language": "text", "type": "code-example", "context": "Parallel Processing using Expansions | Using the -l option 
to kubectl selects only the Jobs that are part of this group of jobs (there might be other unrelated jobs in the system).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2346"} {"prompt": "Explain this text code:\n\nNAME COMPLETIONS DURATION AGE\nprocess-item-apple 1/1 14s 22s\nprocess-item-banana 1/1 12s 21s\nprocess-item-cherry 1/1 12s 20s", "completion": "This text code Parallel Processing using Expansions | Using the -l option to kubectl selects only the Jobs that are part of this group of jobs (there might be other unrelated jobs in the system).", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Using the -l option to kubectl selects only the Jobs that are part of this group of jobs (there might be other unrelated jobs in the system).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2347"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nprocess-item-apple-kixwv 0/1 Completed 0 4m", "completion": "process-item-banana-wrsf7 0/1 Completed 0 4m\nprocess-item-cherry-dnfu9 0/1 Completed 0 4m", "language": "text", "type": "code-example", "context": "Parallel Processing using Expansions | We can use this single command to check on the output of all jobs at once:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2348"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nprocess-item-apple-kixwv 0/1 Completed 0 4m\nprocess-item-banana-wrsf7 0/1 Completed 0 4m\nprocess-item-cherry-dnfu9 0/1 Completed 0 4m", "completion": "This text code Parallel Processing using Expansions | We can use this single command to check on the output of all jobs at once:", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | We can use this single 
command to check on the output of all jobs at once:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2349"} {"prompt": "Explain this text code:\n\nProcessing item apple\nProcessing item banana\nProcessing item cherry", "completion": "This text code Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. However, names are restricted to contain only certain characters.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. However, names are restricted to contain only certain characters.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2350"} {"prompt": "Explain this shell code:\n\n# Remove the Jobs you created\n# Your cluster automatically cleans up their Pods\nkubectl delete job -l jobgroup=jobexample", "completion": "This shell code Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. However, names are restricted to contain only certain characters.", "language": "shell", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. 
However, names are restricted to contain only certain characters.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2351"} {"prompt": "Complete this liquid code:\n\n{% set params = [{ \"name\": \"apple\", \"url\": \"http://dbpedia.org/resource/Apple\", },\n { \"name\": \"banana\", \"url\": \"http://dbpedia.org/resource/Banana\", },\n { \"name\": \"cherry\", \"url\": \"http://dbpedia.org/resource/Cherry\" }]\n%}\n{% for p in params %}\n{% set name = p[\"name\"] %}\n{% set url = p[\"url\"] %}\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: jobexample-{{ name }}\n labels:", "completion": " jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing URL {{ url }} && sleep 5\"]\n restartPolicy: Never\n{% endfor %}", "language": "liquid", "type": "code-example", "context": "Parallel Processing using Expansions | The above template defines two parameters for each Job object using a list of python dicts (lines 1-4). 
A for loop emits one Job manifest for each set of parameters (remaining lines).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2352"} {"prompt": "Explain this liquid code:\n\n{% set params = [{ \"name\": \"apple\", \"url\": \"http://dbpedia.org/resource/Apple\", },\n { \"name\": \"banana\", \"url\": \"http://dbpedia.org/resource/Banana\", },\n { \"name\": \"cherry\", \"url\": \"http://dbpedia.org/resource/Cherry\" }]\n%}\n{% for p in params %}\n{% set name = p[\"name\"] %}\n{% set url = p[\"url\"] %}\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: jobexample-{{ name }}\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing URL {{ url }} && sleep 5\"]\n restartPolicy: Never\n{% endfor %}", "completion": "This liquid code Parallel Processing using Expansions | The above template defines two parameters for each Job object using a list of python dicts (lines 1-4). A for loop emits one Job manifest for each set of parameters (remaining lines).", "language": "liquid", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | The above template defines two parameters for each Job object using a list of python dicts (lines 1-4). 
A for loop emits one Job manifest for each set of parameters (remaining lines).", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2353"} {"prompt": "Explain this shell code:\n\nalias render_template='python -c \"from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());\"'", "completion": "This shell code Parallel Processing using Expansions | Use render_template to convert the parameters and template into a single YAML file containing Kubernetes manifests:", "language": "shell", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Use render_template to convert the parameters and template into a single YAML file containing Kubernetes manifests:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2354"} {"prompt": "Explain this shell code:\n\n# This requires the alias you defined earlier\ncat job.yaml.jinja2 | render_template > jobs.yaml", "completion": "This shell code Parallel Processing using Expansions | You can view jobs.yaml to verify that the render_template script worked correctly.", "language": "shell", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | You can view jobs.yaml to verify that the render_template script worked correctly.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2355"} {"prompt": "Explain this shell code:\n\ncat job.yaml.jinja2 | render_template | kubectl apply -f -", "completion": "This shell code Parallel Processing using Expansions | Kubernetes accepts and runs the Jobs you created.", "language": "shell", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Kubernetes accepts and runs the Jobs you created.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": 
"k8s_00090_code_explain_2356"} {"prompt": "Explain this shell code:\n\n# Remove the Jobs you created\n# Your cluster automatically cleans up their Pods\nkubectl delete job -l jobgroup=jobexample", "completion": "This shell code Parallel Processing using Expansions | In a real use case, each Job performs some substantial computation, such as rendering a frame of a movie, or processing a range of rows in a database. If you were rendering a movie you would set $ITEM", "language": "shell", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In a real use case, each Job performs some substantial computation, such as rendering a frame of a movie, or processing a range of rows in a database. If you were rendering a movie you would set $ITEM", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2357"} {"prompt": "Complete this yaml code:\n\napplication/job/job-tmpl.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:", "completion": " metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. 
T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2358"} {"prompt": "Explain this yaml code:\n\napplication/job/job-tmpl.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "completion": "This yaml code Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2359"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:", "completion": " name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. 
T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2360"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: process-item-$ITEM\n labels:\n jobgroup: jobexample\nspec:\n template:\n metadata:\n name: jobexample\n labels:\n jobgroup: jobexample\n spec:\n containers:\n - name: c\n image: busybox:1.28\n command: [\"sh\", \"-c\", \"echo Processing item $ITEM && sleep 5\"]\n restartPolicy: Never", "completion": "This yaml code Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2361"} {"prompt": "Explain this text code:\n\n# Use curl to download job-tmpl.yaml\ncurl -L -s -O https://k8s.io/examples/application/job/job-tmpl.yaml", "completion": "This text code Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. T", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | The file you downloaded is not yet a valid Kubernetes manifest. Instead that template is a YAML representation of a Job object with some placeholders that need to be filled in before it can be used. 
T", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2362"} {"prompt": "Complete this text code:\n\n# Expand the template into multiple files, one for each item to be processed.\nmkdir ./jobs\nfor i in apple banana cherry", "completion": "do\n cat job-tmpl.yaml | sed \"s/\\$ITEM/$i/\" > ./jobs/job-$i.yaml\ndone", "language": "text", "type": "code-example", "context": "Parallel Processing using Expansions | Check if it worked:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_completion_2363"} {"prompt": "Explain this text code:\n\n# Expand the template into multiple files, one for each item to be processed.\nmkdir ./jobs\nfor i in apple banana cherry\ndo\n cat job-tmpl.yaml | sed \"s/\\$ITEM/$i/\" > ./jobs/job-$i.yaml\ndone", "completion": "This text code Parallel Processing using Expansions | Check if it worked:", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Check if it worked:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2364"} {"prompt": "Explain this text code:\n\n# Remove the Jobs you created\n# Your cluster automatically cleans up their Pods\nkubectl delete job -l jobgroup=jobexample", "completion": "This text code Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. However, names are restricted to contain only certain characters.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In the first example, each instance of the template had one parameter, and that parameter was also used in the Job's name. 
However, names are restricted to contain only certain characters.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2365"} {"prompt": "Explain this python code:\n\nalias render_template='python -c \"from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());\"'", "completion": "This python code Parallel Processing using Expansions | Use render_template to convert the parameters and template into a single YAML file containing Kubernetes manifests:", "language": "python", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Use render_template to convert the parameters and template into a single YAML file containing Kubernetes manifests:", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2366"} {"prompt": "Explain this text code:\n\n# This requires the alias you defined earlier\ncat job.yaml.jinja2 | render_template > jobs.yaml", "completion": "This text code Parallel Processing using Expansions | You can view jobs.yaml to verify that the render_template script worked correctly.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | You can view jobs.yaml to verify that the render_template script worked correctly.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2367"} {"prompt": "Explain this text code:\n\ncat job.yaml.jinja2 | render_template | kubectl apply -f -", "completion": "This text code Parallel Processing using Expansions | Kubernetes accepts and runs the Jobs you created.", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | Kubernetes accepts and runs the Jobs you created.", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2368"} {"prompt": 
"Explain this text code:\n\n# Remove the Jobs you created\n# Your cluster automatically cleans up their Pods\nkubectl delete job -l jobgroup=jobexample", "completion": "This text code Parallel Processing using Expansions | In a real use case, each Job performs some substantial computation, such as rendering a frame of a movie, or processing a range of rows in a database. If you were rendering a movie you would set $ITEM", "language": "text", "type": "code-example_explanation", "context": "Parallel Processing using Expansions | In a real use case, each Job performs some substantial computation, such as rendering a frame of a movie, or processing a range of rows in a database. If you were rendering a movie you would set $ITEM", "source": "https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/", "id": "k8s_00090_code_explain_2369"} {"prompt": "Explain this text code:\n\ngit clone https://github.com/kubernetes/kubernetes.git\ncd kubernetes\nmake", "completion": "This text code Configure Multiple Schedulers | Create a container image containing the kube-scheduler binary. Here is the Dockerfile to build the image:", "language": "text", "type": "code-example_explanation", "context": "Configure Multiple Schedulers | Create a container image containing the kube-scheduler binary. Here is the Dockerfile to build the image:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2370"} {"prompt": "Explain this text code:\n\nFROM busybox\nADD ./_output/local/bin/linux/amd64/kube-scheduler /usr/local/bin/kube-scheduler", "completion": "This text code Configure Multiple Schedulers | Save the file as Dockerfile, build the image and push it to a registry. This example pushes the image to Google Container Registry (GCR). For more details, please read the GCR documentation. 
Alternati", "language": "text", "type": "code-example_explanation", "context": "Configure Multiple Schedulers | Save the file as Dockerfile, build the image and push it to a registry. This example pushes the image to Google Container Registry (GCR). For more details, please read the GCR documentation. Alternati", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2371"} {"prompt": "Explain this bash code:\n\ndocker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 . # The image name and the repository\ngcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 # used in here is just an example", "completion": "This bash code Configure Multiple Schedulers | Now that you have your scheduler in a container image, create a pod configuration for it and run it in your Kubernetes cluster. But instead of creating a pod directly in the cluster, you can use a Dep", "language": "bash", "type": "shell-command_explanation", "context": "Configure Multiple Schedulers | Now that you have your scheduler in a container image, create a pod configuration for it and run it in your Kubernetes cluster. 
But instead of creating a pod directly in the cluster, you can use a Dep", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2372"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-kube-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-volume-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:volume-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: my-scheduler-extension-apiserver-authentication-reader\n namespace: kube-system\nroleRef:\n kind: Role\n name: extension-apiserver-authentication-reader\n apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: my-scheduler-config\n namespace: kube-system\ndata:\n my-scheduler-config.yaml: |\n apiVersion: kubescheduler.config.k8s.io/v1\n kind: KubeSchedulerConfiguration\n profiles:", "completion": " - schedulerName: my-scheduler\n leaderElection:\n leaderElect: false \n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n component: scheduler\n tier: control-plane\n name: my-scheduler\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n component: scheduler\n tier: control-plane\n replicas: 1\n template:\n metadata:\n labels:\n component: scheduler\n tier: control-plane\n version: second\n spec:\n 
serviceAccountName: my-scheduler\n containers:\n - command:\n - /usr/local/bin/kube-scheduler\n - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml\n image: gcr.io/my-gcp-project/my-kube-scheduler:1.0\n livenessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n initialDelaySeconds: 15\n name: kube-second-scheduler\n readinessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n resources:\n requests:\n cpu: '0.1'\n securityContext:\n privileged: false\n volumeMounts:\n - name: config-volume\n mountPath: /etc/kubernetes/my-scheduler\n hostNetwork: false\n hostPID: false\n volumes:\n - name: config-volume\n configMap:\n name: my-scheduler-config", "language": "yaml", "type": "deployment-definition", "context": "Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. This configuration has been passed to the kube-scheduler during initialization w", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2373"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-kube-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-volume-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:volume-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: 
my-scheduler-extension-apiserver-authentication-reader\n namespace: kube-system\nroleRef:\n kind: Role\n name: extension-apiserver-authentication-reader\n apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: my-scheduler-config\n namespace: kube-system\ndata:\n my-scheduler-config.yaml: |\n apiVersion: kubescheduler.config.k8s.io/v1\n kind: KubeSchedulerConfiguration\n profiles:\n - schedulerName: my-scheduler\n leaderElection:\n leaderElect: false \n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n component: scheduler\n tier: control-plane\n name: my-scheduler\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n component: scheduler\n tier: control-plane\n replicas: 1\n template:\n metadata:\n labels:\n component: scheduler\n tier: control-plane\n version: second\n spec:\n serviceAccountName: my-scheduler\n containers:\n - command:\n - /usr/local/bin/kube-scheduler\n - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml\n image: gcr.io/my-gcp-project/my-kube-scheduler:1.0\n livenessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n initialDelaySeconds: 15\n name: kube-second-scheduler\n readinessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n resources:\n requests:\n cpu: '0.1'\n securityContext:\n privileged: false\n volumeMounts:\n - name: config-volume\n mountPath: /etc/kubernetes/my-scheduler\n hostNetwork: false\n hostPID: false\n volumes:\n - name: config-volume\n configMap:\n name: my-scheduler-config", "completion": "This yaml code Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. 
This configuration has been passed to the kube-scheduler during initialization w", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. This configuration has been passed to the kube-scheduler during initialization w", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2374"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\n....", "completion": "my-scheduler-lnf4s-4744f 1/1 Running 0 2m\n...", "language": "text", "type": "code-example", "context": "Configure Multiple Schedulers | You should see a \"Running\" my-scheduler pod, in addition to the default kube-scheduler pod in this list.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2375"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\n....\nmy-scheduler-lnf4s-4744f 1/1 Running 0 2m\n...", "completion": "This text code Configure Multiple Schedulers | You should see a \"Running\" my-scheduler pod, in addition to the default kube-scheduler pod in this list.", "language": "text", "type": "code-example_explanation", "context": "Configure Multiple Schedulers | You should see a \"Running\" my-scheduler pod, in addition to the default kube-scheduler pod in this list.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2376"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n rbac.authorization.kubernetes.io/autoupdate: \"true\"\n labels:\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:kube-scheduler\nrules:\n - apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n 
- apiGroups:\n - coordination.k8s.io\n resourceNames:", "completion": " - kube-scheduler\n - my-scheduler\n resources:\n - leases\n verbs:\n - get\n - update\n - apiGroups:\n - \"\"\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - endpoints\n verbs:\n - delete\n - get\n - patch\n - update", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. In order to schedule a given pod using a specific", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2377"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n rbac.authorization.kubernetes.io/autoupdate: \"true\"\n labels:\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:kube-scheduler\nrules:\n - apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n - apiGroups:\n - coordination.k8s.io\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - leases\n verbs:\n - get\n - update\n - apiGroups:\n - \"\"\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - endpoints\n verbs:\n - delete\n - get\n - patch\n - update", "completion": "This yaml code Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. In order to schedule a given pod using a specific", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. 
In order to schedule a given pod using a specific", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2378"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: no-annotation\n labels:", "completion": " name: multischeduler-example\nspec:\n containers:\n - name: pod-with-no-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2379"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: no-annotation\n labels:\n name: multischeduler-example\nspec:\n containers:\n - name: pod-with-no-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2380"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-default-scheduler\n labels:", "completion": " name: multischeduler-example\nspec:\n schedulerName: default-scheduler\n containers:\n - name: pod-with-default-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to 
spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2381"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-default-scheduler\n labels:\n name: multischeduler-example\nspec:\n schedulerName: default-scheduler\n containers:\n - name: pod-with-default-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2382"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-second-scheduler\n labels:", "completion": " name: multischeduler-example\nspec:\n schedulerName: my-scheduler\n containers:\n - name: pod-with-second-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. 
Note that the value of spec.schedulerName should match the name supplied for the schedul", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2383"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-second-scheduler\n labels:\n name: multischeduler-example\nspec:\n schedulerName: my-scheduler\n containers:\n - name: pod-with-second-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. Note that the value of spec.schedulerName should match the name supplied for the schedul", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. Note that the value of spec.schedulerName should match the name supplied for the schedul", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2384"} {"prompt": "Explain this shell code:\n\ngit clone https://github.com/kubernetes/kubernetes.git\ncd kubernetes\nmake", "completion": "This shell code Configure Multiple Schedulers | Create a container image containing the kube-scheduler binary. Here is the Dockerfile to build the image:", "language": "shell", "type": "code-example_explanation", "context": "Configure Multiple Schedulers | Create a container image containing the kube-scheduler binary. 
Here is the Dockerfile to build the image:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2385"} {"prompt": "Explain this docker code:\n\nFROM busybox\nADD ./_output/local/bin/linux/amd64/kube-scheduler /usr/local/bin/kube-scheduler", "completion": "This docker code Configure Multiple Schedulers | Save the file as Dockerfile, build the image and push it to a registry. This example pushes the image to Google Container Registry (GCR). For more details, please read the GCR documentation. Alternati", "language": "docker", "type": "code-example_explanation", "context": "Configure Multiple Schedulers | Save the file as Dockerfile, build the image and push it to a registry. This example pushes the image to Google Container Registry (GCR). For more details, please read the GCR documentation. Alternati", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2386"} {"prompt": "Explain this shell code:\n\ndocker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 . # The image name and the repository\ngcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 # used in here is just an example", "completion": "This shell code Configure Multiple Schedulers | Now that you have your scheduler in a container image, create a pod configuration for it and run it in your Kubernetes cluster. But instead of creating a pod directly in the cluster, you can use a Dep", "language": "shell", "type": "code-example_explanation", "context": "Configure Multiple Schedulers | Now that you have your scheduler in a container image, create a pod configuration for it and run it in your Kubernetes cluster. 
But instead of creating a pod directly in the cluster, you can use a Dep", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2387"} {"prompt": "Explain this bash code:\n\ndocker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 . # The image name and the repository\ngcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 # used in here is just an example", "completion": "This bash code Configure Multiple Schedulers | Now that you have your scheduler in a container image, create a pod configuration for it and run it in your Kubernetes cluster. 
But instead of creating a pod directly in the cluster, you can use a Dep", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2402"} {"prompt": "Complete this yaml code:\n\nadmin/sched/my-scheduler.yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-kube-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-volume-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:volume-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: my-scheduler-extension-apiserver-authentication-reader\n namespace: kube-system\nroleRef:\n kind: Role\n name: extension-apiserver-authentication-reader\n apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: my-scheduler-config\n namespace: kube-system\ndata:\n my-scheduler-config.yaml: |\n apiVersion: kubescheduler.config.k8s.io/v1\n kind: KubeSchedulerConfiguration\n profiles:", "completion": " - schedulerName: my-scheduler\n leaderElection:\n leaderElect: false \n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n component: scheduler\n tier: control-plane\n name: my-scheduler\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n component: scheduler\n tier: control-plane\n replicas: 1\n template:\n metadata:\n labels:\n component: scheduler\n tier: control-plane\n version: 
second\n spec:\n serviceAccountName: my-scheduler\n containers:\n - command:\n - /usr/local/bin/kube-scheduler\n - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml\n image: gcr.io/my-gcp-project/my-kube-scheduler:1.0\n livenessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n initialDelaySeconds: 15\n name: kube-second-scheduler\n readinessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n resources:\n requests:\n cpu: '0.1'\n securityContext:\n privileged: false\n volumeMounts:\n - name: config-volume\n mountPath: /etc/kubernetes/my-scheduler\n hostNetwork: false\n hostPID: false\n volumes:\n - name: config-volume\n configMap:\n name: my-scheduler-config", "language": "yaml", "type": "deployment-definition", "context": "Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. This configuration has been passed to the kube-scheduler during initialization w", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2403"} {"prompt": "Explain this yaml code:\n\nadmin/sched/my-scheduler.yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-kube-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-volume-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:volume-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n 
name: my-scheduler-extension-apiserver-authentication-reader\n namespace: kube-system\nroleRef:\n kind: Role\n name: extension-apiserver-authentication-reader\n apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: my-scheduler-config\n namespace: kube-system\ndata:\n my-scheduler-config.yaml: |\n apiVersion: kubescheduler.config.k8s.io/v1\n kind: KubeSchedulerConfiguration\n profiles:\n - schedulerName: my-scheduler\n leaderElection:\n leaderElect: false \n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n component: scheduler\n tier: control-plane\n name: my-scheduler\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n component: scheduler\n tier: control-plane\n replicas: 1\n template:\n metadata:\n labels:\n component: scheduler\n tier: control-plane\n version: second\n spec:\n serviceAccountName: my-scheduler\n containers:\n - command:\n - /usr/local/bin/kube-scheduler\n - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml\n image: gcr.io/my-gcp-project/my-kube-scheduler:1.0\n livenessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n initialDelaySeconds: 15\n name: kube-second-scheduler\n readinessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n resources:\n requests:\n cpu: '0.1'\n securityContext:\n privileged: false\n volumeMounts:\n - name: config-volume\n mountPath: /etc/kubernetes/my-scheduler\n hostNetwork: false\n hostPID: false\n volumes:\n - name: config-volume\n configMap:\n name: my-scheduler-config", "completion": "This yaml code Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. 
This configuration has been passed to the kube-scheduler during initialization w", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. This configuration has been passed to the kube-scheduler during initialization w", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2404"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-kube-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-volume-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:volume-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: my-scheduler-extension-apiserver-authentication-reader\n namespace: kube-system\nroleRef:\n kind: Role\n name: extension-apiserver-authentication-reader\n apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: my-scheduler-config\n namespace: kube-system\ndata:\n my-scheduler-config.yaml: |\n apiVersion: kubescheduler.config.k8s.io/v1\n kind: KubeSchedulerConfiguration\n profiles:", "completion": " - schedulerName: my-scheduler\n leaderElection:\n leaderElect: false \n---\napiVersion: apps/v1\nkind: 
Deployment\nmetadata:\n labels:\n component: scheduler\n tier: control-plane\n name: my-scheduler\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n component: scheduler\n tier: control-plane\n replicas: 1\n template:\n metadata:\n labels:\n component: scheduler\n tier: control-plane\n version: second\n spec:\n serviceAccountName: my-scheduler\n containers:\n - command:\n - /usr/local/bin/kube-scheduler\n - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml\n image: gcr.io/my-gcp-project/my-kube-scheduler:1.0\n livenessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n initialDelaySeconds: 15\n name: kube-second-scheduler\n readinessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n resources:\n requests:\n cpu: '0.1'\n securityContext:\n privileged: false\n volumeMounts:\n - name: config-volume\n mountPath: /etc/kubernetes/my-scheduler\n hostNetwork: false\n hostPID: false\n volumes:\n - name: config-volume\n configMap:\n name: my-scheduler-config", "language": "yaml", "type": "deployment-definition", "context": "Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. 
This configuration has been passed to the kube-scheduler during initialization w", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2405"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-kube-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:kube-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: my-scheduler-as-volume-scheduler\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\nroleRef:\n kind: ClusterRole\n name: system:volume-scheduler\n apiGroup: rbac.authorization.k8s.io\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: my-scheduler-extension-apiserver-authentication-reader\n namespace: kube-system\nroleRef:\n kind: Role\n name: extension-apiserver-authentication-reader\n apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n name: my-scheduler\n namespace: kube-system\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: my-scheduler-config\n namespace: kube-system\ndata:\n my-scheduler-config.yaml: |\n apiVersion: kubescheduler.config.k8s.io/v1\n kind: KubeSchedulerConfiguration\n profiles:\n - schedulerName: my-scheduler\n leaderElection:\n leaderElect: false \n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n component: scheduler\n tier: control-plane\n name: my-scheduler\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n component: scheduler\n tier: control-plane\n replicas: 1\n template:\n metadata:\n labels:\n component: scheduler\n tier: control-plane\n version: second\n spec:\n serviceAccountName: 
my-scheduler\n containers:\n - command:\n - /usr/local/bin/kube-scheduler\n - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml\n image: gcr.io/my-gcp-project/my-kube-scheduler:1.0\n livenessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n initialDelaySeconds: 15\n name: kube-second-scheduler\n readinessProbe:\n httpGet:\n path: /healthz\n port: 10259\n scheme: HTTPS\n resources:\n requests:\n cpu: '0.1'\n securityContext:\n privileged: false\n volumeMounts:\n - name: config-volume\n mountPath: /etc/kubernetes/my-scheduler\n hostNetwork: false\n hostPID: false\n volumes:\n - name: config-volume\n configMap:\n name: my-scheduler-config", "completion": "This yaml code Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. This configuration has been passed to the kube-scheduler during initialization w", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure Multiple Schedulers | In the above manifest, you use a KubeSchedulerConfiguration to customize the behavior of your scheduler implementation. 
This configuration has been passed to the kube-scheduler during initialization w", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2406"} {"prompt": "Complete this yaml code:\n\nadmin/sched/clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n rbac.authorization.kubernetes.io/autoupdate: \"true\"\n labels:\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:kube-scheduler\nrules:\n - apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n - apiGroups:\n - coordination.k8s.io\n resourceNames:", "completion": " - kube-scheduler\n - my-scheduler\n resources:\n - leases\n verbs:\n - get\n - update\n - apiGroups:\n - \"\"\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - endpoints\n verbs:\n - delete\n - get\n - patch\n - update", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. 
In order to schedule a given pod using a specific", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2407"} {"prompt": "Explain this yaml code:\n\nadmin/sched/clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n rbac.authorization.kubernetes.io/autoupdate: \"true\"\n labels:\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:kube-scheduler\nrules:\n - apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n - apiGroups:\n - coordination.k8s.io\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - leases\n verbs:\n - get\n - update\n - apiGroups:\n - \"\"\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - endpoints\n verbs:\n - delete\n - get\n - patch\n - update", "completion": "This yaml code Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. In order to schedule a given pod using a specific", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. 
In order to schedule a given pod using a specific", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2408"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n rbac.authorization.kubernetes.io/autoupdate: \"true\"\n labels:\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:kube-scheduler\nrules:\n - apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n - apiGroups:\n - coordination.k8s.io\n resourceNames:", "completion": " - kube-scheduler\n - my-scheduler\n resources:\n - leases\n verbs:\n - get\n - update\n - apiGroups:\n - \"\"\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - endpoints\n verbs:\n - delete\n - get\n - patch\n - update", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. 
In order to schedule a given pod using a specific", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2409"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n rbac.authorization.kubernetes.io/autoupdate: \"true\"\n labels:\n kubernetes.io/bootstrapping: rbac-defaults\n name: system:kube-scheduler\nrules:\n - apiGroups:\n - coordination.k8s.io\n resources:\n - leases\n verbs:\n - create\n - apiGroups:\n - coordination.k8s.io\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - leases\n verbs:\n - get\n - update\n - apiGroups:\n - \"\"\n resourceNames:\n - kube-scheduler\n - my-scheduler\n resources:\n - endpoints\n verbs:\n - delete\n - get\n - patch\n - update", "completion": "This yaml code Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. In order to schedule a given pod using a specific", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Multiple Schedulers | Now that your second scheduler is running, create some pods, and direct them to be scheduled by either the default scheduler or the one you deployed. 
In order to schedule a given pod using a specific", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2410"} {"prompt": "Complete this yaml code:\n\nadmin/sched/pod1.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: no-annotation", "completion": " labels:\n name: multischeduler-example\nspec:\n containers:\n - name: pod-with-no-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2411"} {"prompt": "Explain this yaml code:\n\nadmin/sched/pod1.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: no-annotation\n labels:\n name: multischeduler-example\nspec:\n containers:\n - name: pod-with-no-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2412"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: no-annotation\n labels:", "completion": " name: multischeduler-example\nspec:\n containers:\n - name: pod-with-no-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the 
default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2413"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: no-annotation\n labels:\n name: multischeduler-example\nspec:\n containers:\n - name: pod-with-no-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2414"} {"prompt": "Complete this yaml code:\n\nadmin/sched/pod2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-default-scheduler\n labels:", "completion": " name: multischeduler-example\nspec:\n schedulerName: default-scheduler\n containers:\n - name: pod-with-default-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. 
In this case, we supply the name of the default scheduler which is default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2415"} {"prompt": "Explain this yaml code:\n\nadmin/sched/pod2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-default-scheduler\n labels:\n name: multischeduler-example\nspec:\n schedulerName: default-scheduler\n containers:\n - name: pod-with-default-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2416"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-default-scheduler\n labels:", "completion": " name: multischeduler-example\nspec:\n schedulerName: default-scheduler\n containers:\n - name: pod-with-default-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. 
In this case, we supply the name of the default scheduler which is default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2417"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-default-scheduler\n labels:\n name: multischeduler-example\nspec:\n schedulerName: default-scheduler\n containers:\n - name: pod-with-default-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | A scheduler is specified by supplying the scheduler name as a value to spec.schedulerName. In this case, we supply the name of the default scheduler which is default-scheduler.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2418"} {"prompt": "Complete this yaml code:\n\nadmin/sched/pod3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-second-scheduler\n labels:", "completion": " name: multischeduler-example\nspec:\n schedulerName: my-scheduler\n containers:\n - name: pod-with-second-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. 
Note that the value of spec.schedulerName should match the name supplied for the schedul", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2419"} {"prompt": "Explain this yaml code:\n\nadmin/sched/pod3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-second-scheduler\n labels:\n name: multischeduler-example\nspec:\n schedulerName: my-scheduler\n containers:\n - name: pod-with-second-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. Note that the value of spec.schedulerName should match the name supplied for the schedul", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. Note that the value of spec.schedulerName should match the name supplied for the schedul", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2420"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-second-scheduler\n labels:", "completion": " name: multischeduler-example\nspec:\n schedulerName: my-scheduler\n containers:\n - name: pod-with-second-annotation-container\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. 
Note that the value of spec.schedulerName should match the name supplied for the schedul", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_completion_2421"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotation-second-scheduler\n labels:\n name: multischeduler-example\nspec:\n schedulerName: my-scheduler\n containers:\n - name: pod-with-second-annotation-container\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. Note that the value of spec.schedulerName should match the name supplied for the schedul", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Multiple Schedulers | In this case, we specify that this pod should be scheduled using the scheduler that we deployed - my-scheduler. Note that the value of spec.schedulerName should match the name supplied for the schedul", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/", "id": "k8s_00091_code_explain_2422"} {"prompt": "Explain this text code:\n\netcd --listen-client-urls=http://$PRIVATE_IP:2379 \\\n --advertise-client-urls=http://$PRIVATE_IP:2379", "completion": "This text code Operating etcd clusters for Kubernetes | Start the Kubernetes API server with the flag --etcd-servers=$PRIVATE_IP:2379.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the Kubernetes API server with the flag --etcd-servers=$PRIVATE_IP:2379.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2423"} {"prompt": "Explain this text code:\n\netcd --listen-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379 
--advertise-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379", "completion": "This text code Operating etcd clusters for Kubernetes | Start the Kubernetes API servers with the flag --etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the Kubernetes API servers with the flag --etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2424"} {"prompt": "Complete this text code:\n\nETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \\\n --cert=/etc/kubernetes/pki/etcd/server.crt \\", "completion": " --key=/etc/kubernetes/pki/etcd/server.key \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\\n member list", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | After configuring secure communication, restrict the access of the etcd cluster to only the Kubernetes API servers using TLS authentication.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2425"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \\\n --cert=/etc/kubernetes/pki/etcd/server.crt \\\n --key=/etc/kubernetes/pki/etcd/server.key \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\\n member list", "completion": "This text code Operating etcd clusters for Kubernetes | After configuring secure communication, restrict the access of the etcd cluster to only the Kubernetes API servers using TLS authentication.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | After configuring secure communication, restrict the access of the etcd cluster to only the Kubernetes API servers using TLS authentication.", 
"source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2426"} {"prompt": "Explain this text code:\n\netcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list", "completion": "This text code Operating etcd clusters for Kubernetes | The following message is displayed:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | The following message is displayed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2427"} {"prompt": "Explain this text code:\n\n8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379\n91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379\nfd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379", "completion": "This text code Operating etcd clusters for Kubernetes | Do either of the following:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2428"} {"prompt": "Explain this text code:\n\netcdctl member add member4 --peer-urls=http://10.0.0.4:2380", "completion": "This text code Operating etcd clusters for Kubernetes | The following message is displayed:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | The following message is displayed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2429"} {"prompt": "Explain this text code:\n\nMember 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4", "completion": "This text code Operating etcd clusters for Kubernetes | Start the newly added member on a machine with the IP 10.0.0.4:", "language": "text", 
"type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the newly added member on a machine with the IP 10.0.0.4:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2430"} {"prompt": "Complete this text code:\n\nexport ETCD_NAME=\"member4\"\nexport ETCD_INITIAL_CLUSTER=\"member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380\"", "completion": "export ETCD_INITIAL_CLUSTER_STATE=existing\netcd [flags]", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2431"} {"prompt": "Explain this text code:\n\nexport ETCD_NAME=\"member4\"\nexport ETCD_INITIAL_CLUSTER=\"member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380\"\nexport ETCD_INITIAL_CLUSTER_STATE=existing\netcd [flags]", "completion": "This text code Operating etcd clusters for Kubernetes | Do either of the following:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2432"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | Verify the snapshot:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Verify the snapshot:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2433"} {"prompt": "Explain this text code:\n\netcdutl --write-out=table snapshot status snapshot.db", "completion": "This text 
code Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2434"} {"prompt": "Complete this text code:\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |", "completion": "+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. It is recommended to utilize etcdutl instead. The below example depicts the usage", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2435"} {"prompt": "Explain this text code:\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "completion": "This text code Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. 
It is recommended to utilize etcdutl instead. The below example depicts the usage", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2436"} {"prompt": "Explain this text code:\n\nexport ETCDCTL_API=3\netcdctl --write-out=table snapshot status snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2437"} {"prompt": "Complete this text code:\n\nDeprecated: Use `etcdutl snapshot status` instead.\n\n+----------+----------+------------+------------+", "completion": "| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, 
back up etcd data by creating a snapshot of the storage volume.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by creating a snapshot of the storage volume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2439"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \\\n --cacert= --cert= --key= \\\n snapshot save ", "completion": "This text code Operating etcd clusters for Kubernetes | where trusted-ca-file, cert-file and key-file can be obtained from the description of the etcd Pod.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | where trusted-ca-file, cert-file and key-file can be obtained from the description of the etcd Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2440"} {"prompt": "Explain this text code:\n\netcdutl --data-dir snapshot restore snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | where is a directory that will be created during the restore process.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | where is a directory that will be created during the restore process.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2441"} {"prompt": "Explain this text code:\n\nexport ETCDCTL_API=3\netcdctl --data-dir snapshot restore snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | If is the same folder as before, delete it and stop the etcd process before restoring the cluster. 
Otherwise, change etcd configuration and restart the etcd process after restoration", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | If is the same folder as before, delete it and stop the etcd process before restoring the cluster. Otherwise, change etcd configuration and restart the etcd process after restoration", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2442"} {"prompt": "Explain this sh code:\n\netcd --listen-client-urls=http://$PRIVATE_IP:2379 \\\n --advertise-client-urls=http://$PRIVATE_IP:2379", "completion": "This sh code Operating etcd clusters for Kubernetes | Start the Kubernetes API server with the flag --etcd-servers=$PRIVATE_IP:2379.", "language": "sh", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the Kubernetes API server with the flag --etcd-servers=$PRIVATE_IP:2379.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2443"} {"prompt": "Explain this shell code:\n\netcd --listen-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379 --advertise-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379", "completion": "This shell code Operating etcd clusters for Kubernetes | Start the Kubernetes API servers with the flag --etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379.", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the Kubernetes API servers with the flag --etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2444"} {"prompt": "Explain this text 
code:\n\n--etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379", "completion": "This text code Operating etcd clusters for Kubernetes | Make sure the IP variables are set to your client IP addresses.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Make sure the IP variables are set to your client IP addresses.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2445"} {"prompt": "Complete this text code:\n\nETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \\\n --cert=/etc/kubernetes/pki/etcd/server.crt \\", "completion": " --key=/etc/kubernetes/pki/etcd/server.key \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\\n member list", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | After configuring secure communication, restrict the access of the etcd cluster to only the Kubernetes API servers using TLS authentication.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2446"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \\\n --cert=/etc/kubernetes/pki/etcd/server.crt \\\n --key=/etc/kubernetes/pki/etcd/server.key \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\\n member list", "completion": "This text code Operating etcd clusters for Kubernetes | After configuring secure communication, restrict the access of the etcd cluster to only the Kubernetes API servers using TLS authentication.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | After configuring secure communication, restrict the access of the etcd cluster to only the Kubernetes API servers using TLS authentication.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2447"} {"prompt": 
"Explain this shell code:\n\netcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list", "completion": "This shell code Operating etcd clusters for Kubernetes | The following message is displayed:", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | The following message is displayed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2448"} {"prompt": "Explain this console code:\n\n8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379\n91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379\nfd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379", "completion": "This console code Operating etcd clusters for Kubernetes | Do either of the following:", "language": "console", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2449"} {"prompt": "Explain this shell code:\n\netcdctl member add member4 --peer-urls=http://10.0.0.4:2380", "completion": "This shell code Operating etcd clusters for Kubernetes | The following message is displayed:", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | The following message is displayed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2450"} {"prompt": "Explain this console code:\n\nMember 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4", "completion": "This console code Operating etcd clusters for Kubernetes | Start the newly added member on a machine with the IP 10.0.0.4:", "language": "console", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the newly added 
member on a machine with the IP 10.0.0.4:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2451"} {"prompt": "Complete this shell code:\n\nexport ETCD_NAME=\"member4\"\nexport ETCD_INITIAL_CLUSTER=\"member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380\"", "completion": "export ETCD_INITIAL_CLUSTER_STATE=existing\netcd [flags]", "language": "shell", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2452"} {"prompt": "Explain this shell code:\n\nexport ETCD_NAME=\"member4\"\nexport ETCD_INITIAL_CLUSTER=\"member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380\"\nexport ETCD_INITIAL_CLUSTER_STATE=existing\netcd [flags]", "completion": "This shell code Operating etcd clusters for Kubernetes | Do either of the following:", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2453"} {"prompt": "Explain this shell code:\n\nETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db", "completion": "This shell code Operating etcd clusters for Kubernetes | Verify the snapshot:", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Verify the snapshot:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2454"} {"prompt": "Explain this shell code:\n\netcdutl --write-out=table snapshot status snapshot.db", "completion": "This shell code Operating etcd clusters for Kubernetes | This should generate an output resembling the example 
provided below:", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2455"} {"prompt": "Complete this console code:\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |", "completion": "+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "language": "console", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. It is recommended to utilize etcdutl instead. The below example depicts the usage", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2456"} {"prompt": "Explain this console code:\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "completion": "This console code Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. It is recommended to utilize etcdutl instead. The below example depicts the usage", "language": "console", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. 
It is recommended to utilize etcdutl instead. The below example depicts the usage", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2457"} {"prompt": "Explain this shell code:\n\nexport ETCDCTL_API=3\netcdctl --write-out=table snapshot status snapshot.db", "completion": "This shell code Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2458"} {"prompt": "Complete this console code:\n\nDeprecated: Use `etcdutl snapshot status` instead.\n\n+----------+----------+------------+------------+", "completion": "| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "language": "console", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by creating a snapshot of the storage volume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2459"} {"prompt": "Explain this console code:\n\nDeprecated: Use `etcdutl snapshot status` instead.\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "completion": "This console code Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon 
Elastic Block Store, back up etcd data by creating a snapshot of the storage volume.", "language": "console", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by creating a snapshot of the storage volume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2460"} {"prompt": "Explain this shell code:\n\nETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \\\n --cacert= --cert= --key= \\\n snapshot save ", "completion": "This shell code Operating etcd clusters for Kubernetes | where trusted-ca-file, cert-file and key-file can be obtained from the description of the etcd Pod.", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | where trusted-ca-file, cert-file and key-file can be obtained from the description of the etcd Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2461"} {"prompt": "Explain this shell code:\n\netcdutl --data-dir snapshot restore snapshot.db", "completion": "This shell code Operating etcd clusters for Kubernetes | where is a directory that will be created during the restore process.", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | where is a directory that will be created during the restore process.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2462"} {"prompt": "Explain this shell code:\n\nexport ETCDCTL_API=3\netcdctl --data-dir snapshot restore snapshot.db", "completion": "This shell code Operating etcd clusters for Kubernetes | If is the same folder as before, delete it and stop the etcd process before restoring the cluster. 
Otherwise, change etcd configuration and restart the etcd process after restoration", "language": "shell", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | If is the same folder as before, delete it and stop the etcd process before restoring the cluster. Otherwise, change etcd configuration and restart the etcd process after restoration", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2463"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system delete pod ", "completion": "This bash code Operating etcd clusters for Kubernetes | For more information and examples on restoring a cluster from a snapshot file, see etcd disaster recovery documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Operating etcd clusters for Kubernetes | For more information and examples on restoring a cluster from a snapshot file, see etcd disaster recovery documentation.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2464"} {"prompt": "Explain this text code:\n\netcd --listen-client-urls=http://$PRIVATE_IP:2379 \\\n --advertise-client-urls=http://$PRIVATE_IP:2379", "completion": "This text code Operating etcd clusters for Kubernetes | Start the Kubernetes API server with the flag --etcd-servers=$PRIVATE_IP:2379.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the Kubernetes API server with the flag --etcd-servers=$PRIVATE_IP:2379.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2465"} {"prompt": "Explain this text code:\n\netcd --listen-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379 
--advertise-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379", "completion": "This text code Operating etcd clusters for Kubernetes | Start the Kubernetes API servers with the flag --etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the Kubernetes API servers with the flag --etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2466"} {"prompt": "Explain this text code:\n\netcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list", "completion": "This text code Operating etcd clusters for Kubernetes | The following message is displayed:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | The following message is displayed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2467"} {"prompt": "Explain this text code:\n\n8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379\n91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379\nfd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379", "completion": "This text code Operating etcd clusters for Kubernetes | Do either of the following:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2468"} {"prompt": "Explain this text code:\n\netcdctl member add member4 --peer-urls=http://10.0.0.4:2380", "completion": "This text code Operating etcd clusters for Kubernetes | The following message is displayed:", 
"language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | The following message is displayed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2469"} {"prompt": "Explain this text code:\n\nMember 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4", "completion": "This text code Operating etcd clusters for Kubernetes | Start the newly added member on a machine with the IP 10.0.0.4:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Start the newly added member on a machine with the IP 10.0.0.4:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2470"} {"prompt": "Complete this text code:\n\nexport ETCD_NAME=\"member4\"\nexport ETCD_INITIAL_CLUSTER=\"member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380\"", "completion": "export ETCD_INITIAL_CLUSTER_STATE=existing\netcd [flags]", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2471"} {"prompt": "Explain this text code:\n\nexport ETCD_NAME=\"member4\"\nexport ETCD_INITIAL_CLUSTER=\"member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380\"\nexport ETCD_INITIAL_CLUSTER_STATE=existing\netcd [flags]", "completion": "This text code Operating etcd clusters for Kubernetes | Do either of the following:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Do either of the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2472"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 
etcdctl --endpoints $ENDPOINT snapshot save snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | Verify the snapshot:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Verify the snapshot:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2473"} {"prompt": "Explain this text code:\n\netcdutl --write-out=table snapshot status snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2474"} {"prompt": "Complete this text code:\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |", "completion": "+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. 
It is recommended to utilize etcdutl instead. The below example depicts the usage", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2475"} {"prompt": "Explain this text code:\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "completion": "This text code Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. It is recommended to utilize etcdutl instead. The below example depicts the usage", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | Note: The usage of etcdctl snapshot status has been deprecated since etcd v3.5.x and is slated for removal from etcd v3.6. It is recommended to utilize etcdutl instead. The below example depicts the usage", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2476"} {"prompt": "Explain this text code:\n\nexport ETCDCTL_API=3\netcdctl --write-out=table snapshot status snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | This should generate an output resembling the example provided below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2477"} {"prompt": "Complete this text code:\n\nDeprecated: Use `etcdutl snapshot status` instead.\n\n+----------+----------+------------+------------+", "completion": "| HASH | REVISION | TOTAL KEYS | TOTAL SIZE 
|\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "language": "text", "type": "code-example", "context": "Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by creating a snapshot of the storage volume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_completion_2478"} {"prompt": "Explain this text code:\n\nDeprecated: Use `etcdutl snapshot status` instead.\n\n+----------+----------+------------+------------+\n| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |\n+----------+----------+------------+------------+\n| fe01cf57 | 10 | 7 | 2.1 MB |\n+----------+----------+------------+------------+", "completion": "This text code Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by creating a snapshot of the storage volume.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by creating a snapshot of the storage volume.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2479"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \\\n --cacert= --cert= --key= \\\n snapshot save ", "completion": "This text code Operating etcd clusters for Kubernetes | where trusted-ca-file, cert-file and key-file can be obtained from the description of the etcd Pod.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | where trusted-ca-file, cert-file and key-file can be obtained from the description 
of the etcd Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2480"} {"prompt": "Explain this text code:\n\netcdutl --data-dir snapshot restore snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | where is a directory that will be created during the restore process.", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | where is a directory that will be created during the restore process.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2481"} {"prompt": "Explain this text code:\n\nexport ETCDCTL_API=3\netcdctl --data-dir snapshot restore snapshot.db", "completion": "This text code Operating etcd clusters for Kubernetes | If is the same folder as before, delete it and stop the etcd process before restoring the cluster. Otherwise, change etcd configuration and restart the etcd process after restorati", "language": "text", "type": "code-example_explanation", "context": "Operating etcd clusters for Kubernetes | If is the same folder as before, delete it and stop the etcd process before restoring the cluster. Otherwise, change etcd configuration and restart the etcd process after restorati", "source": "https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/", "id": "k8s_00092_code_explain_2482"} {"prompt": "Explain this bash code:\n\nkubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--resource-version=version]", "completion": "This bash code kubectl label | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl label | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/", "id": "k8s_00093_code_explain_2483"} {"prompt": "Complete this text code:\n\n# Update pod 'foo' with the label 'unhealthy' and the value 'true'\n kubectl label pods foo unhealthy=true\n \n # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value\n kubectl label --overwrite pods foo status=unhealthy\n \n # Update all pods in the namespace\n kubectl label pods --all status=unhealthy\n ", "completion": " # Update a pod identified by the type and name in \"pod.json\"\n kubectl label -f pod.json status=unhealthy\n \n # Update pod 'foo' only if the resource is unchanged from version 1\n kubectl label pods foo status=unhealthy --resource-version=1\n \n # Update pod 'foo' by removing a label named 'bar' if it exists\n # Does not require the --overwrite flag\n kubectl label pods foo bar-", "language": "text", "type": "code-example", "context": "kubectl label | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/", "id": "k8s_00093_code_completion_2484"} {"prompt": "Explain this text code:\n\n# Update pod 'foo' with the label 'unhealthy' and the value 'true'\n kubectl label pods foo unhealthy=true\n \n # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value\n kubectl label --overwrite pods foo status=unhealthy\n \n # Update all pods in the namespace\n kubectl label pods --all status=unhealthy\n \n # Update a pod identified by the type and name in \"pod.json\"\n kubectl label -f pod.json 
status=unhealthy\n \n # Update pod 'foo' only if the resource is unchanged from version 1\n kubectl label pods foo status=unhealthy --resource-version=1\n \n # Update pod 'foo' by removing a label named 'bar' if it exists\n # Does not require the --overwrite flag\n kubectl label pods foo bar-", "completion": "This text code kubectl label | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl label | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/", "id": "k8s_00093_code_explain_2485"} {"prompt": "Explain this bash code:\n\nkubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]", "completion": "This bash code kubectl label | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl label | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/", "id": "k8s_00093_code_explain_2486"} {"prompt": "Complete this text code:\n\n# Update pod 'foo' with the label 'unhealthy' and the value 'true'\n kubectl label pods foo unhealthy=true\n \n # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value\n kubectl label --overwrite pods foo status=unhealthy\n \n # Update all pods in the namespace\n kubectl label pods --all status=unhealthy\n ", "completion": " # Update a pod identified by the type and name in \"pod.json\"\n kubectl label -f pod.json status=unhealthy\n \n # Update pod 'foo' only if the resource is unchanged from version 1\n kubectl label pods foo status=unhealthy --resource-version=1\n \n # Update pod 'foo' by removing a label named 'bar' if it exists\n # Does not require the 
--overwrite flag\n kubectl label pods foo bar-", "language": "text", "type": "code-example", "context": "kubectl label | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/", "id": "k8s_00093_code_completion_2487"} {"prompt": "Explain this text code:\n\n# Update pod 'foo' with the label 'unhealthy' and the value 'true'\n kubectl label pods foo unhealthy=true\n \n # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value\n kubectl label --overwrite pods foo status=unhealthy\n \n # Update all pods in the namespace\n kubectl label pods --all status=unhealthy\n \n # Update a pod identified by the type and name in \"pod.json\"\n kubectl label -f pod.json status=unhealthy\n \n # Update pod 'foo' only if the resource is unchanged from version 1\n kubectl label pods foo status=unhealthy --resource-version=1\n \n # Update pod 'foo' by removing a label named 'bar' if it exists\n # Does not require the --overwrite flag\n kubectl label pods foo bar-", "completion": "This text code kubectl label | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl label | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_label/", "id": "k8s_00093_code_explain_2488"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\ningress-nginx-admission-create-g9g49 0/1 Completed 0 11m", "completion": "ingress-nginx-admission-patch-rqp78 0/1 Completed 1 11m\ningress-nginx-controller-59b45fb494-26npt 1/1 Running 0 11m", "language": "text", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create a Deployment using the following command:", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2489"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ningress-nginx-admission-create-g9g49 0/1 Completed 0 11m\ningress-nginx-admission-patch-rqp78 0/1 Completed 1 11m\ningress-nginx-controller-59b45fb494-26npt 1/1 Running 0 11m", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Create a Deployment using the following command:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create a Deployment using the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2490"} {"prompt": "Explain this bash code:\n\nkubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2491"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nweb 1/1 1 1 53s", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2492"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment web --type=NodePort --port=8080", "completion": "This bash code Set up Ingress on Minikube with 
the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2493"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nweb NodePort 10.104.133.249 8080:31637/TCP 12m", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Visit the Service via NodePort, using the minikube service command. Follow the instructions for your platform:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Visit the Service via NodePort, using the minikube service command. Follow the instructions for your platform:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2494"} {"prompt": "Explain this text code:\n\n# The command must be run in a separate terminal.\nminikube service web --url", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2495"} {"prompt": "Explain this text code:\n\nhttp://127.0.0.1:62445\n! 
Because you are using a Docker driver on darwin, the terminal needs to be open to run it.", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | From a different terminal, invoke the URL obtained in the output of the previous step:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From a different terminal, invoke the URL obtained in the output of the previous step:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2496"} {"prompt": "Explain this text code:\n\nHello, world!\nVersion: 1.0.0\nHostname: web-55b8c6998d-8k564", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | You can now access the sample application via the Minikube IP address and NodePort. The next step lets you access the application using the Ingress resource.", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | You can now access the sample application via the Minikube IP address and NodePort. 
The next step lets you access the application using the Ingress resource.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2497"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example", "completion": " http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: web\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2498"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: web\n port:\n number: 8080", "completion": "This yaml code Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2499"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | 
The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2500"} {"prompt": "Explain this text code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\nexample-ingress nginx hello-world.example 172.17.0.15 80 38s", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Verify that the Ingress controller is directing traffic, by following the instructions for your platform:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Verify that the Ingress controller is directing traffic, by following the instructions for your platform:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2501"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2502"} {"prompt": "Complete this text code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...", "completion": "\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "language": "text", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2503"} {"prompt": "Explain this text code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...\n\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2504"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "language": "text", "type": 
"code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2505"} {"prompt": "Explain this text code:\n\nHello, world!\nVersion: 1.0.0\nHostname: web-55b8c6998d-8k564", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2506"} {"prompt": "Explain this bash code:\n\nkubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2507"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nweb2 1/1 1 1 16s", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the second Deployment:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the second Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2508"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment 
web2 --port=8080 --type=NodePort", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2509"} {"prompt": "Complete this text code:\n\n- path: /v2\n pathType: Prefix\n backend:", "completion": " service:\n name: web2\n port:\n number: 8080", "language": "text", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2510"} {"prompt": "Explain this text code:\n\n- path: /v2\n pathType: Prefix\n backend:\n service:\n name: web2\n port:\n number: 8080", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2511"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2512"} {"prompt": "Complete this text code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...", "completion": "\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "language": "text", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2513"} {"prompt": "Explain this text code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...\n\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2514"} 
{"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2515"} {"prompt": "Explain this text code:\n\nHello, world!\nVersion: 1.0.0\nHostname: web-55b8c6998d-8k564", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Access the 2nd version of the Hello World app.", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Access the 2nd version of the Hello World app.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2516"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example/v2", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2517"} {"prompt": "Complete this text code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...", "completion": "\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "language": "text", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2518"} {"prompt": "Explain this text code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...\n\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2519"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example/v2", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on 
Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2520"} {"prompt": "Explain this text code:\n\nHello, world!\nVersion: 2.0.0\nHostname: web2-75cd47646f-t8cjk", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller |", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller |", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2521"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE\ningress-nginx-admission-create-g9g49 0/1 Completed 0 11m", "completion": "ingress-nginx-admission-patch-rqp78 0/1 Completed 1 11m\ningress-nginx-controller-59b45fb494-26npt 1/1 Running 0 11m", "language": "none", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create a Deployment using the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2522"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\ningress-nginx-admission-create-g9g49 0/1 Completed 0 11m\ningress-nginx-admission-patch-rqp78 0/1 Completed 1 11m\ningress-nginx-controller-59b45fb494-26npt 1/1 Running 0 11m", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | Create a Deployment using the following command:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create a Deployment using the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2523"} {"prompt": "Explain this shell code:\n\nkubectl create deployment web 
--image=gcr.io/google-samples/hello-app:1.0", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2524"} {"prompt": "Explain this none code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nweb 1/1 1 1 53s", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the Deployment:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2525"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment web --type=NodePort --port=8080", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2526"} {"prompt": "Explain this none code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nweb NodePort 10.104.133.249 8080:31637/TCP 12m", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | Visit the Service via NodePort, using the minikube service command. Follow the instructions for your platform:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Visit the Service via NodePort, using the minikube service command. 
Follow the instructions for your platform:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2527"} {"prompt": "Explain this shell code:\n\n# The command must be run in a separate terminal.\nminikube service web --url", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2528"} {"prompt": "Explain this none code:\n\nhttp://127.0.0.1:62445\n! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | From a different terminal, invoke the URL obtained in the output of the previous step:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From a different terminal, invoke the URL obtained in the output of the previous step:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2529"} {"prompt": "Explain this none code:\n\nHello, world!\nVersion: 1.0.0\nHostname: web-55b8c6998d-8k564", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | You can now access the sample application via the Minikube IP address and NodePort. The next step lets you access the application using the Ingress resource.", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | You can now access the sample application via the Minikube IP address and NodePort. 
The next step lets you access the application using the Ingress resource.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2530"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example", "completion": " http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: web\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2531"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: web\n port:\n number: 8080", "completion": "This yaml code Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2532"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | 
The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2533"} {"prompt": "Explain this none code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\nexample-ingress nginx hello-world.example 172.17.0.15 80 38s", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | Verify that the Ingress controller is directing traffic, by following the instructions for your platform:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Verify that the Ingress controller is directing traffic, by following the instructions for your platform:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2534"} {"prompt": "Explain this shell code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2535"} {"prompt": "Complete this none code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...", "completion": "\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "language": "none", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2536"} {"prompt": "Explain this none code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...\n\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2537"} {"prompt": "Explain this shell code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "language": "shell", "type": 
"code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2538"} {"prompt": "Explain this none code:\n\nHello, world!\nVersion: 1.0.0\nHostname: web-55b8c6998d-8k564", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2539"} {"prompt": "Explain this shell code:\n\nkubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2540"} {"prompt": "Explain this none code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nweb2 1/1 1 1 16s", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the second Deployment:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Expose the second Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2541"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment 
web2 --port=8080 --type=NodePort", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2542"} {"prompt": "Complete this yaml code:\n\n- path: /v2\n pathType: Prefix\n backend:", "completion": " service:\n name: web2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2543"} {"prompt": "Explain this yaml code:\n\n- path: /v2\n pathType: Prefix\n backend:\n service:\n name: web2\n port:\n number: 8080", "completion": "This yaml code Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2544"} {"prompt": "Explain this shell code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2545"} {"prompt": "Complete this none code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...", "completion": "\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "language": "none", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2546"} {"prompt": "Explain this none code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...\n\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2547"} 
{"prompt": "Explain this shell code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2548"} {"prompt": "Explain this none code:\n\nHello, world!\nVersion: 1.0.0\nHostname: web-55b8c6998d-8k564", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | Access the 2nd version of the Hello World app.", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Access the 2nd version of the Hello World app.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2549"} {"prompt": "Explain this shell code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example/v2", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2550"} {"prompt": "Complete this none code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...", "completion": "\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "language": "none", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2551"} {"prompt": "Explain this none code:\n\nTunnel successfully started\n\nNOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...\n\nThe service/ingress example-ingress requires privileged ports to be exposed: [80 443]\nsudo permission will be asked for it.\nStarting tunnel for service example-ingress.", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | From within a new terminal, invoke the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2552"} {"prompt": "Explain this shell code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example/v2", "completion": "This shell code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Set up Ingress on 
Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2553"} {"prompt": "Explain this none code:\n\nHello, world!\nVersion: 2.0.0\nHostname: web2-75cd47646f-t8cjk", "completion": "This none code Set up Ingress on Minikube with the NGINX Ingress Controller |", "language": "none", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller |", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2554"} {"prompt": "Explain this bash code:\n\nkubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2555"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment web --type=NodePort --port=8080", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2556"} {"prompt": "Explain this text code:\n\n# The command must be run in a separate terminal.\nminikube service web --url", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Set up 
Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2557"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example", "completion": " http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: web\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2558"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: web\n port:\n number: 8080", "completion": "This yaml code Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2559"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example", "completion": " http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n
name: web\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2560"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example-ingress\nspec:\n ingressClassName: nginx\n rules:\n - host: hello-world.example\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: web\n port:\n number: 8080", "completion": "This yaml code Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Create the Ingress object by running the following command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2561"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2562"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as 
this process must stay alive for the tunnel to be accessible ... The service/ingress e", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2563"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Optionally, you can also visit hello-world.example from your browser.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2564"} {"prompt": "Explain this bash code:\n\nkubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2565"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment web2 --port=8080 --type=NodePort", "completion": "This bash code Set up Ingress on Minikube with the NGINX Ingress Controller | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set up Ingress on 
Minikube with the NGINX Ingress Controller | The output should be:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2566"} {"prompt": "Complete this text code:\n\n- path: /v2\n pathType: Prefix\n backend:", "completion": " service:\n name: web2\n port:\n number: 8080", "language": "text", "type": "code-example", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_completion_2567"} {"prompt": "Explain this text code:\n\n- path: /v2\n pathType: Prefix\n backend:\n service:\n name: web2\n port:\n number: 8080", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | Apply the changes:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2568"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2569"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2570"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:$( minikube ip )\" -i http://hello-world.example/v2", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... The service/ingress e", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | minikube tunnel The output is similar to:Tunnel successfully started NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... 
The service/ingress e", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2571"} {"prompt": "Explain this text code:\n\ncurl --resolve \"hello-world.example:80:127.0.0.1\" -i http://hello-world.example/v2", "completion": "This text code Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Set up Ingress on Minikube with the NGINX Ingress Controller | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/", "id": "k8s_00094_code_explain_2572"} {"prompt": "Explain this text code:\n\nkubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`\ndomain=\nlocaldns=", "completion": "This text code Using NodeLocal DNSCache in Kubernetes Clusters | is \"cluster.local\" by default. is the local listen IP address chosen for NodeLocal DNSCache.", "language": "text", "type": "code-example_explanation", "context": "Using NodeLocal DNSCache in Kubernetes Clusters | is \"cluster.local\" by default. is the local listen IP address chosen for NodeLocal DNSCache.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/", "id": "k8s_00096_code_explain_2573"} {"prompt": "Explain this text code:\n\nsed -i \"s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g\" nodelocaldns.yaml", "completion": "This text code Using NodeLocal DNSCache in Kubernetes Clusters | __PILLAR__CLUSTER__DNS__ and __PILLAR__UPSTREAM__SERVERS__ will be populated by the node-local-dns pods. In this mode, the node-local-dns pods listen on both the kube-dns service IP as well as . 
The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses", "language": "text", "type": "code-example_explanation", "context": "Using NodeLocal DNSCache in Kubernetes Clusters | In this mode, the node-local-dns pods listen only on . The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses", "source": "https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/", "id": "k8s_00096_code_explain_2575"} {"prompt": "Explain this shell code:\n\nkubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`\ndomain=\nlocaldns=", "completion": "This shell code Using NodeLocal DNSCache in Kubernetes Clusters | is \"cluster.local\" by default. is the local listen IP address chosen for NodeLocal DNSCache.", "language": "shell", "type": "code-example_explanation", "context": "Using NodeLocal DNSCache in Kubernetes Clusters | is \"cluster.local\" by default. is the local listen IP address chosen for NodeLocal DNSCache.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/", "id": "k8s_00096_code_explain_2576"} {"prompt": "Explain this bash code:\n\nsed -i \"s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g\" nodelocaldns.yaml", "completion": "This bash code Using NodeLocal DNSCache in Kubernetes Clusters | __PILLAR__CLUSTER__DNS__ and __PILLAR__UPSTREAM__SERVERS__ will be populated by the node-local-dns pods. In this mode, the node-local-dns pods listen on both the kube-dns service IP as well as . The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses", "language": "bash", "type": "shell-command_explanation", "context": "Using NodeLocal DNSCache in Kubernetes Clusters | In this mode, the node-local-dns pods listen only on . 
The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses", "source": "https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/", "id": "k8s_00096_code_explain_2578"} {"prompt": "Explain this text code:\n\nkubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`\ndomain=<cluster-domain>\nlocaldns=<node-local-address>", "completion": "This text code Using NodeLocal DNSCache in Kubernetes Clusters | <cluster-domain> is \"cluster.local\" by default. <node-local-address> is the local listen IP address chosen for NodeLocal DNSCache.", "language": "text", "type": "code-example_explanation", "context": "Using NodeLocal DNSCache in Kubernetes Clusters | <cluster-domain> is \"cluster.local\" by default. <node-local-address> is the local listen IP address chosen for NodeLocal DNSCache.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/", "id": "k8s_00096_code_explain_2579"} {"prompt": "Explain this text code:\n\nsed -i \"s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g\" nodelocaldns.yaml", "completion": "This text code Using NodeLocal DNSCache in Kubernetes Clusters | __PILLAR__CLUSTER__DNS__ and __PILLAR__UPSTREAM__SERVERS__ will be populated by the node-local-dns pods. In this mode, the node-local-dns pods listen on both the kube-dns service IP as well as <node-local-address>. The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses", "language": "text", "type": "code-example_explanation", "context": "Using NodeLocal DNSCache in Kubernetes Clusters | In this mode, the node-local-dns pods listen only on <node-local-address>.
The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses", "source": "https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/", "id": "k8s_00096_code_explain_2581"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration", "completion": "clientConnection:\n kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2582"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nclientConnection:\n kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig", "completion": "This yaml code Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. 
Plugins provide scheduling behaviors by implementing", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2583"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - plugins:\n score:\n disabled:", "completion": " - name: PodTopologySpread\n enabled:\n - name: MyCustomPluginA\n weight: 2\n - name: MyCustomPluginB\n weight: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. This can also be used to rearrange plugins order, if desired.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2584"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - plugins:\n score:\n disabled:\n - name: PodTopologySpread\n enabled:\n - name: MyCustomPluginA\n weight: 2\n - name: MyCustomPluginB\n weight: 1", "completion": "This yaml code Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. This can also be used to rearrange plugins order, if desired.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. 
This can also be used to rearrange plugins order, if desired.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2585"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: no-scoring-scheduler\n plugins:", "completion": " preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2586"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: no-scoring-scheduler\n plugins:\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "completion": "This yaml code Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2587"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler", "completion": " plugins:\n multiPoint:\n enabled:\n - name: MyPlugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | This would equate to 
manually enabling MyPlugin for all of its extension points, like so:", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2588"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: MyPlugin", "completion": "This yaml code Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2589"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n preScore:\n enabled:\n - name: MyPlugin", "completion": " score:\n enabled:\n - name: MyPlugin\n preFilter:\n enabled:\n - name: MyPlugin\n filter:\n enabled:\n - name: MyPlugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2590"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n preScore:\n enabled:\n - name: MyPlugin\n score:\n enabled:\n - name: MyPlugin\n preFilter:\n enabled:\n - name: MyPlugin\n filter:\n enabled:\n - name: MyPlugin", "completion": "This yaml code Scheduler Configuration 
| One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2591"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n multiPoint:\n enabled:", "completion": " - name: 'MyPlugin'\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. However, individual extension points are still available to allow flexible reconfigura", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2592"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'MyPlugin'\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "completion": "This yaml code Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. 
However, individual extension points are still available to allow flexible reconfigura", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. However, individual extension points are still available to allow flexible reconfigura", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2593"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler", "completion": " plugins:\n score:\n enabled:\n - name: 'DefaultScore2'\n weight: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. This is because plugins set", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2594"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n score:\n enabled:\n - name: 'DefaultScore2'\n weight: 5", "completion": "This yaml code Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. This is because plugins set", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. 
This is because plugins set", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2595"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'CustomQueueSort'\n - name: 'CustomPlugin1'", "completion": " weight: 3\n - name: 'CustomPlugin2'\n disabled:\n - name: 'DefaultQueueSort'\n filter:\n disabled:\n - name: 'DefaultPlugin1'\n score:\n enabled:\n - name: 'DefaultPlugin2'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. The re-declaration is ignored (and logged), as specific extension points take precedence.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2596"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'CustomQueueSort'\n - name: 'CustomPlugin1'\n weight: 3\n - name: 'CustomPlugin2'\n disabled:\n - name: 'DefaultQueueSort'\n filter:\n disabled:\n - name: 'DefaultPlugin1'\n score:\n enabled:\n - name: 'DefaultPlugin2'", "completion": "This yaml code Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. The re-declaration is ignored (and logged), as specific extension points take precedence.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. 
The re-declaration is ignored (and logged), as specific extension points take precedence.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2597"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n\n # Disable the default QueueSort plugin\n queueSort:\n enabled:\n - name: 'CustomQueueSort'\n disabled:\n - name: 'DefaultQueueSort'\n\n # Enable custom Filter plugins", "completion": " filter:\n enabled:\n - name: 'CustomPlugin1'\n - name: 'CustomPlugin2'\n - name: 'DefaultPlugin2'\n disabled:\n - name: 'DefaultPlugin1'\n\n # Enable and reorder custom score plugins\n score:\n enabled:\n - name: 'DefaultPlugin2'\n weight: 1\n - name: 'DefaultPlugin1'\n weight: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | While this is a complicated example, it demonstrates the flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2598"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n\n # Disable the default QueueSort plugin\n queueSort:\n enabled:\n - name: 'CustomQueueSort'\n disabled:\n - name: 'DefaultQueueSort'\n\n # Enable custom Filter plugins\n filter:\n enabled:\n - name: 'CustomPlugin1'\n - name: 'CustomPlugin2'\n - name: 'DefaultPlugin2'\n disabled:\n - name: 'DefaultPlugin1'\n\n # Enable and reorder custom score plugins\n score:\n enabled:\n - name: 'DefaultPlugin2'\n weight: 1\n - name: 'DefaultPlugin1'\n weight: 3", "completion": "This yaml code Scheduler Configuration | While this is a complicated example, it demonstrates the 
flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | While this is a complicated example, it demonstrates the flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2599"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:", "completion": " scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2600"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2601"} {"prompt": "Complete this yaml code:\n\napiVersion: 
kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration", "completion": "clientConnection:\n kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2602"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nclientConnection:\n kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig", "completion": "This yaml code Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2603"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - plugins:\n score:\n disabled:", "completion": " - name: PodTopologySpread\n enabled:\n - name: MyCustomPluginA\n weight: 2\n - name: MyCustomPluginB\n weight: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. 
This can also be used to rearrange plugins order, if desired.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2604"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - plugins:\n score:\n disabled:\n - name: PodTopologySpread\n enabled:\n - name: MyCustomPluginA\n weight: 2\n - name: MyCustomPluginB\n weight: 1", "completion": "This yaml code Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. This can also be used to rearrange plugins order, if desired.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. This can also be used to rearrange plugins order, if desired.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2605"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: no-scoring-scheduler\n plugins:", "completion": " preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2606"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: no-scoring-scheduler\n plugins:\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "completion": "This 
yaml code Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2607"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler", "completion": " plugins:\n multiPoint:\n enabled:\n - name: MyPlugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2608"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: MyPlugin", "completion": "This yaml code Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2609"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n preScore:\n enabled:\n - name: MyPlugin", "completion": " score:\n enabled:\n - name: 
MyPlugin\n preFilter:\n enabled:\n - name: MyPlugin\n filter:\n enabled:\n - name: MyPlugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2610"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n preScore:\n enabled:\n - name: MyPlugin\n score:\n enabled:\n - name: MyPlugin\n preFilter:\n enabled:\n - name: MyPlugin\n filter:\n enabled:\n - name: MyPlugin", "completion": "This yaml code Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2611"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n multiPoint:\n enabled:", "completion": " - name: 'MyPlugin'\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. 
However, individual extension points are still available to allow flexible reconfigura", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2612"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'MyPlugin'\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "completion": "This yaml code Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. However, individual extension points are still available to allow flexible reconfigura", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. However, individual extension points are still available to allow flexible reconfigura", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2613"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler", "completion": " plugins:\n score:\n enabled:\n - name: 'DefaultScore2'\n weight: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. 
This is because plugins set", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2614"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n score:\n enabled:\n - name: 'DefaultScore2'\n weight: 5", "completion": "This yaml code Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. This is because plugins set", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. This is because plugins set", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2615"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'CustomQueueSort'\n - name: 'CustomPlugin1'", "completion": " weight: 3\n - name: 'CustomPlugin2'\n disabled:\n - name: 'DefaultQueueSort'\n filter:\n disabled:\n - name: 'DefaultPlugin1'\n score:\n enabled:\n - name: 'DefaultPlugin2'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. 
The re-declaration is ignored (and logged), as specific extension points take precedence.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2616"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'CustomQueueSort'\n - name: 'CustomPlugin1'\n weight: 3\n - name: 'CustomPlugin2'\n disabled:\n - name: 'DefaultQueueSort'\n filter:\n disabled:\n - name: 'DefaultPlugin1'\n score:\n enabled:\n - name: 'DefaultPlugin2'", "completion": "This yaml code Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. The re-declaration is ignored (and logged), as specific extension points take precedence.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. 
The re-declaration is ignored (and logged), as specific extension points take precedence.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2617"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n\n # Disable the default QueueSort plugin\n queueSort:\n enabled:\n - name: 'CustomQueueSort'\n disabled:\n - name: 'DefaultQueueSort'\n\n # Enable custom Filter plugins", "completion": " filter:\n enabled:\n - name: 'CustomPlugin1'\n - name: 'CustomPlugin2'\n - name: 'DefaultPlugin2'\n disabled:\n - name: 'DefaultPlugin1'\n\n # Enable and reorder custom score plugins\n score:\n enabled:\n - name: 'DefaultPlugin2'\n weight: 1\n - name: 'DefaultPlugin1'\n weight: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | While this is a complicated example, it demonstrates the flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2618"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n\n # Disable the default QueueSort plugin\n queueSort:\n enabled:\n - name: 'CustomQueueSort'\n disabled:\n - name: 'DefaultQueueSort'\n\n # Enable custom Filter plugins\n filter:\n enabled:\n - name: 'CustomPlugin1'\n - name: 'CustomPlugin2'\n - name: 'DefaultPlugin2'\n disabled:\n - name: 'DefaultPlugin1'\n\n # Enable and reorder custom score plugins\n score:\n enabled:\n - name: 'DefaultPlugin2'\n weight: 1\n - name: 'DefaultPlugin1'\n weight: 3", "completion": "This yaml code Scheduler Configuration | While this is a complicated example, it demonstrates the 
flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | While this is a complicated example, it demonstrates the flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2619"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:", "completion": " scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2620"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2621"} {"prompt": "Complete this yaml code:\n\napiVersion: 
kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration", "completion": "clientConnection:\n kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2622"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nclientConnection:\n kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig", "completion": "This yaml code Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | A scheduling Profile allows you to configure the different stages of scheduling in the kube-scheduler. Each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2623"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - plugins:\n score:\n disabled:", "completion": " - name: PodTopologySpread\n enabled:\n - name: MyCustomPluginA\n weight: 2\n - name: MyCustomPluginB\n weight: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. 
This can also be used to rearrange plugins order, if desired.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2624"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - plugins:\n score:\n disabled:\n - name: PodTopologySpread\n enabled:\n - name: MyCustomPluginA\n weight: 2\n - name: MyCustomPluginB\n weight: 1", "completion": "This yaml code Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. This can also be used to rearrange plugins order, if desired.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | You can use * as name in the disabled array to disable all default plugins for that extension point. This can also be used to rearrange plugins order, if desired.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2625"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: no-scoring-scheduler\n plugins:", "completion": " preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2626"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: no-scoring-scheduler\n plugins:\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "completion": "This 
yaml code Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its .spec.schedulerName.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2627"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler", "completion": " plugins:\n multiPoint:\n enabled:\n - name: MyPlugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2628"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: MyPlugin", "completion": "This yaml code Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | This would equate to manually enabling MyPlugin for all of its extension points, like so:", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2629"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n preScore:\n enabled:\n - name: MyPlugin", "completion": " score:\n enabled:\n - name: 
MyPlugin\n preFilter:\n enabled:\n - name: MyPlugin\n filter:\n enabled:\n - name: MyPlugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2630"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n preScore:\n enabled:\n - name: MyPlugin\n score:\n enabled:\n - name: MyPlugin\n preFilter:\n enabled:\n - name: MyPlugin\n filter:\n enabled:\n - name: MyPlugin", "completion": "This yaml code Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | One benefit of using multiPoint here is that if MyPlugin implements another extension point in the future, the multiPoint config will automatically enable it for the new extension.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2631"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n multiPoint:\n enabled:", "completion": " - name: 'MyPlugin'\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. 
However, individual extension points are still available to allow flexible reconfigura", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2632"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: non-multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'MyPlugin'\n preScore:\n disabled:\n - name: '*'\n score:\n disabled:\n - name: '*'", "completion": "This yaml code Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. However, individual extension points are still available to allow flexible reconfigura", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Starting from kubescheduler.config.k8s.io/v1beta3, all default plugins are enabled internally through MultiPoint. However, individual extension points are still available to allow flexible reconfigura", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2633"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler", "completion": " plugins:\n score:\n enabled:\n - name: 'DefaultScore2'\n weight: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. 
This is because plugins set", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2634"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n score:\n enabled:\n - name: 'DefaultScore2'\n weight: 5", "completion": "This yaml code Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. This is because plugins set", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | In this example, it's unnecessary to specify the plugins in MultiPoint explicitly because they are default plugins. And the only plugin specified in Score is DefaultScore2. This is because plugins set", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2635"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'CustomQueueSort'\n - name: 'CustomPlugin1'", "completion": " weight: 3\n - name: 'CustomPlugin2'\n disabled:\n - name: 'DefaultQueueSort'\n filter:\n disabled:\n - name: 'DefaultPlugin1'\n score:\n enabled:\n - name: 'DefaultPlugin2'", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. 
The re-declaration is ignored (and logged), as specific extension points take precedence.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2636"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n multiPoint:\n enabled:\n - name: 'CustomQueueSort'\n - name: 'CustomPlugin1'\n weight: 3\n - name: 'CustomPlugin2'\n disabled:\n - name: 'DefaultQueueSort'\n filter:\n disabled:\n - name: 'DefaultPlugin1'\n score:\n enabled:\n - name: 'DefaultPlugin2'", "completion": "This yaml code Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. The re-declaration is ignored (and logged), as specific extension points take precedence.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | Note that there is no error for re-declaring a MultiPoint plugin in a specific extension point. 
The re-declaration is ignored (and logged), as specific extension points take precedence.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2637"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n\n # Disable the default QueueSort plugin\n queueSort:\n enabled:\n - name: 'CustomQueueSort'\n disabled:\n - name: 'DefaultQueueSort'\n\n # Enable custom Filter plugins", "completion": " filter:\n enabled:\n - name: 'CustomPlugin1'\n - name: 'CustomPlugin2'\n - name: 'DefaultPlugin2'\n disabled:\n - name: 'DefaultPlugin1'\n\n # Enable and reorder custom score plugins\n score:\n enabled:\n - name: 'DefaultPlugin2'\n weight: 1\n - name: 'DefaultPlugin1'\n weight: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | While this is a complicated example, it demonstrates the flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2638"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: multipoint-scheduler\n plugins:\n\n # Disable the default QueueSort plugin\n queueSort:\n enabled:\n - name: 'CustomQueueSort'\n disabled:\n - name: 'DefaultQueueSort'\n\n # Enable custom Filter plugins\n filter:\n enabled:\n - name: 'CustomPlugin1'\n - name: 'CustomPlugin2'\n - name: 'DefaultPlugin2'\n disabled:\n - name: 'DefaultPlugin1'\n\n # Enable and reorder custom score plugins\n score:\n enabled:\n - name: 'DefaultPlugin2'\n weight: 1\n - name: 'DefaultPlugin1'\n weight: 3", "completion": "This yaml code Scheduler Configuration | While this is a complicated example, it demonstrates the 
flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | While this is a complicated example, it demonstrates the flexibility of MultiPoint config as well as its seamless integration with the existing methods for configuring extension points.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2639"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:", "completion": " scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_completion_2640"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Configuration | The scheduler plugin NodeLabel is deprecated; instead, use the NodeAffinity plugin (enabled by default) to achieve similar behavior.", "source": "https://kubernetes.io/docs/reference/scheduling/config/", "id": "k8s_00097_code_explain_2641"} {"prompt": "Complete this yaml code:\n\napiVersion: 
apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2642"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. 
Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2643"} {"prompt": "Explain this bash code:\n\nkubectl apply -f ./run-my-nginx.yaml\nkubectl get pods -l run=my-nginx -o wide", "completion": "This bash code Connecting Applications with Services | Check your pods' IPs:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Check your pods' IPs:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2644"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmy-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m\nmy-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd", "completion": "This text code Connecting Applications with Services | Check your pods' IPs:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Check your pods' IPs:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2645"} {"prompt": "Complete this bash code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP", "completion": " [map[ip:10.244.3.4]]\n [map[ip:10.244.2.5]]", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. 
Note that the containers are not using port 80 on the node, nor are there any spec", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2646"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP\n [map[ip:10.244.3.4]]\n [map[ip:10.244.2.5]]", "completion": "This bash code Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any spec", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any spec", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2647"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx", "completion": "spec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx", "language": "yaml", "type": "service-definition", "context": "Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2648"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx", "completion": "This yaml code Connecting Applications 
with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "language": "yaml", "type": "service-definition_explanation", "context": "Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2649"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx ClusterIP 10.0.162.149 80/TCP 21s", "completion": "This text code Connecting Applications with Services | As mentioned previously, a Service is backed by a group of Pods. These Pods are exposed through EndpointSlices. The Service's selector will be evaluated continuously and the results will be POSTed to", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | As mentioned previously, a Service is backed by a group of Pods. These Pods are exposed through EndpointSlices. 
The Service's selector will be evaluated continuously and the results will be POSTed to", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2650"} {"prompt": "Complete this text code:\n\nName: my-nginx\nNamespace: default\nLabels: run=my-nginx\nAnnotations: \nSelector: run=my-nginx\nType: ClusterIP\nIP Family Policy: SingleStack", "completion": "IP Families: IPv4\nIP: 10.0.162.149\nIPs: 10.0.162.149\nPort: 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.244.2.5:80,10.244.3.4:80\nSession Affinity: None\nEvents: ", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2651"} {"prompt": "Explain this text code:\n\nName: my-nginx\nNamespace: default\nLabels: run=my-nginx\nAnnotations: \nSelector: run=my-nginx\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.162.149\nIPs: 10.0.162.149\nPort: 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.244.2.5:80,10.244.3.4:80\nSession Affinity: None\nEvents: ", "completion": "This text code Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. 
If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2652"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l kubernetes.io/service-name=my-nginx", "completion": "This bash code Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2653"} {"prompt": "Explain this text code:\n\nNAME ADDRESSTYPE PORTS ENDPOINTS AGE\nmy-nginx-7vzhx IPv4 80 10.244.2.5,10.244.3.4 21s", "completion": "This text code Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2654"} {"prompt": "Explain this bash code:\n\nkubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE", "completion": "This bash code Connecting Applications with Services | Note there's no mention of your Service. 
This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2655"} {"prompt": "Explain this text code:\n\nKUBERNETES_SERVICE_HOST=10.0.0.1\nKUBERNETES_SERVICE_PORT=443\nKUBERNETES_SERVICE_PORT_HTTPS=443", "completion": "This text code Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. 
Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2656"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;\n\nkubectl get pods -l run=my-nginx -o wide", "completion": "This bash code Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2657"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmy-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd\nmy-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m", "completion": "This text code Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2658"} {"prompt": "Explain this bash code:\n\nkubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE", "completion": "This bash code Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. 
You can check if it's running on your cluster:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2659"} {"prompt": "Complete this text code:\n\nKUBERNETES_SERVICE_PORT=443\nMY_NGINX_SERVICE_HOST=10.0.162.149", "completion": "KUBERNETES_SERVICE_HOST=10.0.0.1\nMY_NGINX_SERVICE_PORT=80\nKUBERNETES_SERVICE_PORT_HTTPS=443", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2660"} {"prompt": "Explain this text code:\n\nKUBERNETES_SERVICE_PORT=443\nMY_NGINX_SERVICE_HOST=10.0.162.149\nKUBERNETES_SERVICE_HOST=10.0.0.1\nMY_NGINX_SERVICE_PORT=80\nKUBERNETES_SERVICE_PORT_HTTPS=443", "completion": "This text code Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. 
You can check if it's running on your cluster:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2661"} {"prompt": "Explain this bash code:\n\nkubectl get services kube-dns --namespace=kube-system", "completion": "This bash code Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name k", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name k", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2662"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m", "completion": "This text code Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name k", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. 
Here we use the CoreDNS cluster addon (application name k", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2663"} {"prompt": "Explain this bash code:\n\nkubectl run curl --image=radial/busyboxplus:curl -i --tty --rm", "completion": "This bash code Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2664"} {"prompt": "Explain this text code:\n\nWaiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false\nHit enter for command prompt", "completion": "This text code Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2665"} {"prompt": "Complete this json code:\n\n[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10", "completion": "\nName: my-nginx\nAddress 1: 10.0.162.149", "language": "json", "type": "api-response", "context": "Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. 
For this, you will need", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2666"} {"prompt": "Explain this json code:\n\n[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10\n\nName: my-nginx\nAddress 1: 10.0.162.149", "completion": "This json code Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need", "language": "json", "type": "api-response_explanation", "context": "Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2667"} {"prompt": "Explain this text code:\n\nmake keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt\nkubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt", "completion": "This text code Connecting Applications with Services | And also the configmap:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | And also the configmap:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2668"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nnginxsecret kubernetes.io/tls 2 1m", "completion": "This text code Connecting Applications with Services | And also the configmap:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | And also the configmap:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2669"} {"prompt": "Explain this bash code:\n\nkubectl create configmap nginxconfigmap --from-file=default.conf", "completion": "This bash code Connecting Applications with Services | You can find an example for default.conf in the Kubernetes examples project repo.", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You can find an example for default.conf in the Kubernetes examples project repo.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2670"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\nnginxconfigmap 1 114s", "completion": "This text code Connecting Applications with Services | You can view the details of the nginxconfigmap ConfigMap using the following command:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You can view the details of the nginxconfigmap ConfigMap using the following command:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2671"} {"prompt": "Complete this text code:\n\nName: nginxconfigmap\nNamespace: default\nLabels: \nAnnotations: \n\nData\n====\ndefault.conf:\n----\nserver {\n listen 80 default_server;\n listen [::]:80 default_server ipv6only=on;\n\n listen 443 ssl;\n", "completion": " root /usr/share/nginx/html;\n index index.html;\n\n server_name localhost;\n ssl_certificate /etc/nginx/ssl/tls.crt;\n ssl_certificate_key /etc/nginx/ssl/tls.key;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}\n\nBinaryData\n====\n\nEvents: ", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2672"} {"prompt": "Explain this text code:\n\nName: nginxconfigmap\nNamespace: default\nLabels: \nAnnotations: \n\nData\n====\ndefault.conf:\n----\nserver {\n listen 80 default_server;\n listen [::]:80 default_server ipv6only=on;\n\n listen 443 ssl;\n\n root /usr/share/nginx/html;\n index index.html;\n\n server_name localhost;\n ssl_certificate /etc/nginx/ssl/tls.crt;\n ssl_certificate_key /etc/nginx/ssl/tls.key;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}\n\nBinaryData\n====\n\nEvents: ", "completion": "This text code Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2673"} {"prompt": "Complete this text code:\n\n# Create a public private key pair\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj \"/CN=my-nginx/O=my-nginx\"", "completion": "# Convert the keys to base64 encoding\ncat /d/tmp/nginx.crt | base64\ncat /d/tmp/nginx.key | base64", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. 
The base64 encoded value should all be on a single line.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2674"} {"prompt": "Explain this text code:\n\n# Create a public private key pair\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj \"/CN=my-nginx/O=my-nginx\"\n# Convert the keys to base64 encoding\ncat /d/tmp/nginx.crt | base64\ncat /d/tmp/nginx.key | base64", "completion": "This text code Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2675"} {"prompt": "Complete this yaml code:\n\napiVersion: \"v1\"\nkind: \"Secret\"\nmetadata:\n name: \"nginxsecret\"\n namespace: \"default\"", "completion": "type: kubernetes.io/tls\ndata:\n # NOTE: Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Connecting Applications with Services | Now create the secrets using the file:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2676"} {"prompt": "Explain this yaml code:\n\napiVersion: \"v1\"\nkind: \"Secret\"\nmetadata:\n name: \"nginxsecret\"\n namespace: \"default\"\ntype: kubernetes.io/tls\ndata:\n # NOTE: Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" 
\n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code Connecting Applications with Services | Now create the secrets using the file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Connecting Applications with Services | Now create the secrets using the file:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2677"} {"prompt": "Explain this bash code:\n\nkubectl apply -f nginxsecrets.yaml\nkubectl get secrets", "completion": "This bash code Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2678"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nnginxsecret kubernetes.io/tls 2 1m", "completion": "This text code Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2679"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:", "completion": " matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2680"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "completion": "This yaml code Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2681"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml", "completion": "This bash code Connecting Applications with Services | At this point you can reach the nginx server from any node.", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | At this point you can reach the nginx server from any node.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2682"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP\n [map[ip:10.244.3.5]]", "completion": "This bash code Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2683"} {"prompt": "Explain this text code:\n\nnode $ curl -k https://10.244.3.5\n...\n\n\nWelcome to nginx!\n\n", "completion": "This text code Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2684"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod\n spec:", "completion": " volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. 
The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2685"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "completion": "This yaml code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2686"} {"prompt": "Explain this bash code:\n\nkubectl apply -f ./curlpod.yaml\nkubectl get pods -l app=curlpod", "completion": "This bash code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. 
Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2687"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncurl-deployment-1515033274-1410r 1/1 Running 0 1m", "completion": "This text code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2688"} {"prompt": "Complete this bash code:\n\nkubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt\n...", "completion": "Welcome to nginx!\n...", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. 
The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2689"} {"prompt": "Explain this bash code:\n\nkubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt\n...\nWelcome to nginx!\n...", "completion": "This bash code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2690"} {"prompt": "Complete this bash code:\n\nkubectl get svc my-nginx -o yaml | grep nodePort -C 5\n uid: 07191fb3-f61a-11e5-8ae5-42010af00002\nspec:\n clusterIP: 10.0.162.149\n ports:\n - name: http\n nodePort: 31704\n port: 8080", "completion": " protocol: TCP\n targetPort: 80\n - name: https\n nodePort: 32453\n port: 443\n protocol: TCP\n targetPort: 443\n selector:\n run: my-nginx", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. 
Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2691"} {"prompt": "Explain this bash code:\n\nkubectl get svc my-nginx -o yaml | grep nodePort -C 5\n uid: 07191fb3-f61a-11e5-8ae5-42010af00002\nspec:\n clusterIP: 10.0.162.149\n ports:\n - name: http\n nodePort: 31704\n port: 8080\n protocol: TCP\n targetPort: 80\n - name: https\n nodePort: 32453\n port: 443\n protocol: TCP\n targetPort: 443\n selector:\n run: my-nginx", "completion": "This bash code Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2692"} {"prompt": "Complete this bash code:\n\nkubectl get nodes -o yaml | grep ExternalIP -C 1\n - address: 104.197.41.11\n type: ExternalIP\n allocatable:\n--\n - address: 23.251.152.56", "completion": " type: ExternalIP\n allocatable:\n...\n\n$ curl https://: -k\n...\n\n\nWelcome to nginx!\n\n", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2693"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o yaml | grep ExternalIP -C 1\n - address: 104.197.41.11\n type: ExternalIP\n allocatable:\n--\n - address: 23.251.152.56\n type: ExternalIP\n allocatable:\n...\n\n$ curl https://: -k\n...\n\n\nWelcome to nginx!\n\n", "completion": "This bash code Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2694"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx LoadBalancer 10.0.162.149 xx.xxx.xxx.xxx 8080:30163/TCP 21s", "completion": "This text code Connecting Applications with Services | The IP address in the EXTERNAL-IP column is the one that is available on the public internet. The CLUSTER-IP is only available inside your cluster/private cloud network.", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | The IP address in the EXTERNAL-IP column is the one that is available on the public internet. The CLUSTER-IP is only available inside your cluster/private cloud network.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2695"} {"prompt": "Explain this text code:\n\ncurl https:// -k\n...\nWelcome to nginx!", "completion": "This text code Connecting Applications with Services | The IP address in the EXTERNAL-IP column is the one that is available on the public internet. 
The CLUSTER-IP is only available inside your cluster/private cloud network.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2696"} {"prompt": "Complete this bash code:\n\nkubectl describe service my-nginx\n...", "completion": "LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com\n...", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services |", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2697"} {"prompt": "Explain this bash code:\n\nkubectl describe service my-nginx\n...\nLoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com\n...", "completion": "This bash code Connecting Applications with Services |", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services |", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2698"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. 
Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2699"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2700"} {"prompt": "Explain this shell code:\n\nkubectl apply -f ./run-my-nginx.yaml\nkubectl get pods -l run=my-nginx -o wide", "completion": "This shell code Connecting Applications with Services | Check your pods' IPs:", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Check your pods' IPs:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2701"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmy-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m\nmy-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd", "completion": "This text code Connecting Applications with Services | Check your pods' IPs:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Check your pods' IPs:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2702"} {"prompt": "Complete this shell code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP", "completion": " [map[ip:10.244.3.4]]\n [map[ip:10.244.2.5]]", "language": "shell", "type": "code-example", "context": "Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any spec", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2703"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP\n [map[ip:10.244.3.4]]\n [map[ip:10.244.2.5]]", "completion": "This shell code Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any spec", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any spec", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2704"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx", "completion": "spec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx", "language": "yaml", "type": "service-definition", "context": "Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2705"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx", "completion": "This yaml code Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "language": "yaml", "type": "service-definition_explanation", "context": "Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2706"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx ClusterIP 10.0.162.149 80/TCP 21s", "completion": "This text code Connecting Applications with Services | As mentioned previously, 
a Service is backed by a group of Pods. These Pods are exposed through EndpointSlices. The Service's selector will be evaluated continuously and the results will be POSTed to", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | As mentioned previously, a Service is backed by a group of Pods. These Pods are exposed through EndpointSlices. The Service's selector will be evaluated continuously and the results will be POSTed to", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2707"} {"prompt": "Complete this text code:\n\nName: my-nginx\nNamespace: default\nLabels: run=my-nginx\nAnnotations: \nSelector: run=my-nginx\nType: ClusterIP\nIP Family Policy: SingleStack", "completion": "IP Families: IPv4\nIP: 10.0.162.149\nIPs: 10.0.162.149\nPort: 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.244.2.5:80,10.244.3.4:80\nSession Affinity: None\nEvents: ", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2708"} {"prompt": "Explain this text code:\n\nName: my-nginx\nNamespace: default\nLabels: run=my-nginx\nAnnotations: \nSelector: run=my-nginx\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.162.149\nIPs: 10.0.162.149\nPort: 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.244.2.5:80,10.244.3.4:80\nSession Affinity: None\nEvents: ", "completion": "This text code Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. 
If you're curious about h", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2709"} {"prompt": "Explain this shell code:\n\nkubectl get endpointslices -l kubernetes.io/service-name=my-nginx", "completion": "This shell code Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2710"} {"prompt": "Explain this text code:\n\nNAME ADDRESSTYPE PORTS ENDPOINTS AGE\nmy-nginx-7vzhx IPv4 80 10.244.2.5,10.244.3.4 21s", "completion": "This text code Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. 
If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2711"} {"prompt": "Explain this shell code:\n\nkubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE", "completion": "This shell code Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2712"} {"prompt": "Explain this text code:\n\nKUBERNETES_SERVICE_HOST=10.0.0.1\nKUBERNETES_SERVICE_PORT=443\nKUBERNETES_SERVICE_PORT_HTTPS=443", "completion": "This text code Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. 
Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2713"} {"prompt": "Explain this shell code:\n\nkubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;\n\nkubectl get pods -l run=my-nginx -o wide", "completion": "This shell code Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2714"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmy-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd\nmy-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m", "completion": "This text code Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2715"} {"prompt": "Explain this shell code:\n\nkubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE", "completion": "This shell code Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. 
You can check if it's running on your cluster:", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2716"} {"prompt": "Complete this text code:\n\nKUBERNETES_SERVICE_PORT=443\nMY_NGINX_SERVICE_HOST=10.0.162.149", "completion": "KUBERNETES_SERVICE_HOST=10.0.0.1\nMY_NGINX_SERVICE_PORT=80\nKUBERNETES_SERVICE_PORT_HTTPS=443", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2717"} {"prompt": "Explain this text code:\n\nKUBERNETES_SERVICE_PORT=443\nMY_NGINX_SERVICE_HOST=10.0.162.149\nKUBERNETES_SERVICE_HOST=10.0.0.1\nMY_NGINX_SERVICE_PORT=80\nKUBERNETES_SERVICE_PORT_HTTPS=443", "completion": "This text code Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. 
You can check if it's running on your cluster:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2718"} {"prompt": "Explain this shell code:\n\nkubectl get services kube-dns --namespace=kube-system", "completion": "This shell code Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name k", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name k", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2719"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m", "completion": "This text code Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name k", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. 
Here we use the CoreDNS cluster addon (application name k", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2720"} {"prompt": "Explain this shell code:\n\nkubectl run curl --image=radial/busyboxplus:curl -i --tty --rm", "completion": "This shell code Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2721"} {"prompt": "Explain this text code:\n\nWaiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false\nHit enter for command prompt", "completion": "This text code Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2722"} {"prompt": "Complete this shell code:\n\n[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10", "completion": "\nName: my-nginx\nAddress 1: 10.0.162.149", "language": "shell", "type": "code-example", "context": "Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. 
For this, you will need", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2723"} {"prompt": "Explain this shell code:\n\n[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10\n\nName: my-nginx\nAddress 1: 10.0.162.149", "completion": "This shell code Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2724"} {"prompt": "Explain this shell code:\n\nmake keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt\nkubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt", "completion": "This shell code Connecting Applications with Services | And also the configmap:", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | And also the configmap:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2725"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nnginxsecret kubernetes.io/tls 2 1m", "completion": "This text code Connecting Applications with Services | And also the configmap:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | And also the configmap:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": 
"k8s_00098_code_explain_2726"} {"prompt": "Explain this shell code:\n\nkubectl create configmap nginxconfigmap --from-file=default.conf", "completion": "This shell code Connecting Applications with Services | You can find an example for default.conf in the Kubernetes examples project repo.", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | You can find an example for default.conf in the Kubernetes examples project repo.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2727"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\nnginxconfigmap 1 114s", "completion": "This text code Connecting Applications with Services | You can view the details of the nginxconfigmap ConfigMap using the following command:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | You can view the details of the nginxconfigmap ConfigMap using the following command:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2728"} {"prompt": "Complete this console code:\n\nName: nginxconfigmap\nNamespace: default\nLabels: \nAnnotations: \n\nData\n====\ndefault.conf:\n----\nserver {\n listen 80 default_server;\n listen [::]:80 default_server ipv6only=on;\n\n listen 443 ssl;\n", "completion": " root /usr/share/nginx/html;\n index index.html;\n\n server_name localhost;\n ssl_certificate /etc/nginx/ssl/tls.crt;\n ssl_certificate_key /etc/nginx/ssl/tls.key;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}\n\nBinaryData\n====\n\nEvents: ", "language": "console", "type": "code-example", "context": "Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": 
"k8s_00098_code_completion_2729"} {"prompt": "Explain this console code:\n\nName: nginxconfigmap\nNamespace: default\nLabels: \nAnnotations: \n\nData\n====\ndefault.conf:\n----\nserver {\n listen 80 default_server;\n listen [::]:80 default_server ipv6only=on;\n\n listen 443 ssl;\n\n root /usr/share/nginx/html;\n index index.html;\n\n server_name localhost;\n ssl_certificate /etc/nginx/ssl/tls.crt;\n ssl_certificate_key /etc/nginx/ssl/tls.key;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}\n\nBinaryData\n====\n\nEvents: ", "completion": "This console code Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "language": "console", "type": "code-example_explanation", "context": "Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2730"} {"prompt": "Complete this shell code:\n\n# Create a public private key pair\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj \"/CN=my-nginx/O=my-nginx\"", "completion": "# Convert the keys to base64 encoding\ncat /d/tmp/nginx.crt | base64\ncat /d/tmp/nginx.key | base64", "language": "shell", "type": "code-example", "context": "Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. 
The base64 encoded value should all be on a single line.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2731"} {"prompt": "Explain this shell code:\n\n# Create a public private key pair\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj \"/CN=my-nginx/O=my-nginx\"\n# Convert the keys to base64 encoding\ncat /d/tmp/nginx.crt | base64\ncat /d/tmp/nginx.key | base64", "completion": "This shell code Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2732"} {"prompt": "Complete this yaml code:\n\napiVersion: \"v1\"\nkind: \"Secret\"\nmetadata:\n name: \"nginxsecret\"\n namespace: \"default\"", "completion": "type: kubernetes.io/tls\ndata:\n # NOTE: Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Connecting Applications with Services | Now create the secrets using the file:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2733"} {"prompt": "Explain this yaml code:\n\napiVersion: \"v1\"\nkind: \"Secret\"\nmetadata:\n name: \"nginxsecret\"\n namespace: \"default\"\ntype: kubernetes.io/tls\ndata:\n # NOTE: Replace the following values with your own base64-encoded certificate and key.\n tls.crt: 
\"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code Connecting Applications with Services | Now create the secrets using the file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Connecting Applications with Services | Now create the secrets using the file:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2734"} {"prompt": "Explain this shell code:\n\nkubectl apply -f nginxsecrets.yaml\nkubectl get secrets", "completion": "This shell code Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2735"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nnginxsecret kubernetes.io/tls 2 1m", "completion": "This text code Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2736"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: 
NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:", "completion": " matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2737"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "completion": "This yaml code Connecting Applications with Services | Noteworthy points 
about the nginx-secure-app manifest:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2738"} {"prompt": "Explain this shell code:\n\nkubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml", "completion": "This shell code Connecting Applications with Services | At this point you can reach the nginx server from any node.", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | At this point you can reach the nginx server from any node.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2739"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP\n [map[ip:10.244.3.5]]", "completion": "This shell code Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2740"} {"prompt": "Explain this shell code:\n\nnode $ curl -k https://10.244.3.5\n...\n
Welcome to nginx!
", "completion": "This shell code Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2741"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod\n spec:", "completion": " volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. 
The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2742"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "completion": "This yaml code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2743"} {"prompt": "Explain this shell code:\n\nkubectl apply -f ./curlpod.yaml\nkubectl get pods -l app=curlpod", "completion": "This shell code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. 
Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2744"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncurl-deployment-1515033274-1410r 1/1 Running 0 1m", "completion": "This text code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2745"} {"prompt": "Complete this shell code:\n\nkubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt\n...", "completion": "Welcome to nginx!\n...", "language": "shell", "type": "code-example", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. 
The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2746"} {"prompt": "Explain this shell code:\n\nkubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt\n...\nWelcome to nginx!\n...", "completion": "This shell code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2747"} {"prompt": "Complete this shell code:\n\nkubectl get svc my-nginx -o yaml | grep nodePort -C 5\n uid: 07191fb3-f61a-11e5-8ae5-42010af00002\nspec:\n clusterIP: 10.0.162.149\n ports:\n - name: http\n nodePort: 31704\n port: 8080", "completion": " protocol: TCP\n targetPort: 80\n - name: https\n nodePort: 32453\n port: 443\n protocol: TCP\n targetPort: 443\n selector:\n run: my-nginx", "language": "shell", "type": "code-example", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. 
Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2748"} {"prompt": "Explain this shell code:\n\nkubectl get svc my-nginx -o yaml | grep nodePort -C 5\n uid: 07191fb3-f61a-11e5-8ae5-42010af00002\nspec:\n clusterIP: 10.0.162.149\n ports:\n - name: http\n nodePort: 31704\n port: 8080\n protocol: TCP\n targetPort: 80\n - name: https\n nodePort: 32453\n port: 443\n protocol: TCP\n targetPort: 443\n selector:\n run: my-nginx", "completion": "This shell code Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2749"} {"prompt": "Complete this shell code:\n\nkubectl get nodes -o yaml | grep ExternalIP -C 1\n - address: 104.197.41.11\n type: ExternalIP\n allocatable:\n--\n - address: 23.251.152.56", "completion": " type: ExternalIP\n allocatable:\n...\n\n$ curl https://: -k\n...\n
Welcome to nginx!
", "language": "shell", "type": "code-example", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2750"} {"prompt": "Explain this shell code:\n\nkubectl get nodes -o yaml | grep ExternalIP -C 1\n - address: 104.197.41.11\n type: ExternalIP\n allocatable:\n--\n - address: 23.251.152.56\n type: ExternalIP\n allocatable:\n...\n\n$ curl https://: -k\n...\n
Welcome to nginx!
", "completion": "This shell code Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2751"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx LoadBalancer 10.0.162.149 xx.xxx.xxx.xxx 8080:30163/TCP 21s", "completion": "This text code Connecting Applications with Services | The IP address in the EXTERNAL-IP column is the one that is available on the public internet. The CLUSTER-IP is only available inside your cluster/private cloud network.", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | The IP address in the EXTERNAL-IP column is the one that is available on the public internet. The CLUSTER-IP is only available inside your cluster/private cloud network.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2752"} {"prompt": "Explain this text code:\n\ncurl https:// -k\n...\nWelcome to nginx!", "completion": "This text code Connecting Applications with Services | The IP address in the EXTERNAL-IP column is the one that is available on the public internet. The CLUSTER-IP is only available inside your cluster/private cloud network.", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | The IP address in the EXTERNAL-IP column is the one that is available on the public internet. 
The CLUSTER-IP is only available inside your cluster/private cloud network.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2753"} {"prompt": "Complete this shell code:\n\nkubectl describe service my-nginx\n...", "completion": "LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com\n...", "language": "shell", "type": "code-example", "context": "Connecting Applications with Services |", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2754"} {"prompt": "Explain this shell code:\n\nkubectl describe service my-nginx\n...\nLoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com\n...", "completion": "This shell code Connecting Applications with Services |", "language": "shell", "type": "code-example_explanation", "context": "Connecting Applications with Services |", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2755"} {"prompt": "Complete this yaml code:\n\nservice/networking/run-my-nginx.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. 
Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2756"} {"prompt": "Explain this yaml code:\n\nservice/networking/run-my-nginx.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2757"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. 
Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2758"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2759"} {"prompt": "Explain this bash code:\n\nkubectl apply -f ./run-my-nginx.yaml\nkubectl get pods -l run=my-nginx -o wide", "completion": "This bash code Connecting Applications with Services | Check your pods' IPs:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Check your pods' IPs:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2760"} {"prompt": "Complete this bash code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP", "completion": " [map[ip:10.244.3.4]]\n [map[ip:10.244.2.5]]", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. 
Note that the containers are not using port 80 on the node, nor are there any spec", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2761"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP\n [map[ip:10.244.3.4]]\n [map[ip:10.244.2.5]]", "completion": "This bash code Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any spec", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any spec", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2762"} {"prompt": "Complete this yaml code:\n\nservice/networking/nginx-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:", "completion": " run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx", "language": "yaml", "type": "service-definition", "context": "Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2763"} {"prompt": "Explain this yaml code:\n\nservice/networking/nginx-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n 
run: my-nginx", "completion": "This yaml code Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "language": "yaml", "type": "service-definition_explanation", "context": "Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2764"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx", "completion": "spec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx", "language": "yaml", "type": "service-definition", "context": "Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2765"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx", "completion": "This yaml code Connecting Applications with Services | This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "language": "yaml", "type": "service-definition_explanation", "context": "Connecting Applications with Services | This specification will create a 
Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts t", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2766"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l kubernetes.io/service-name=my-nginx", "completion": "This bash code Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You should now be able to curl the nginx Service on : from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire. If you're curious about h", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2767"} {"prompt": "Explain this bash code:\n\nkubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE", "completion": "This bash code Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Note there's no mention of your Service. This is because you created the replicas before the Service. 
Another disadvantage of doing this is that the scheduler might put both Pods on the same machine,", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2768"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;\n\nkubectl get pods -l run=my-nginx -o wide", "completion": "This bash code Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You may notice that the pods have different names, since they are killed and recreated.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2769"} {"prompt": "Explain this bash code:\n\nkubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE", "completion": "This bash code Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2770"} {"prompt": "Explain this bash code:\n\nkubectl get services kube-dns --namespace=kube-system", "completion": "This bash code Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. 
Here we use the CoreDNS cluster addon (application name k", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name k", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2771"} {"prompt": "Explain this bash code:\n\nkubectl run curl --image=radial/busyboxplus:curl -i --tty --rm", "completion": "This bash code Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Then, hit enter and run nslookup my-nginx:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2772"} {"prompt": "Complete this json code:\n\n[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10", "completion": "\nName: my-nginx\nAddress 1: 10.0.162.149", "language": "json", "type": "api-response", "context": "Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2773"} {"prompt": "Explain this json code:\n\n[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10\n\nName: my-nginx\nAddress 1: 10.0.162.149", "completion": "This json code Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. 
Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need", "language": "json", "type": "api-response_explanation", "context": "Connecting Applications with Services | Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2774"} {"prompt": "Explain this text code:\n\nmake keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt\nkubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt", "completion": "This text code Connecting Applications with Services | And also the configmap:", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | And also the configmap:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2775"} {"prompt": "Explain this bash code:\n\nkubectl create configmap nginxconfigmap --from-file=default.conf", "completion": "This bash code Connecting Applications with Services | You can find an example for default.conf in the Kubernetes examples project repo.", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | You can find an example for default.conf in the Kubernetes examples project repo.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2776"} {"prompt": "Complete this text code:\n\nName: nginxconfigmap\nNamespace: default\nLabels: \nAnnotations: \n\nData\n====\ndefault.conf:\n----\nserver {\n listen 80 default_server;\n listen [::]:80 default_server ipv6only=on;\n\n listen 443 ssl;\n", "completion": " root /usr/share/nginx/html;\n index index.html;\n\n 
server_name localhost;\n ssl_certificate /etc/nginx/ssl/tls.crt;\n ssl_certificate_key /etc/nginx/ssl/tls.key;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}\n\nBinaryData\n====\n\nEvents: ", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2777"} {"prompt": "Explain this text code:\n\nName: nginxconfigmap\nNamespace: default\nLabels: \nAnnotations: \n\nData\n====\ndefault.conf:\n----\nserver {\n listen 80 default_server;\n listen [::]:80 default_server ipv6only=on;\n\n listen 443 ssl;\n\n root /usr/share/nginx/html;\n index index.html;\n\n server_name localhost;\n ssl_certificate /etc/nginx/ssl/tls.crt;\n ssl_certificate_key /etc/nginx/ssl/tls.key;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}\n\nBinaryData\n====\n\nEvents: ", "completion": "This text code Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Following are the manual steps to follow in case you run into problems running make (on windows for example):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2778"} {"prompt": "Complete this text code:\n\n# Create a public private key pair\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj \"/CN=my-nginx/O=my-nginx\"", "completion": "# Convert the keys to base64 encoding\ncat /d/tmp/nginx.crt | base64\ncat /d/tmp/nginx.key | base64", "language": "text", "type": "code-example", "context": "Connecting Applications with Services | Use the output 
from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2779"} {"prompt": "Explain this text code:\n\n# Create a public private key pair\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj \"/CN=my-nginx/O=my-nginx\"\n# Convert the keys to base64 encoding\ncat /d/tmp/nginx.crt | base64\ncat /d/tmp/nginx.key | base64", "completion": "This text code Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2780"} {"prompt": "Complete this yaml code:\n\napiVersion: \"v1\"\nkind: \"Secret\"\nmetadata:\n name: \"nginxsecret\"\n namespace: \"default\"", "completion": "type: kubernetes.io/tls\ndata:\n # NOTE: Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Connecting Applications with Services | Now create the secrets using the file:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2781"} {"prompt": "Explain this yaml code:\n\napiVersion: \"v1\"\nkind: \"Secret\"\nmetadata:\n name: \"nginxsecret\"\n namespace: \"default\"\ntype: kubernetes.io/tls\ndata:\n # NOTE: Replace the following values with your own base64-encoded 
certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code Connecting Applications with Services | Now create the secrets using the file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Connecting Applications with Services | Now create the secrets using the file:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2782"} {"prompt": "Explain this bash code:\n\nkubectl apply -f nginxsecrets.yaml\nkubectl get secrets", "completion": "This bash code Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2783"} {"prompt": "Complete this yaml code:\n\nservice/networking/nginx-secure-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:", "completion": " matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n 
volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2784"}
{"prompt": "Explain this yaml code:\n\nservice/networking/nginx-secure-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "completion": "This yaml code Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2785"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:", "completion": " matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2786"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 8080\n targetPort: 80\n protocol: TCP\n name: http\n - port: 443\n protocol: TCP\n name: https\n selector:\n run: my-nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n - name: configmap-volume\n configMap:\n name: nginxconfigmap\n containers:\n - name: nginxhttps\n image: bprashanth/nginxhttps:1.0\n ports:\n - containerPort: 443\n - containerPort: 80\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume\n - mountPath: /etc/nginx/conf.d\n name: configmap-volume", "completion": "This yaml code Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | Noteworthy points about the nginx-secure-app manifest:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2787"}
{"prompt": "Explain this bash code:\n\nkubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml", "completion": "This bash code Connecting Applications with Services | At this point you can reach the nginx server from any node.", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | At this point you can reach the nginx server from any node.", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2788"}
{"prompt": "Explain this bash code:\n\nkubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs\n POD_IP\n [map[ip:10.244.3.5]]", "completion": "This bash code Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2789"}
{"prompt": "Explain this text code:\n\nnode $ curl -k https://10.244.3.5\n...\n\nWelcome to nginx!\n\n", "completion": "This text code Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "language": "text", "type": "code-example_explanation", "context": "Connecting Applications with Services | Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore th", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2790"}
{"prompt": "Complete this yaml code:\n\nservice/networking/curlpod.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod", "completion": " spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2791"}
{"prompt": "Explain this yaml code:\n\nservice/networking/curlpod.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "completion": "This yaml code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2792"}
{"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod\n spec:", "completion": " volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "language": "yaml", "type": "deployment-definition", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2793"}
{"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: curl-deployment\nspec:\n selector:\n matchLabels:\n app: curlpod\n replicas: 1\n template:\n metadata:\n labels:\n app: curlpod\n spec:\n volumes:\n - name: secret-volume\n secret:\n secretName: nginxsecret\n containers:\n - name: curlpod\n command:\n - sh\n - -c\n - while true; do sleep 1; done\n image: radial/busyboxplus:curl\n volumeMounts:\n - mountPath: /etc/nginx/ssl\n name: secret-volume", "completion": "This yaml code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2794"}
{"prompt": "Explain this bash code:\n\nkubectl apply -f ./curlpod.yaml\nkubectl get pods -l app=curlpod", "completion": "This bash code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2795"}
{"prompt": "Complete this bash code:\n\nkubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt\n...", "completion": "Welcome to nginx!\n...", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2796"}
{"prompt": "Explain this bash code:\n\nkubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt\n...\nWelcome to nginx!\n...", "completion": "This bash code Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the la", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2797"}
{"prompt": "Complete this bash code:\n\nkubectl get svc my-nginx -o yaml | grep nodePort -C 5\n uid: 07191fb3-f61a-11e5-8ae5-42010af00002\nspec:\n clusterIP: 10.0.162.149\n ports:\n - name: http\n nodePort: 31704\n port: 8080", "completion": " protocol: TCP\n targetPort: 80\n - name: https\n nodePort: 32453\n port: 443\n protocol: TCP\n targetPort: 443\n selector:\n run: my-nginx", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2798"}
{"prompt": "Explain this bash code:\n\nkubectl get svc my-nginx -o yaml | grep nodePort -C 5\n uid: 07191fb3-f61a-11e5-8ae5-42010af00002\nspec:\n clusterIP: 10.0.162.149\n ports:\n - name: http\n nodePort: 31704\n port: 8080\n protocol: TCP\n targetPort: 80\n - name: https\n nodePort: 32453\n port: 443\n protocol: TCP\n targetPort: 443\n selector:\n run: my-nginx", "completion": "This bash code Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2799"}
{"prompt": "Complete this bash code:\n\nkubectl get nodes -o yaml | grep ExternalIP -C 1\n - address: 104.197.41.11\n type: ExternalIP\n allocatable:\n--\n - address: 23.251.152.56", "completion": " type: ExternalIP\n allocatable:\n...\n\n$ curl https://: -k\n...\n\nWelcome to nginx!\n\n", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2800"}
{"prompt": "Explain this bash code:\n\nkubectl get nodes -o yaml | grep ExternalIP -C 1\n - address: 104.197.41.11\n type: ExternalIP\n allocatable:\n--\n - address: 23.251.152.56\n type: ExternalIP\n allocatable:\n...\n\n$ curl https://: -k\n...\n\nWelcome to nginx!\n\n", "completion": "This bash code Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services | Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2801"}
{"prompt": "Complete this bash code:\n\nkubectl describe service my-nginx\n...", "completion": "LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com\n...", "language": "bash", "type": "kubectl-command", "context": "Connecting Applications with Services |", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_completion_2802"}
{"prompt": "Explain this bash code:\n\nkubectl describe service my-nginx\n...\nLoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com\n...", "completion": "This bash code Connecting Applications with Services |", "language": "bash", "type": "kubectl-command_explanation", "context": "Connecting Applications with Services |", "source": "https://kubernetes.io/docs/tutorials/services/connect-applications-service/", "id": "k8s_00098_code_explain_2803"}
{"prompt": "Explain this text code:\n\n# Delete the context for the minikube cluster\n kubectl config delete-context minikube", "completion": "This text code kubectl config delete-context | help for delete-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config delete-context | help for delete-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-context/", "id": "k8s_00103_code_explain_2804"}
{"prompt": "Explain this text code:\n\n# Delete the context for the minikube cluster\n kubectl config delete-context minikube", "completion": "This text code kubectl config delete-context | help for delete-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config delete-context | help for delete-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-context/", "id": "k8s_00103_code_explain_2805"}
{"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This bash code Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "bash", "type": "kubectl-command_explanation", "context": "Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2806"}
{"prompt": "Complete this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s", "completion": "apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "language": "text", "type": "code-example", "context": "Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_completion_2807"}
{"prompt": "Explain this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s\napiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "completion": "This text code Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "text", "type": "code-example_explanation", "context": "Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2808"}
{"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This bash code Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "bash", "type": "kubectl-command_explanation", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2809"}
{"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_completion_2810"}
{"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2811"}
{"prompt": "Explain this shell code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This shell code Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "shell", "type": "code-example_explanation", "context": "Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2812"}
{"prompt": "Complete this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s", "completion": "apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "language": "text", "type": "code-example", "context": "Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_completion_2813"}
{"prompt": "Explain this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s\napiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "completion": "This text code Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "text", "type": "code-example_explanation", "context": "Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2814"}
{"prompt": "Explain this shell code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This shell code Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "shell", "type": "code-example_explanation", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2815"}
{"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_completion_2816"}
{"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2817"}
{"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This bash code Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "bash", "type": "kubectl-command_explanation", "context": "Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2818"}
{"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This bash code Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "bash", "type": "kubectl-command_explanation", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2819"}
{"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_completion_2820"}
{"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/leases/", "id": "k8s_00104_code_explain_2821"}
{"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"", "completion": " failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2822"}
{"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "completion": "This yaml code Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2823"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp\n namespace: default\nspec:\n ...", "completion": " initContainers:\n - name: mesh-proxy\n image: mesh/proxy:v1.0.0\n args: [\"proxy\", \"sidecar\"]\n restartPolicy: Always\n - name: myapp-initializer\n image: example/initializer:v1.0.0\n ...", "language": "yaml", "type": "pod-definition", "context": "Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2824"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp\n namespace: default\nspec:\n ...\n initContainers:\n - name: mesh-proxy\n image: mesh/proxy:v1.0.0\n args: [\"proxy\", \"sidecar\"]\n restartPolicy: Always\n - name: myapp-initializer\n image: example/initializer:v1.0.0\n ...", "completion": "This yaml code Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "language": "yaml", "type": "pod-definition_explanation", "context": "Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2825"}
{"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar", "completion": " expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2826"}
{"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "completion": "This yaml code Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. ref: https://github.com/google/cel-spec", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2827"}
{"prompt": "Complete this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},", "completion": " JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "language": "json", "type": "api-response", "context": "Mutating Admission Policy | To define a JSON object for the patch operation value, use CEL Object types. 
For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2828"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},\n JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "completion": "This json code Mutating Admission Policy | To define a JSON object for the patch operation value, use CEL Object types. For example:", "language": "json", "type": "api-response_explanation", "context": "Mutating Admission Policy | To define a JSON object for the patch operation value, use CEL Object types. For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2829"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "language": "json", "type": "api-response", "context": "Mutating Admission Policy | To use strings containing '/' and '~' as JSONPatch path keys, use jsonpatch.escapeKey(). For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2830"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "completion": "This json code Mutating Admission Policy | To use strings containing '/' and '~' as JSONPatch path keys, use jsonpatch.escapeKey(). For example:", "language": "json", "type": "api-response_explanation", "context": "Mutating Admission Policy | To use strings containing '/' and '~' as JSONPatch path keys, use jsonpatch.escapeKey(). 
For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2831"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "language": "json", "type": "api-response", "context": "Mutating Admission Policy | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2832"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "completion": "This json code Mutating Admission Policy | CEL expressions have access to the types needed to create JSON patches and objects:", "language": "json", "type": "api-response_explanation", "context": "Mutating Admission Policy | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2833"} {"prompt": "Explain this text code:\n\n--runtime-config=admissionregistration.k8s.io/v1beta1=true", "completion": "This text code Mutating Admission Policy | Mutating admission policies offer a declarative, in-process alternative to mutating admission webhooks.", "language": "text", "type": "code-example_explanation", "context": "Mutating Admission Policy | Mutating admission policies offer a declarative, in-process alternative to mutating admission webhooks.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2834"} {"prompt": "Explain this text code:\n\nmutatingadmissionpolicy/applyconfiguration-example.yaml", "completion": 
"This text code Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "language": "text", "type": "code-example_explanation", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2835"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"", "completion": " failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. 
You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2836"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "completion": "This yaml code Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. 
You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2837"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp\n namespace: default\nspec:\n ...", "completion": " initContainers:\n - name: mesh-proxy\n image: mesh/proxy:v1.0.0\n args: [\"proxy\", \"sidecar\"]\n restartPolicy: Always\n - name: myapp-initializer\n image: example/initializer:v1.0.0\n ...", "language": "yaml", "type": "pod-definition", "context": "Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2838"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp\n namespace: default\nspec:\n ...\n initContainers:\n - name: mesh-proxy\n image: mesh/proxy:v1.0.0\n args: [\"proxy\", \"sidecar\"]\n restartPolicy: Always\n - name: myapp-initializer\n image: example/initializer:v1.0.0\n ...", "completion": "This yaml code Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "language": "yaml", "type": "pod-definition_explanation", "context": "Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. 
A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2839"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar", "completion": " expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. 
ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2840"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "completion": "This yaml code Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. ref: https://github.com/google/cel-spec", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2841"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},", "completion": " JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "language": "json", "type": "api-response", "context": "Mutating Admission Policy | To define a JSON object for the patch operation value, use CEL Object types. 
For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2842"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},\n JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "completion": "This json code Mutating Admission Policy | To define a JSON object for the patch operation value, use CEL Object types. For example:", "language": "json", "type": "api-response_explanation", "context": "Mutating Admission Policy | To define a JSON object for the patch operation value, use CEL Object types. For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2843"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "language": "json", "type": "api-response", "context": "Mutating Admission Policy | To use strings containing '/' and '~' as JSONPatch path keys, use jsonpatch.escapeKey(). For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2844"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "completion": "This json code Mutating Admission Policy | To use strings containing '/' and '~' as JSONPatch path keys, use jsonpatch.escapeKey(). For example:", "language": "json", "type": "api-response_explanation", "context": "Mutating Admission Policy | To use strings containing '/' and '~' as JSONPatch path keys, use jsonpatch.escapeKey(). 
For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2845"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "language": "json", "type": "api-response", "context": "Mutating Admission Policy | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2846"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "completion": "This json code Mutating Admission Policy | CEL expressions have access to the types needed to create JSON patches and objects:", "language": "json", "type": "api-response_explanation", "context": "Mutating Admission Policy | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2847"} {"prompt": "Complete this yaml code:\n\nmutatingadmissionpolicy/applyconfiguration-example.yaml\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"", "completion": " failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n 
applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2848"} {"prompt": "Explain this yaml code:\n\nmutatingadmissionpolicy/applyconfiguration-example.yaml\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "completion": "This yaml code Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. 
You cannot specify an empty", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2849"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"", "completion": " failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. 
You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2850"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"ApplyConfiguration\"\n applyConfiguration:\n expression: >\n Object{\n spec: Object.spec{\n initContainers: [\n Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh/proxy:v1.0.0\",\n args: [\"proxy\", \"sidecar\"],\n restartPolicy: \"Always\"\n }\n ]\n }\n }", "completion": "This yaml code Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. You cannot specify an empty", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The .spec.mutations field consists of a list of expressions that evaluate to resource patches. The emitted patches may be either apply configurations or JSON Patch patches. 
You cannot specify an empty", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2851"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp\n namespace: default\nspec:\n ...", "completion": " initContainers:\n - name: mesh-proxy\n image: mesh/proxy:v1.0.0\n args: [\"proxy\", \"sidecar\"]\n restartPolicy: Always\n - name: myapp-initializer\n image: example/initializer:v1.0.0\n ...", "language": "yaml", "type": "pod-definition", "context": "Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2852"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp\n namespace: default\nspec:\n ...\n initContainers:\n - name: mesh-proxy\n image: mesh/proxy:v1.0.0\n args: [\"proxy\", \"sidecar\"]\n restartPolicy: Always\n - name: myapp-initializer\n image: example/initializer:v1.0.0\n ...", "completion": "This yaml code Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "language": "yaml", "type": "pod-definition_explanation", "context": "Mutating Admission Policy | Parameter resources allow a policy configuration to be separate from its definition. 
A policy can define paramKind, which outlines GVK of the parameter resource, and then a policy binding ties a polic", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2853"} {"prompt": "Complete this yaml code:\n\nmutatingadmissionpolicy/json-patch-example.yaml\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar", "completion": " expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. 
ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2854"} {"prompt": "Explain this yaml code:\n\nmutatingadmissionpolicy/json-patch-example.yaml\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "completion": "This yaml code Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. ref: https://github.com/google/cel-spec", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. 
ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2855"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar", "completion": " expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "language": "yaml", "type": "kubernetes-manifest", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. 
ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_completion_2856"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1beta1\nkind: MutatingAdmissionPolicy\nmetadata:\n name: \"sidecar-policy.example.com\"\nspec:\n paramKind:\n kind: Sidecar\n apiVersion: mutations.example.com/v1\n matchConstraints:\n resourceRules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n matchConditions:\n - name: does-not-already-have-sidecar\n expression: \"!object.spec.initContainers.exists(ic, ic.name == \\\"mesh-proxy\\\")\"\n failurePolicy: Fail\n reinvocationPolicy: IfNeeded\n mutations:\n - patchType: \"JSONPatch\"\n jsonPatch:\n expression: >\n [\n JSONPatch{\n op: \"add\", path: \"/spec/initContainers/-\",\n value: Object.spec.initContainers{\n name: \"mesh-proxy\",\n image: \"mesh-proxy/v1.0.0\",\n restartPolicy: \"Always\"\n }\n }\n ]", "completion": "This yaml code Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. ref: https://github.com/google/cel-spec", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Mutating Admission Policy | The expression will be evaluated by CEL to create a JSON patch. 
ref: https://github.com/google/cel-spec", "source": "https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/", "id": "k8s_00106_code_explain_2857"} {"prompt": "Complete this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "text", "type": "code-example", "context": "Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2858"} {"prompt": "Explain this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This text code Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2859"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2860"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2861"} {"prompt": "Explain this text code:\n\nresourcequota/pods-high created\nresourcequota/pods-medium created\nresourcequota/pods-low created", "completion": "This text code Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2862"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: 
pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2863"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Create a pod with priority \"high\". 
Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2864"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2865"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2866"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "Resource Quotas | Operators can use 
CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2867"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. 
Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2868"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2869"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2870"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", 
"completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2871"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2872"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", 
"completion": " hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2873"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2874"} {"prompt": "Explain this text code:\n\nresourcequota/pvcs-gold created\nresourcequota/pvcs-silver 
created\nresourcequota/pvcs-copper created", "completion": "This text code Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2875"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2876"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Create a pvc with volume attributes class \"gold\". 
Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2877"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2878"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2879"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. 
Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2880"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2881"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This bash code Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2882"} {"prompt": "Complete this text code:\n\nName: 
pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2883"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2884"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This bash code Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class 
quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2885"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2886"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code Resource Quotas | Print the manifest of the PVC using the following command:", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2887"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", 
"completion": " volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2888"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2889"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", 
"context": "Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2890"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. 
The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2891"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2892"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This yaml code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2893"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source":
"https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2894"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2895"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This yaml code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2896"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source":
"https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2897"} {"prompt": "Explain this text code:\n\nNAME AGE\ncompute-resources 30s\nobject-counts 32s", "completion": "This text code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2898"} {"prompt": "Explain this bash code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2899"} {"prompt": "Complete this text code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "limits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "language": "text", "type": "code-example", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2900"} {"prompt": "Explain this text code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nlimits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4",
"completion": "This text code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/.:", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/.:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2901"} {"prompt": "Explain this bash code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/.:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/.:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2902"} {"prompt": "Complete this text code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10", "completion": "persistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "language": "text", "type": "code-example", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/.:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2903"} {"prompt": "Explain this text code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10\npersistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "completion": "This text code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax 
count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2904"} {"prompt": "Explain this bash code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This bash code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2905"} {"prompt": "Explain this bash code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This bash code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2906"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "count/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "language": "text", "type": "code-example", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2907"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\ncount/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "completion": "This text code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2908"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2909"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "completion": "This yaml code Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2910"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | In this case, a pod creation 
will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2911"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2912"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This bash code Resource Quotas | In this case, a pod creation will be allowed if:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2913"} {"prompt": "Explain this text code:\n\n.storageclass.storage.k8s.io/requests.storage", "completion": "This text code Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2914"} {"prompt": "Explain this text code:\n\n.storageclass.storage.k8s.io/persistentvolumeclaims", "completion": "This text code Resource Quotas | For example, if you want to quota 
storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2915"} {"prompt": "Explain this text code:\n\ngold.storageclass.storage.k8s.io/requests.storage: 500Gi", "completion": "This text code Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2916"} {"prompt": "Explain this text code:\n\nbronze.storageclass.storage.k8s.io/requests.storage: 100Gi", "completion": "This text code Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. 
This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2917"} {"prompt": "Complete this yaml code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2918"} {"prompt": "Explain this yaml code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This yaml code Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2919"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: 
[\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2920"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2921"} {"prompt": "Explain this text code:\n\nresourcequota/pods-high created\nresourcequota/pods-medium created\nresourcequota/pods-low created", "completion": "This text code Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2922"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 
0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2923"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Create a pod with priority \"high\". 
Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2924"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2925"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2926"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "Resource Quotas | Operators can use 
CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2927"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. 
Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2928"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2929"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2930"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", 
"completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created has a resource quota object with CrossNamespacePodAffinity scope.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2931"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created has a resource quota object with CrossNamespacePodAffinity scope.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created has a resource quota object with CrossNamespacePodAffinity scope.", "source": 
"https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2933"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2934"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - 
operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2935"} {"prompt": "Explain this text code:\n\nresourcequota/pvcs-gold created\nresourcequota/pvcs-silver created\nresourcequota/pvcs-copper created", "completion": "This text code Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2936"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Create a pvc with volume attributes class \"gold\". 
Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2937"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2938"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2939"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code Resource Quotas | Apply it 
with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2940"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2941"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. 
Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2942"} {"prompt": "Explain this shell code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This shell code Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2943"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2944"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: 
pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2945"} {"prompt": "Explain this shell code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This shell code Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2946"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2947"} {"prompt": "Explain this 
text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code Resource Quotas | Print the manifest of the PVC using the following command:", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2948"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2949"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n 
status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2950"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2951"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. 
The quota can be configured to quota either value.", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2952"} {"prompt": "Complete this shell code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "shell", "type": "code-example", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2953"} {"prompt": "Explain this shell code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This shell code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2954"} {"prompt": "Explain this shell code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This shell code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources 
using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2955"} {"prompt": "Complete this shell code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "shell", "type": "code-example", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2956"} {"prompt": "Explain this shell code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This shell code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2957"} {"prompt": "Explain this shell code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": "This shell code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the 
syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2958"} {"prompt": "Explain this none code:\n\nNAME AGE\ncompute-resources 30s\nobject-counts 32s", "completion": "This none code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2959"} {"prompt": "Explain this shell code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This shell code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2960"} {"prompt": "Complete this none code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "limits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "language": "none", "type": "code-example", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2961"} {"prompt": "Explain 
this none code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nlimits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "completion": "This none code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2962"} {"prompt": "Explain this shell code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This shell code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2963"} {"prompt": "Complete this none code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10", "completion": "persistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "language": "none", "type": "code-example", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2964"} {"prompt": "Explain this none code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10\npersistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 
0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "completion": "This none code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2965"} {"prompt": "Explain this shell code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This shell code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2966"} {"prompt": "Explain this shell code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This shell code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2967"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "count/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "language": "text", "type": "code-example", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2968"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\ncount/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "completion": "This text code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "text", "type": "code-example_explanation", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2969"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2970"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "completion": "This yaml code Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2971"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | In this case, a pod creation 
will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2972"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2973"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This shell code Resource Quotas | In this case, a pod creation will be allowed if:", "language": "shell", "type": "code-example_explanation", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2974"} {"prompt": "Complete this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "text", "type": "code-example", "context": "Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2975"} {"prompt": "Explain this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This text code Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "text", "type": "code-example_explanation", "context": "Resource 
Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2976"} {"prompt": "Complete this yaml code:\n\npolicy/quota.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2977"} {"prompt": "Explain this yaml code:\n\npolicy/quota.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n 
matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2978"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2979"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: 
[\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2980"} {"prompt": "Complete this yaml code:\n\npolicy/high-priority-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu", "completion": " command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2981"} {"prompt": "Explain this yaml code:\n\npolicy/high-priority-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2982"} {"prompt": "Complete this yaml code:\n\napiVersion: 
v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2983"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2984"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2985"} {"prompt": "Explain this yaml code:\n\napiVersion: 
v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2986"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2987"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: 
Exists", "completion": "This yaml code Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2988"} {"prompt": "Complete this yaml code:\n\npolicy/quota-vac.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver", "completion": " spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2989"} {"prompt": "Explain this yaml code:\n\npolicy/quota-vac.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: 
\"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2990"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply the YAML using kubectl 
create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2991"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2992"} {"prompt": "Complete this yaml code:\n\npolicy/gold-vac-pvc.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2993"} {"prompt": "Explain this yaml code:\n\npolicy/gold-vac-pvc.yaml\napiVersion: v1\nkind: 
PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2994"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2995"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2996"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This bash code Resource Quotas | Verify that \"Used\" stats for \"silver\" volume 
attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2997"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This bash code Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_2998"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Wait a moment for the volume modification to complete, then 
verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_2999"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3000"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_3001"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This yaml code Resource Quotas
| kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3002"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3003"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_3004"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This yaml code Resource Quotas |
kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3005"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3006"} {"prompt": "Explain this bash code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3007"} {"prompt": "Explain this bash code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This bash code Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source":
"https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3008"} {"prompt": "Explain this bash code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This bash code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3009"} {"prompt": "Explain this bash code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This bash code Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3010"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_3011"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "completion": "This yaml code Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3012"} {"prompt": "Complete this yaml code:\n\npolicy/priority-class-resourcequota.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services", "completion": "spec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": 
"Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_3013"} {"prompt": "Explain this yaml code:\n\npolicy/priority-class-resourcequota.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3014"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_completion_3015"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3016"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This bash code Resource Quotas | In this case, a pod creation will be allowed if:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/resource-quotas/", "id": "k8s_00108_code_explain_3017"} {"prompt": "Complete this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Create an encrypted device backed by the allocated storage\ncryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap", "completion": "\n# Format the swap space\nmkswap /dev/mapper/cryptswap\n\n# Activate the swap space for paging\nswapon /dev/mapper/cryptswap", "language": "text", "type": "code-example", "context": "Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3018"} {"prompt": "Explain this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Create an encrypted device backed by the allocated storage\ncryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap\n\n# Format the swap space\nmkswap /dev/mapper/cryptswap\n\n# Activate the swap space for paging\nswapon /dev/mapper/cryptswap", "completion": "This text code Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap 
/swapfile # Activate the", "language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3019"} {"prompt": "Complete this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n", "completion": "# Format the swap space\nmkswap /swapfile\n\n# Activate the swap space for paging\nswapon /swapfile", "language": "text", "type": "code-example", "context": "Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3020"} {"prompt": "Explain this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Format the swap space\nmkswap /swapfile\n\n# Activate the swap space for paging\nswapon /swapfile", "completion": "This text code Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3021"} {"prompt": "Explain this text code:\n\nFilename Type\t\tSize\t\tUsed\t\tPriority\n/dev/dm-0 partition \t4194300\t\t0\t\t-2", "completion": "This text code Configuring swap memory on Kubernetes nodes | Using free -h:", 
"language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | Using free -h:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3022"} {"prompt": "Explain this text code:\n\ntotal used free shared buff/cache available\nMem: 3.8Gi 1.3Gi 249Mi 25Mi 2.5Gi 2.5Gi\nSwap: 4.0Gi 0B 4.0Gi", "completion": "This text code Configuring swap memory on Kubernetes nodes | After setting up swap, to start the swap file at boot time, you typically either set up a systemd unit to activate (encrypted) swap, or you add a line similar to /swapfile swap swap defaults 0 0 into", "language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | After setting up swap, to start the swap file at boot time, you typically either set up a systemd unit to activate (encrypted) swap, or you add a line similar to /swapfile swap swap defaults 0 0 into", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3023"} {"prompt": "Complete this text code:\n\n# this fragment goes into the kubelet's configuration file\n failSwapOn: false", "completion": " memorySwap:\n swapBehavior: LimitedSwap", "language": "text", "type": "code-example", "context": "Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3024"} {"prompt": "Explain this text code:\n\n# this fragment goes into the kubelet's configuration file\n failSwapOn: false\n memorySwap:\n swapBehavior: LimitedSwap", "completion": "This text code Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. 
Typically you do that by running:", "language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3025"} {"prompt": "Complete this bash code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Create an encrypted device backed by the allocated storage\ncryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap", "completion": "\n# Format the swap space\nmkswap /dev/mapper/cryptswap\n\n# Activate the swap space for paging\nswapon /dev/mapper/cryptswap", "language": "bash", "type": "shell-command", "context": "Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3026"} {"prompt": "Explain this bash code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Create an encrypted device backed by the allocated storage\ncryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap\n\n# Format the swap space\nmkswap /dev/mapper/cryptswap\n\n# Activate the swap space for paging\nswapon /dev/mapper/cryptswap", "completion": "This bash code Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", 
"language": "bash", "type": "shell-command_explanation", "context": "Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3027"} {"prompt": "Complete this bash code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n", "completion": "# Format the swap space\nmkswap /swapfile\n\n# Activate the swap space for paging\nswapon /swapfile", "language": "bash", "type": "shell-command", "context": "Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3028"} {"prompt": "Explain this bash code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Format the swap space\nmkswap /swapfile\n\n# Activate the swap space for paging\nswapon /swapfile", "completion": "This bash code Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "language": "bash", "type": "shell-command_explanation", "context": "Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3029"} {"prompt": "Explain this text code:\n\nFilename Type\t\tSize\t\tUsed\t\tPriority\n/dev/dm-0 partition \t4194300\t\t0\t\t-2", "completion": "This text code Configuring swap memory on Kubernetes nodes | Using free -h:", "language": "text", "type": 
"code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | Using free -h:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3030"} {"prompt": "Explain this text code:\n\ntotal used free shared buff/cache available\nMem: 3.8Gi 1.3Gi 249Mi 25Mi 2.5Gi 2.5Gi\nSwap: 4.0Gi 0B 4.0Gi", "completion": "This text code Configuring swap memory on Kubernetes nodes | After setting up swap, to start the swap file at boot time, you typically either set up a systemd unit to activate (encrypted) swap, or you add a line similar to /swapfile swap swap defaults 0 0 into", "language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | After setting up swap, to start the swap file at boot time, you typically either set up a systemd unit to activate (encrypted) swap, or you add a line similar to /swapfile swap swap defaults 0 0 into", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3031"} {"prompt": "Complete this yaml code:\n\n# this fragment goes into the kubelet's configuration file\n failSwapOn: false", "completion": " memorySwap:\n swapBehavior: LimitedSwap", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3032"} {"prompt": "Explain this yaml code:\n\n# this fragment goes into the kubelet's configuration file\n failSwapOn: false\n memorySwap:\n swapBehavior: LimitedSwap", "completion": "This yaml code Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. 
Typically you do that by running:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3033"} {"prompt": "Complete this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Create an encrypted device backed by the allocated storage\ncryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap", "completion": "\n# Format the swap space\nmkswap /dev/mapper/cryptswap\n\n# Activate the swap space for paging\nswapon /dev/mapper/cryptswap", "language": "text", "type": "code-example", "context": "Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3034"} {"prompt": "Explain this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Create an encrypted device backed by the allocated storage\ncryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap\n\n# Format the swap space\nmkswap /dev/mapper/cryptswap\n\n# Activate the swap space for paging\nswapon /dev/mapper/cryptswap", "completion": "This text code Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", 
"language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | An unencrypted swap file can be set up as follows.# Allocate storage and restrict access fallocate --length 4GiB /swapfile chmod 600 /swapfile # Format the swap space mkswap /swapfile # Activate the", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3035"} {"prompt": "Complete this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n", "completion": "# Format the swap space\nmkswap /swapfile\n\n# Activate the swap space for paging\nswapon /swapfile", "language": "text", "type": "code-example", "context": "Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3036"} {"prompt": "Explain this text code:\n\n# Allocate storage and restrict access\nfallocate --length 4GiB /swapfile\nchmod 600 /swapfile\n\n# Format the swap space\nmkswap /swapfile\n\n# Activate the swap space for paging\nswapon /swapfile", "completion": "This text code Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | Swap can be verified to be enabled with both swapon -s command or the free command.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3037"} {"prompt": "Complete this text code:\n\n# this fragment goes into the kubelet's configuration file\n failSwapOn: false", "completion": " memorySwap:\n swapBehavior: LimitedSwap", "language": "text", "type": "code-example", "context": "Configuring 
swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_completion_3038"} {"prompt": "Explain this text code:\n\n# this fragment goes into the kubelet's configuration file\n failSwapOn: false\n memorySwap:\n swapBehavior: LimitedSwap", "completion": "This text code Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:", "language": "text", "type": "code-example_explanation", "context": "Configuring swap memory on Kubernetes nodes | In order for these configurations to take effect, kubelet needs to be restarted. Typically you do that by running:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/provision-swap-memory/", "id": "k8s_00109_code_explain_3039"} {"prompt": "Explain this text code:\n\n# On your system, this configuration file could have a different name\npager /etc/apt/sources.list.d/kubernetes.list", "completion": "This text code Changing The Kubernetes Package Repository | If you see a line similar to:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a line similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3040"} {"prompt": "Explain this text code:\n\ndeb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /", "completion": "This text code Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. 
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3041"} {"prompt": "Explain this text code:\n\n# On your system, this configuration file could have a different name\ncat /etc/yum.repos.d/kubernetes.repo", "completion": "This text code Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3042"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/", "completion": "enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. 
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3043"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "completion": "This json code Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3044"} {"prompt": "Explain this text code:\n\n# On your system, this configuration file could have a different name\ncat /etc/zypp/repos.d/kubernetes.repo", "completion": "This text code Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3045"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/", "completion": 
"enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3046"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "completion": "This json code Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. 
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3047"} {"prompt": "Explain this text code:\n\ndeb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /", "completion": "This text code Changing The Kubernetes Package Repository | Change the version in the URL to the next available minor release, for example:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | Change the version in the URL to the next available minor release, for example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3048"} {"prompt": "Explain this text code:\n\ndeb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /", "completion": "This text code Changing The Kubernetes Package Repository | Save the file and exit your text editor. Continue following the relevant upgrade instructions.", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | Save the file and exit your text editor. 
Continue following the relevant upgrade instructions.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3049"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/", "completion": "enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | Change the version in these URLs to the next available minor release, for example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3050"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "completion": "This json code Changing The Kubernetes Package Repository | Change the version in these URLs to the next available minor release, for example:", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | Change the version in these URLs to the next available minor release, for example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3051"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/", "completion": "enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | Save 
the file and exit your text editor. Continue following the relevant upgrade instructions.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3052"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "completion": "This json code Changing The Kubernetes Package Repository | Save the file and exit your text editor. Continue following the relevant upgrade instructions.", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | Save the file and exit your text editor. Continue following the relevant upgrade instructions.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3053"} {"prompt": "Explain this shell code:\n\n# On your system, this configuration file could have a different name\npager /etc/apt/sources.list.d/kubernetes.list", "completion": "This shell code Changing The Kubernetes Package Repository | If you see a line similar to:", "language": "shell", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a line similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3054"} {"prompt": "Explain this text code:\n\ndeb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /", "completion": "This text code Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. 
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3055"} {"prompt": "Explain this shell code:\n\n# On your system, this configuration file could have a different name\ncat /etc/yum.repos.d/kubernetes.repo", "completion": "This shell code Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "language": "shell", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3056"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/", "completion": "enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. 
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3057"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "completion": "This json code Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3058"} {"prompt": "Explain this shell code:\n\n# On your system, this configuration file could have a different name\ncat /etc/zypp/repos.d/kubernetes.repo", "completion": "This shell code Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "language": "shell", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3059"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/", 
"completion": "enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3060"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl", "completion": "This json code Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | You're using the Kubernetes package repositories and this guide applies to you. 
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official anno", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3061"} {"prompt": "Explain this text code:\n\ndeb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /", "completion": "This text code Changing The Kubernetes Package Repository | Change the version in the URL to the next available minor release, for example:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | Change the version in the URL to the next available minor release, for example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3062"} {"prompt": "Explain this text code:\n\ndeb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /", "completion": "This text code Changing The Kubernetes Package Repository | Save the file and exit your text editor. Continue following the relevant upgrade instructions.", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | Save the file and exit your text editor. 
Continue following the relevant upgrade instructions.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3063"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/", "completion": "enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | Change the version in these URLs to the next available minor release, for example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3064"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "completion": "This json code Changing The Kubernetes Package Repository | Change the version in these URLs to the next available minor release, for example:", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | Change the version in these URLs to the next available minor release, for example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3065"} {"prompt": "Complete this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/", "completion": "enabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "language": "json", "type": "api-response", "context": "Changing The Kubernetes Package Repository | Save 
the file and exit your text editor. Continue following the relevant upgrade instructions.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_completion_3066"} {"prompt": "Explain this json code:\n\n[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni", "completion": "This json code Changing The Kubernetes Package Repository | Save the file and exit your text editor. Continue following the relevant upgrade instructions.", "language": "json", "type": "api-response_explanation", "context": "Changing The Kubernetes Package Repository | Save the file and exit your text editor. Continue following the relevant upgrade instructions.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3067"} {"prompt": "Explain this text code:\n\n# On your system, this configuration file could have a different name\npager /etc/apt/sources.list.d/kubernetes.list", "completion": "This text code Changing The Kubernetes Package Repository | If you see a line similar to:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a line similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3068"} {"prompt": "Explain this text code:\n\n# On your system, this configuration file could have a different name\ncat /etc/yum.repos.d/kubernetes.repo", "completion": "This text code Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a 
baseurl similar to the baseurl in the output below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3069"} {"prompt": "Explain this text code:\n\n# On your system, this configuration file could have a different name\ncat /etc/zypp/repos.d/kubernetes.repo", "completion": "This text code Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "language": "text", "type": "code-example_explanation", "context": "Changing The Kubernetes Package Repository | If you see a baseurl similar to the baseurl in the output below:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/", "id": "k8s_00110_code_explain_3070"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\" | jq '.'", "completion": "This bash code Resource metrics pipeline | Here is the same API call using curl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource metrics pipeline | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3071"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes/minikube", "completion": "This text code Resource metrics pipeline | Sample response:", "language": "text", "type": "code-example_explanation", "context": "Resource metrics pipeline | Sample response:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3072"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"NodeMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"minikube\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\",\n \"creationTimestamp\": \"2022-01-27T18:48:43Z\"", 
"completion": " },\n \"timestamp\": \"2022-01-27T18:48:33Z\",\n \"window\": \"30s\",\n \"usage\": {\n \"cpu\": \"487558164n\",\n \"memory\": \"732212Ki\"\n }\n}", "language": "json", "type": "api-response", "context": "Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_completion_3073"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"NodeMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"minikube\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\",\n \"creationTimestamp\": \"2022-01-27T18:48:43Z\"\n },\n \"timestamp\": \"2022-01-27T18:48:33Z\",\n \"window\": \"30s\",\n \"usage\": {\n \"cpu\": \"487558164n\",\n \"memory\": \"732212Ki\"\n }\n}", "completion": "This json code Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "language": "json", "type": "api-response_explanation", "context": "Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3074"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\" | jq '.'", "completion": "This bash code Resource metrics pipeline | Here is the same API call using curl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource metrics pipeline | Here is the same API call using curl:", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3075"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube", "completion": "This text code Resource metrics pipeline | Sample response:", "language": "text", "type": "code-example_explanation", "context": "Resource metrics pipeline | Sample response:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3076"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"PodMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"kube-scheduler-minikube\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\",\n \"creationTimestamp\": \"2022-01-27T19:25:00Z\"\n },\n \"timestamp\": \"2022-01-27T19:24:31Z\",", "completion": " \"window\": \"30s\",\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"usage\": {\n \"cpu\": \"9559630n\",\n \"memory\": \"22244Ki\"\n }\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. 
You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_completion_3077"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"PodMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"kube-scheduler-minikube\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\",\n \"creationTimestamp\": \"2022-01-27T19:25:00Z\"\n },\n \"timestamp\": \"2022-01-27T19:24:31Z\",\n \"window\": \"30s\",\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"usage\": {\n \"cpu\": \"9559630n\",\n \"memory\": \"22244Ki\"\n }\n }\n ]\n}", "completion": "This json code Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "language": "json", "type": "api-response_explanation", "context": "Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. 
You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3078"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\" | jq '.'", "completion": "This shell code Resource metrics pipeline | Here is the same API call using curl:", "language": "shell", "type": "code-example_explanation", "context": "Resource metrics pipeline | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3079"} {"prompt": "Explain this shell code:\n\ncurl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes/minikube", "completion": "This shell code Resource metrics pipeline | Sample response:", "language": "shell", "type": "code-example_explanation", "context": "Resource metrics pipeline | Sample response:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3080"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"NodeMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"minikube\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\",\n \"creationTimestamp\": \"2022-01-27T18:48:43Z\"", "completion": " },\n \"timestamp\": \"2022-01-27T18:48:33Z\",\n \"window\": \"30s\",\n \"usage\": {\n \"cpu\": \"487558164n\",\n \"memory\": \"732212Ki\"\n }\n}", "language": "json", "type": "api-response", "context": "Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_completion_3081"} {"prompt": "Explain this json 
code:\n\n{\n \"kind\": \"NodeMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"minikube\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\",\n \"creationTimestamp\": \"2022-01-27T18:48:43Z\"\n },\n \"timestamp\": \"2022-01-27T18:48:33Z\",\n \"window\": \"30s\",\n \"usage\": {\n \"cpu\": \"487558164n\",\n \"memory\": \"732212Ki\"\n }\n}", "completion": "This json code Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "language": "json", "type": "api-response_explanation", "context": "Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3082"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\" | jq '.'", "completion": "This shell code Resource metrics pipeline | Here is the same API call using curl:", "language": "shell", "type": "code-example_explanation", "context": "Resource metrics pipeline | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3083"} {"prompt": "Explain this shell code:\n\ncurl http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube", "completion": "This shell code Resource metrics pipeline | Sample response:", "language": "shell", "type": "code-example_explanation", "context": "Resource metrics pipeline | Sample response:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3084"} 
{"prompt": "Complete this json code:\n\n{\n \"kind\": \"PodMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"kube-scheduler-minikube\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\",\n \"creationTimestamp\": \"2022-01-27T19:25:00Z\"\n },\n \"timestamp\": \"2022-01-27T19:24:31Z\",", "completion": " \"window\": \"30s\",\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"usage\": {\n \"cpu\": \"9559630n\",\n \"memory\": \"22244Ki\"\n }\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_completion_3085"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"PodMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"kube-scheduler-minikube\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\",\n \"creationTimestamp\": \"2022-01-27T19:25:00Z\"\n },\n \"timestamp\": \"2022-01-27T19:24:31Z\",\n \"window\": \"30s\",\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"usage\": {\n \"cpu\": \"9559630n\",\n \"memory\": \"22244Ki\"\n }\n }\n ]\n}", "completion": "This json code Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "language": "json", "type": "api-response_explanation", "context": "Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. 
You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3086"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\" | jq '.'", "completion": "This bash code Resource metrics pipeline | Here is the same API call using curl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource metrics pipeline | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3087"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes/minikube", "completion": "This text code Resource metrics pipeline | Sample response:", "language": "text", "type": "code-example_explanation", "context": "Resource metrics pipeline | Sample response:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3088"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"NodeMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"minikube\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\",\n \"creationTimestamp\": \"2022-01-27T18:48:43Z\"", "completion": " },\n \"timestamp\": \"2022-01-27T18:48:33Z\",\n \"window\": \"30s\",\n \"usage\": {\n \"cpu\": \"487558164n\",\n \"memory\": \"732212Ki\"\n }\n}", "language": "json", "type": "api-response", "context": "Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_completion_3089"} {"prompt": "Explain this json 
code:\n\n{\n \"kind\": \"NodeMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"minikube\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/nodes/minikube\",\n \"creationTimestamp\": \"2022-01-27T18:48:43Z\"\n },\n \"timestamp\": \"2022-01-27T18:48:33Z\",\n \"window\": \"30s\",\n \"usage\": {\n \"cpu\": \"487558164n\",\n \"memory\": \"732212Ki\"\n }\n}", "completion": "This json code Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "language": "json", "type": "api-response_explanation", "context": "Resource metrics pipeline | Here is an example of the Metrics API request for a kube-scheduler-minikube pod contained in the kube-system namespace and piped through jq for easier reading:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3090"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\" | jq '.'", "completion": "This bash code Resource metrics pipeline | Here is the same API call using curl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource metrics pipeline | Here is the same API call using curl:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3091"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube", "completion": "This text code Resource metrics pipeline | Sample response:", "language": "text", "type": "code-example_explanation", "context": "Resource metrics pipeline | Sample response:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3092"} 
{"prompt": "Complete this json code:\n\n{\n \"kind\": \"PodMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"kube-scheduler-minikube\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\",\n \"creationTimestamp\": \"2022-01-27T19:25:00Z\"\n },\n \"timestamp\": \"2022-01-27T19:24:31Z\",", "completion": " \"window\": \"30s\",\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"usage\": {\n \"cpu\": \"9559630n\",\n \"memory\": \"22244Ki\"\n }\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_completion_3093"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"PodMetrics\",\n \"apiVersion\": \"metrics.k8s.io/v1beta1\",\n \"metadata\": {\n \"name\": \"kube-scheduler-minikube\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube\",\n \"creationTimestamp\": \"2022-01-27T19:25:00Z\"\n },\n \"timestamp\": \"2022-01-27T19:24:31Z\",\n \"window\": \"30s\",\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"usage\": {\n \"cpu\": \"9559630n\",\n \"memory\": \"22244Ki\"\n }\n }\n ]\n}", "completion": "This json code Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "language": "json", "type": "api-response_explanation", "context": "Resource metrics pipeline | The Metrics API is defined in the k8s.io/metrics repository. 
You must enable the API aggregation layer and register an APIService for the metrics.k8s.io API.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/", "id": "k8s_00111_code_explain_3094"} {"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer NAME [--tcp=port:targetPort] [--dry-run=server|client|none]", "completion": "This bash code kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/", "id": "k8s_00112_code_explain_3095"} {"prompt": "Explain this text code:\n\n# Create a new LoadBalancer service named my-lbs\n kubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This text code kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/", "id": "k8s_00112_code_explain_3096"} {"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer NAME [--tcp=port:targetPort] [--dry-run=server|client|none]", "completion": "This bash code kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/", "id": "k8s_00112_code_explain_3097"} {"prompt": "Explain this text code:\n\n# Create a new LoadBalancer service named my-lbs\n kubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This text code kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create service loadbalancer | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_loadbalancer/", "id": "k8s_00112_code_explain_3098"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization", "completion": " initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_completion_3099"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization\n initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "completion": "This yaml code Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Pod 
Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3100"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/init-containers.yaml", "completion": "This bash code Configure Pod Initialization | Verify that the nginx container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Pod Initialization | Verify that the nginx container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3101"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ninit-demo 1/1 Running 0 1m", "completion": "This text code Configure Pod Initialization | Get a shell into the nginx container running in the init-demo Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Pod Initialization | Get a shell into the nginx container running in the init-demo Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3102"} {"prompt": "Explain this text code:\n\nroot@nginx:~# apt-get update\nroot@nginx:~# apt-get install curl\nroot@nginx:~# curl localhost", "completion": "This text code Configure Pod Initialization | The output shows that nginx is serving the web page that was written by the init container:", "language": "text", "type": "code-example_explanation", "context": "Configure Pod Initialization | The output shows that nginx is serving the web page that was written by the init container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3103"} {"prompt": "Complete this text code:\n\n
\nhttp://info.cern.ch\n\n", "completion": "\nhttp://info.cern.ch - home of the first website\n\n ...\n- Browse the first website\n- \n ...", "language": "text", "type": "code-example", "context": "Configure Pod Initialization |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_completion_3104"} {"prompt": "Explain this text code:\n\n
\nhttp://info.cern.ch\n\n\nhttp://info.cern.ch - home of the first website\n\n ...\n- Browse the first website\n- \n ...", "completion": "This text code Configure Pod Initialization |", "language": "text", "type": "code-example_explanation", "context": "Configure Pod Initialization |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3105"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization", "completion": " initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_completion_3106"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization\n initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "completion": "This yaml code Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application 
container share.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3107"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/init-containers.yaml", "completion": "This shell code Configure Pod Initialization | Verify that the nginx container is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure Pod Initialization | Verify that the nginx container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3108"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ninit-demo 1/1 Running 0 1m", "completion": "This text code Configure Pod Initialization | Get a shell into the nginx container running in the init-demo Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Pod Initialization | Get a shell into the nginx container running in the init-demo Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3109"} {"prompt": "Explain this text code:\n\nroot@nginx:~# apt-get update\nroot@nginx:~# apt-get install curl\nroot@nginx:~# curl localhost", "completion": "This text code Configure Pod Initialization | The output shows that nginx is serving the web page that was written by the init container:", "language": "text", "type": "code-example_explanation", "context": "Configure Pod Initialization | The output shows that nginx is serving the web page that was written by the init container:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3110"} {"prompt": "Complete this html code:\n\n
\nhttp://info.cern.ch\n\n", "completion": "\nhttp://info.cern.ch - home of the first website\n\n ...\n- Browse the first website\n- \n ...", "language": "html", "type": "code-example", "context": "Configure Pod Initialization |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_completion_3111"} {"prompt": "Explain this html code:\n\n
\nhttp://info.cern.ch\n\n\nhttp://info.cern.ch - home of the first website\n\n ...\n- Browse the first website\n- \n ...", "completion": "This html code Configure Pod Initialization |", "language": "html", "type": "code-example_explanation", "context": "Configure Pod Initialization |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3112"} {"prompt": "Complete this yaml code:\n\npods/init-containers.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization", "completion": " initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_completion_3113"} {"prompt": "Explain this yaml code:\n\npods/init-containers.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization\n initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "completion": "This yaml code Configure Pod Initialization | In the configuration file, you can see that the Pod 
has a Volume that the init container and the application container share.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3114"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization", "completion": " initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_completion_3115"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: init-demo\nspec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80\n volumeMounts:\n - name: workdir\n mountPath: /usr/share/nginx/html\n # These containers are run during pod initialization\n initContainers:\n - name: install\n image: busybox:1.28\n command:\n - wget\n - \"-O\"\n - \"/work-dir/index.html\"\n - http://info.cern.ch\n volumeMounts:\n - name: workdir\n mountPath: \"/work-dir\"\n dnsPolicy: Default\n volumes:\n - name: workdir\n emptyDir: {}", "completion": "This yaml code Configure 
Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Pod Initialization | In the configuration file, you can see that the Pod has a Volume that the init container and the application container share.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3116"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/init-containers.yaml", "completion": "This bash code Configure Pod Initialization | Verify that the nginx container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Pod Initialization | Verify that the nginx container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3117"} {"prompt": "Complete this text code:\n\n
\nhttp://info.cern.ch\n\n", "completion": "\nhttp://info.cern.ch - home of the first website\n\n ...\n- Browse the first website\n- \n ...", "language": "text", "type": "code-example", "context": "Configure Pod Initialization |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_completion_3118"} {"prompt": "Explain this text code:\n\n
\nhttp://info.cern.ch\n\n\nhttp://info.cern.ch - home of the first website\n\n ...\n- Browse the first website\n- \n ...", "completion": "This text code Configure Pod Initialization |", "language": "text", "type": "code-example_explanation", "context": "Configure Pod Initialization |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/", "id": "k8s_00114_code_explain_3119"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1 # see compatibility note\n kind: PodSecurityConfiguration\n # Defaults applied when a mode label is not set.\n #\n # Level label values must be one of:\n # - \"privileged\" (default)\n # - \"baseline\"\n # - \"restricted\"\n #\n # Version label values must be one of:", "completion": " # - \"latest\" (default) \n # - specific version like \"v1.34\"\n defaults:\n enforce: \"privileged\"\n enforce-version: \"latest\"\n audit: \"privileged\"\n audit-version: \"latest\"\n warn: \"privileged\"\n warn-version: \"latest\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: []", "language": "yaml", "type": "pod-definition", "context": "Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/", "id": "k8s_00115_code_completion_3120"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1 # see compatibility note\n kind: PodSecurityConfiguration\n # Defaults applied when a mode label is not set.\n #\n # Level label values must be one of:\n # - \"privileged\" (default)\n # - \"baseline\"\n # - \"restricted\"\n #\n # Version label values 
must be one of:\n # - \"latest\" (default) \n # - specific version like \"v1.34\"\n defaults:\n enforce: \"privileged\"\n enforce-version: \"latest\"\n audit: \"privileged\"\n audit-version: \"latest\"\n warn: \"privileged\"\n warn-version: \"latest\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: []", "completion": "This yaml code Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "language": "yaml", "type": "pod-definition_explanation", "context": "Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/", "id": "k8s_00115_code_explain_3121"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1 # see compatibility note\n kind: PodSecurityConfiguration\n # Defaults applied when a mode label is not set.\n #\n # Level label values must be one of:\n # - \"privileged\" (default)\n # - \"baseline\"\n # - \"restricted\"\n #\n # Version label values must be one of:", "completion": " # - \"latest\" (default) \n # - specific version like \"v1.34\"\n defaults:\n enforce: \"privileged\"\n enforce-version: \"latest\"\n audit: \"privileged\"\n audit-version: \"latest\"\n warn: \"privileged\"\n warn-version: \"latest\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: []", "language": "yaml", "type": "pod-definition", "context": "Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/", "id": "k8s_00115_code_completion_3122"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1 # see compatibility note\n kind: PodSecurityConfiguration\n # Defaults applied when a mode label is not set.\n #\n # Level label values must be one of:\n # - \"privileged\" (default)\n # - \"baseline\"\n # - \"restricted\"\n #\n # Version label values must be one of:\n # - \"latest\" (default) \n # - specific version like \"v1.34\"\n defaults:\n enforce: \"privileged\"\n enforce-version: \"latest\"\n audit: \"privileged\"\n audit-version: \"latest\"\n warn: \"privileged\"\n warn-version: \"latest\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: []", "completion": "This yaml code Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "language": "yaml", "type": "pod-definition_explanation", "context": "Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/", "id": "k8s_00115_code_explain_3123"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1 # see compatibility note\n kind: PodSecurityConfiguration\n # Defaults applied when a mode label is not set.\n #\n # Level label values must be one of:\n # - \"privileged\" (default)\n # - \"baseline\"\n # - \"restricted\"\n #\n # Version label values must be one of:", "completion": " # - \"latest\" (default) \n # - 
specific version like \"v1.34\"\n defaults:\n enforce: \"privileged\"\n enforce-version: \"latest\"\n audit: \"privileged\"\n audit-version: \"latest\"\n warn: \"privileged\"\n warn-version: \"latest\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: []", "language": "yaml", "type": "pod-definition", "context": "Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/", "id": "k8s_00115_code_completion_3124"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1 # see compatibility note\n kind: PodSecurityConfiguration\n # Defaults applied when a mode label is not set.\n #\n # Level label values must be one of:\n # - \"privileged\" (default)\n # - \"baseline\"\n # - \"restricted\"\n #\n # Version label values must be one of:\n # - \"latest\" (default) \n # - specific version like \"v1.34\"\n defaults:\n enforce: \"privileged\"\n enforce-version: \"latest\"\n audit: \"privileged\"\n audit-version: \"latest\"\n warn: \"privileged\"\n warn-version: \"latest\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: []", "completion": "This yaml code Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "language": "yaml", "type": "pod-definition_explanation", "context": "Enforce Pod Security Standards by Configuring the Built-in Admission Controller |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/", "id": 
"k8s_00115_code_explain_3125"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3126"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3127"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3128"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3129"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3130"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3131"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3132"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3133"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3134"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: 
ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3135"} {"prompt": "Complete this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "text", "type": "code-example", "context": "Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3136"} {"prompt": "Explain this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : 
BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This text code Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "language": "text", "type": "code-example_explanation", "context": "Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3137"} {"prompt": "Explain this text code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml", "completion": "This text code Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "text", "type": "code-example_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3138"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3139"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3140"} {"prompt": "Explain this text code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml", "completion": "This text code Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "text", "type": "code-example_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3141"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3142"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3143"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3144"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3145"} {"prompt": "Explain this text code:\n\n--runtime-config=certificates.k8s.io/v1beta1/clustertrustbundles=true", "completion": "This text code Projected Volumes | The clusterTrustBundle projected volume source injects the contents of one or more ClusterTrustBundle objects as an automatically-updating file in the container filesystem.", "language": "text", "type": "code-example_explanation", "context": "Projected Volumes | The clusterTrustBundle projected volume source injects the contents of one or more ClusterTrustBundle objects as an automatically-updating file in the container filesystem.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3146"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3147"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3148"} {"prompt": "Explain this text code:\n\n--runtime-config=certificates.k8s.io/v1alpha1/podcertificaterequests=true", "completion": "This text code Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "language": "text", "type": "code-example_explanation", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3149"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3150"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n 
containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3151"} {"prompt": "Complete this powershell code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "powershell", "type": "code-example", "context": "Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3152"} {"prompt": "Explain this powershell code:\n\nPS 
C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This powershell code Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "language": "powershell", "type": "code-example_explanation", "context": "Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3153"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:", "completion": " - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": 
"Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3154"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3155"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3156"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3157"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true", "completion": " volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3158"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3159"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3160"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3161"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-service-account-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:", "completion": " - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3162"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-service-account-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3163"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3164"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3165"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-clustertrustbundle.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true", "completion": " serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3166"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-clustertrustbundle.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3167"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3168"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3169"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-podcertificate.yaml\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian", "completion": " name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3170"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-podcertificate.yaml\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: 
Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3171"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "Projected Volumes | The proposal for file permission handling in projected service 
account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3172"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3173"} {"prompt": "Complete this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow 
FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "text", "type": "code-example", "context": "Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_completion_3174"} {"prompt": "Explain this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This text code Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "language": "text", "type": "code-example_explanation", "context": "Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/projected-volumes/", "id": "k8s_00117_code_explain_3175"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0", "completion": "worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 
...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3176"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0\nworker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3177"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0", "completion": "worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes using Node Affinity | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3178"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0\nworker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes using Node Affinity 
| In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3179"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:", "completion": " - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3180"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3181"} {"prompt": "Explain this bash 
code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml", "completion": "This bash code Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3182"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Assign Pods to Nodes using Node Affinity | This manifest describes a Pod that has a preferredDuringSchedulingIgnoredDuringExecution node affinity,disktype: ssd. This means that the pod will prefer a node that has a disktype=ssd label.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | This manifest describes a Pod that has a preferredDuringSchedulingIgnoredDuringExecution node affinity,disktype: ssd. 
This means that the pod will prefer a node that has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3183"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3184"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3185"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx-preferred-affinity.yaml", "completion": "This bash code Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your 
chosen node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3186"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Assign Pods to Nodes using Node Affinity | Learn more about Node Affinity.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | Learn more about Node Affinity.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3187"} {"prompt": "Complete this shell code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0", "completion": "worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "shell", "type": "code-example", "context": "Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3188"} {"prompt": "Explain this shell code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0\nworker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This shell code Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3189"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0", "completion": "worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes using Node Affinity | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3190"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0\nworker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes using Node Affinity | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3191"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:", "completion": " - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node 
Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3192"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3193"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml", "completion": "This shell code Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3194"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Assign Pods to Nodes using Node Affinity | This manifest describes a Pod that has a preferredDuringSchedulingIgnoredDuringExecution node affinity,disktype: ssd. 
This means that the pod will prefer a node that has a disktype=ssd label.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | This manifest describes a Pod that has a preferredDuringSchedulingIgnoredDuringExecution node affinity,disktype: ssd. This means that the pod will prefer a node that has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3195"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3196"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3197"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx-preferred-affinity.yaml", "completion": "This shell code Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3198"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Assign Pods to Nodes using Node Affinity | Learn more about Node Affinity.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | Learn more about Node Affinity.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3199"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0", "completion": "worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3200"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0\nworker1 Ready 1d v1.13.0 
...,kubernetes.io/hostname=worker1\nworker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes using Node Affinity | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3201"} {"prompt": "Complete this yaml code:\n\npods/pod-nginx-required-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:", "completion": " nodeSelectorTerms:\n - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3202"} {"prompt": "Explain this yaml code:\n\npods/pod-nginx-required-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", 
"source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3203"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:", "completion": " - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3204"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3205"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml", "completion": "This bash code Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "language": "bash", "type": "kubectl-command_explanation", "context": 
"Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3206"} {"prompt": "Complete this yaml code:\n\npods/pod-nginx-preferred-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3207"} {"prompt": "Explain this yaml code:\n\npods/pod-nginx-preferred-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3208"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n 
preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_completion_3209"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: disktype\n operator: In\n values:\n - ssd \n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes using Node Affinity | Apply the manifest to create a Pod that is scheduled onto your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3210"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx-preferred-affinity.yaml", "completion": "This bash code Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pods to Nodes using Node Affinity | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/", "id": "k8s_00118_code_explain_3211"} {"prompt": "Explain 
this text code:\n\nssh gke-test-default-pool-239f5d02-gyn2 \"sudo cat /sys/kernel/security/apparmor/profiles | sort\"", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3212"} {"prompt": "Complete this text code:\n\napparmor-test-deny-write (enforce)\napparmor-test-audit-write (enforce)", "completion": "docker-default (enforce)\nk8s-nginx (enforce)", "language": "text", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3213"} {"prompt": "Explain this text code:\n\napparmor-test-deny-write (enforce)\napparmor-test-audit-write (enforce)\ndocker-default (enforce)\nk8s-nginx (enforce)", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3214"} {"prompt": "Explain this text code:\n\nsecurityContext:\n appArmorProfile:\n type: <profile_type>", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | Where <profile_type> is one of:", "language": "text", "type": "code-example_explanation",
"context": "Restrict a Container's Access to Resources with AppArmor | Where <profile_type> is one of:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3215"} {"prompt": "Explain this bash code:\n\nkubectl exec <pod_name> -- cat /proc/1/attr/current", "completion": "This bash code Restrict a Container's Access to Resources with AppArmor | The output should look something like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The output should look something like this:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3216"} {"prompt": "Complete this cpp code:\n\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>\n", "completion": " file,\n\n # Deny all file writes.\n deny /** w,\n}", "language": "cpp", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled. For this example you can use SSH to install the profiles, but other approaches are discussed in Set", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3217"} {"prompt": "Explain this cpp code:\n\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>\n\n file,\n\n # Deny all file writes.\n deny /** w,\n}", "completion": "This cpp code Restrict a Container's Access to Resources with AppArmor | The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled. 
For this example you can use SSH to install the profiles, but other approaches are discussed in Set", "language": "cpp", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled. For this example you can use SSH to install the profiles, but other approaches are discussed in Set", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3218"} {"prompt": "Complete this cpp code:\n\n# This example assumes that node names match host names, and are reachable via SSH.\nNODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == \"Hostname\")].address}' ))\n\nfor NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>", "completion": "\n file,\n\n # Deny all file writes.\n deny /** w,\n}\nEOF'\ndone", "language": "cpp", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3219"} {"prompt": "Explain this cpp code:\n\n# This example assumes that node names match host names, and are reachable via SSH.\nNODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == \"Hostname\")].address}' ))\n\nfor NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>\n\n file,\n\n # Deny all file writes.\n deny /** w,\n}\nEOF'\ndone", "completion": "This cpp code Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "language": "cpp", "type": "code-example_explanation", "context": "Restrict a 
Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3220"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:", "completion": " appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' && sleep 1h\" ]", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3221"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' 
&& sleep 1h\" ]", "completion": "This yaml code Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3222"} {"prompt": "Explain this bash code:\n\nkubectl exec hello-apparmor -- cat /proc/1/attr/current", "completion": "This bash code Restrict a Container's Access to Resources with AppArmor | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The output should be:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3223"} {"prompt": "Explain this text code:\n\ntouch: /tmp/test: Permission denied\nerror: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | To wrap up, see what happens if you try to specify a profile that hasn't been loaded:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | To wrap up, see what happens if you try to specify a profile that hasn't been loaded:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3224"} {"prompt": "Complete this yaml code:\n\nkubectl create -f /dev/stdin <\nAnnotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write\nStatus: Pending\n... 
", "completion": "Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf\n Normal Pulled 8s kubelet Successfully pulled image \"busybox:1.28\" in 370.157088ms (370.172701ms including waiting)\n Normal Pulling 7s (x2 over 9s) kubelet Pulling image \"busybox:1.28\"\n Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write\n Normal Pulled 7s kubelet Successfully pulled image \"busybox:1.28\" in 90.980331ms (91.005869ms including waiting)", "language": "text", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | An Event provides the error message with the reason, the specific wording is runtime-dependent:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3227"} {"prompt": "Explain this text code:\n\nName: hello-apparmor-2\nNamespace: default\nNode: gke-test-default-pool-239f5d02-x1kf/10.128.0.27\nStart Time: Tue, 30 Aug 2016 17:58:56 -0700\nLabels: \nAnnotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write\nStatus: Pending\n... 
\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf\n Normal Pulled 8s kubelet Successfully pulled image \"busybox:1.28\" in 370.157088ms (370.172701ms including waiting)\n Normal Pulling 7s (x2 over 9s) kubelet Pulling image \"busybox:1.28\"\n Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write\n Normal Pulled 7s kubelet Successfully pulled image \"busybox:1.28\" in 90.980331ms (91.005869ms including waiting)", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | An Event provides the error message with the reason, the specific wording is runtime-dependent:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | An Event provides the error message with the reason, the specific wording is runtime-dependent:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3228"} {"prompt": "Explain this text code:\n\nWarning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | Kubernetes 1.34 does not provide any built-in mechanisms for loading AppArmor profiles onto Nodes. Profiles can be loaded through custom infrastructure or tools like the Kubernetes Security Profiles O", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | Kubernetes 1.34 does not provide any built-in mechanisms for loading AppArmor profiles onto Nodes. 
Profiles can be loaded through custom infrastructure or tools like the Kubernetes Security Profiles O", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3229"} {"prompt": "Explain this shell code:\n\nssh gke-test-default-pool-239f5d02-gyn2 \"sudo cat /sys/kernel/security/apparmor/profiles | sort\"", "completion": "This shell code Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3230"} {"prompt": "Complete this text code:\n\napparmor-test-deny-write (enforce)\napparmor-test-audit-write (enforce)", "completion": "docker-default (enforce)\nk8s-nginx (enforce)", "language": "text", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3231"} {"prompt": "Explain this text code:\n\napparmor-test-deny-write (enforce)\napparmor-test-audit-write (enforce)\ndocker-default (enforce)\nk8s-nginx (enforce)", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3232"} {"prompt": 
"Explain this yaml code:\n\nsecurityContext:\n appArmorProfile:\n type: <profile_type>", "completion": "This yaml code Restrict a Container's Access to Resources with AppArmor | Where <profile_type> is one of:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | Where <profile_type> is one of:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3233"} {"prompt": "Explain this shell code:\n\nkubectl exec <pod_name> -- cat /proc/1/attr/current", "completion": "This shell code Restrict a Container's Access to Resources with AppArmor | The output should look something like this:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The output should look something like this:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3234"} {"prompt": "Complete this cpp code:\n\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>\n", "completion": " file,\n\n # Deny all file writes.\n deny /** w,\n}", "language": "cpp", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled. For this example you can use SSH to install the profiles, but other approaches are discussed in Set", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3235"} {"prompt": "Explain this cpp code:\n\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>\n\n file,\n\n # Deny all file writes.\n deny /** w,\n}", "completion": "This cpp code Restrict a Container's Access to Resources with AppArmor | The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled. 
For this example you can use SSH to install the profiles, but other approaches are discussed in Set", "language": "cpp", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled. For this example you can use SSH to install the profiles, but other approaches are discussed in Set", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3236"} {"prompt": "Complete this shell code:\n\n# This example assumes that node names match host names, and are reachable via SSH.\nNODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == \"Hostname\")].address}' ))\n\nfor NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>", "completion": "\n file,\n\n # Deny all file writes.\n deny /** w,\n}\nEOF'\ndone", "language": "shell", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3237"} {"prompt": "Explain this shell code:\n\n# This example assumes that node names match host names, and are reachable via SSH.\nNODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == \"Hostname\")].address}' ))\n\nfor NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>\n\n file,\n\n # Deny all file writes.\n deny /** w,\n}\nEOF'\ndone", "completion": "This shell code Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "language": "shell", "type": "code-example_explanation", "context": 
"Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3238"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:", "completion": " appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' && sleep 1h\" ]", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3239"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' 
&& sleep 1h\" ]", "completion": "This yaml code Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3240"} {"prompt": "Explain this shell code:\n\nkubectl exec hello-apparmor -- cat /proc/1/attr/current", "completion": "This shell code Restrict a Container's Access to Resources with AppArmor | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The output should be:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3241"} {"prompt": "Explain this text code:\n\ntouch: /tmp/test: Permission denied\nerror: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | To wrap up, see what happens if you try to specify a profile that hasn't been loaded:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | To wrap up, see what happens if you try to specify a profile that hasn't been loaded:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3242"} {"prompt": "Complete this shell code:\n\nkubectl create -f /dev/stdin <\nAnnotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write\nStatus: Pending\n... 
", "completion": "Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf\n Normal Pulled 8s kubelet Successfully pulled image \"busybox:1.28\" in 370.157088ms (370.172701ms including waiting)\n Normal Pulling 7s (x2 over 9s) kubelet Pulling image \"busybox:1.28\"\n Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write\n Normal Pulled 7s kubelet Successfully pulled image \"busybox:1.28\" in 90.980331ms (91.005869ms including waiting)", "language": "text", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | An Event provides the error message with the reason, the specific wording is runtime-dependent:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3245"} {"prompt": "Explain this text code:\n\nName: hello-apparmor-2\nNamespace: default\nNode: gke-test-default-pool-239f5d02-x1kf/10.128.0.27\nStart Time: Tue, 30 Aug 2016 17:58:56 -0700\nLabels: <none>\nAnnotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write\nStatus: Pending\n... 
\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned default/hello-apparmor to gke-test-default-pool-239f5d02-x1kf\n Normal Pulled 8s kubelet Successfully pulled image \"busybox:1.28\" in 370.157088ms (370.172701ms including waiting)\n Normal Pulling 7s (x2 over 9s) kubelet Pulling image \"busybox:1.28\"\n Warning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found k8s-apparmor-example-allow-write\n Normal Pulled 7s kubelet Successfully pulled image \"busybox:1.28\" in 90.980331ms (91.005869ms including waiting)", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | An Event provides the error message with the reason, the specific wording is runtime-dependent:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | An Event provides the error message with the reason, the specific wording is runtime-dependent:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3246"} {"prompt": "Explain this text code:\n\nWarning Failed 7s (x2 over 8s) kubelet Error: failed to get container spec opts: failed to generate apparmor spec opts: apparmor profile not found", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | Kubernetes 1.34 does not provide any built-in mechanisms for loading AppArmor profiles onto Nodes. Profiles can be loaded through custom infrastructure or tools like the Kubernetes Security Profiles O", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | Kubernetes 1.34 does not provide any built-in mechanisms for loading AppArmor profiles onto Nodes. 
Profiles can be loaded through custom infrastructure or tools like the Kubernetes Security Profiles O", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3247"} {"prompt": "Explain this text code:\n\nssh gke-test-default-pool-239f5d02-gyn2 \"sudo cat /sys/kernel/security/apparmor/profiles | sort\"", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | For more details on loading profiles on nodes, see Setting up nodes with profiles.", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3248"} {"prompt": "Explain this text code:\n\nsecurityContext:\n appArmorProfile:\n type: <profile_type>", "completion": "This text code Restrict a Container's Access to Resources with AppArmor | Where <profile_type> is one of:", "language": "text", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | Where <profile_type> is one of:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3249"} {"prompt": "Explain this bash code:\n\nkubectl exec <pod_name> -- cat /proc/1/attr/current", "completion": "This bash code Restrict a Container's Access to Resources with AppArmor | The output should look something like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The output should look something like this:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3250"} {"prompt": "Complete this shell code:\n\n# This example assumes that node names match host names, and are reachable via SSH.\nNODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == 
\"Hostname\")].address}' ))\n\nfor NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>", "completion": "\n file,\n\n # Deny all file writes.\n deny /** w,\n}\nEOF'\ndone", "language": "shell", "type": "code-example", "context": "Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3251"} {"prompt": "Explain this shell code:\n\n# This example assumes that node names match host names, and are reachable via SSH.\nNODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == \"Hostname\")].address}' ))\n\nfor NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF\n#include <tunables/global>\n\nprofile k8s-apparmor-example-deny-write flags=(attach_disconnected) {\n #include <abstractions/base>\n\n file,\n\n # Deny all file writes.\n deny /** w,\n}\nEOF'\ndone", "completion": "This shell code Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "language": "shell", "type": "code-example_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | Next, run a simple \"Hello AppArmor\" Pod with the deny-write profile:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3252"} {"prompt": "Complete this yaml code:\n\npods/security/hello-apparmor.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:", "completion": " appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' 
&& sleep 1h\" ]", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3253"} {"prompt": "Explain this yaml code:\n\npods/security/hello-apparmor.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' && sleep 1h\" ]", "completion": "This yaml code Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3254"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:", "completion": " appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' 
&& sleep 1h\" ]", "language": "yaml", "type": "pod-definition", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_completion_3255"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: hello-apparmor\nspec:\n securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write\n containers:\n - name: hello\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"echo 'Hello AppArmor!' && sleep 1h\" ]", "completion": "This yaml code Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "language": "yaml", "type": "pod-definition_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | You can verify that the container is actually running with that profile by checking /proc/1/attr/current:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3256"} {"prompt": "Explain this bash code:\n\nkubectl exec hello-apparmor -- cat /proc/1/attr/current", "completion": "This bash code Restrict a Container's Access to Resources with AppArmor | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Restrict a Container's Access to Resources with AppArmor | The output should be:", "source": "https://kubernetes.io/docs/tutorials/security/apparmor/", "id": "k8s_00119_code_explain_3257"} {"prompt": "Complete this yaml code:\n\nkubectl create -f /dev/stdin <\"", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by 
default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3281"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"\"", "completion": "This yaml code 1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3282"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3283"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code 1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3284"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3285"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code 1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3286"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3287"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 1 - Volumes | When this property is 
recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3288"} {"prompt": "Complete this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "bash", "type": "kubectl-command", "context": "2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3289"} {"prompt": "Explain this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This bash code 2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes 
kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3290"} {"prompt": "Complete this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "language": "bash", "type": "kubectl-command", "context": "2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3291"} {"prompt": "Explain this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "completion": "This bash code 2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3292"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3293"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code 2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the 
volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3294"} {"prompt": "Complete this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "language": "bash", "type": "kubectl-command", "context": "2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3295"} {"prompt": "Explain this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "completion": "This bash code 2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3296"} {"prompt": "Complete this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "language": "text", "type": "code-example", "context": "2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3297"} {"prompt": "Explain this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: 
storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "completion": "This text code 2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3298"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3299"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code 2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3300"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3301"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code 2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3302"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": "parameters:\n resturl: 
\"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3303"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code 2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. 
A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3304"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3305"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code 2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3306"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3307"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code 2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3308"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": "yaml", "type": 
"pod-definition", "context": "2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3309"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code 2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3310"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3311"} {"prompt": "Explain this yaml 
code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3312"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3313"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from 
previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3314"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3315"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating 
this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3316"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3317"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3318"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Kubernetes supports custom volume 
populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3319"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3320"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3321"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3322"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3323"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: 
default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code 2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3324"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce", "completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3325"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": "This yaml code 2 - Persistent Volumes | 
If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3326"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3327"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3328"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3329"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3330"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3331"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3332"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3333"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3334"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3335"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: 
coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3336"} {"prompt": "Complete this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "text", "type": "code-example", "context": "3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3337"} {"prompt": "Explain this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess 
: NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This text code 3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "language": "text", "type": "code-example_explanation", "context": "3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3338"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3339"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code 4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3340"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3341"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3342"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3343"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3344"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3345"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3346"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3347"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3348"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3349"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3350"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: 
efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3351"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3352"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3353"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3354"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3355"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code 5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3356"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3357"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code 5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is op", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3358"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3359"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3360"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This bash code 5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "bash", "type": "kubectl-command_explanation", "context": "5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3361"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3362"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3363"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3364"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is 
actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3365"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3366"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3367"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n 
provisioned-throughput: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3368"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver\ndriverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "completion": "This yaml code 6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. 
The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3369"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: silver\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3370"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: silver\n …", "completion": "This yaml code 6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3371"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3372"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold\ndriverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "completion": "This yaml code 6 - Volume Attributes Classes | The end user can update the PVC with the new 
VolumeAttributesClass gold and apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3373"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: gold\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3374"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: gold\n …", "completion": "This yaml code 6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3375"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": 
"k8s_00120_code_completion_3376"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code 7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3377"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3378"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code 7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3379"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3380"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code 7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3381"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. 
This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3382"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code 8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3383"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3384"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code 8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. 
Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3385"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3386"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code 8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3387"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3388"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3389"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n  snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3390"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n  snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3391"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:", "completion": " name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3392"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3393"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass", "completion": " annotations:\n 
snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3394"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\n annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3395"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "language": "yaml", "type": "kubernetes-manifest", "context": "10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3396"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:\n - ReadWriteOnce\n 
storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "completion": "This yaml code 10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3397"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3398"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code 12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": 
"https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3399"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3400"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code 1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. 
Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3401"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3402"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3403"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3404"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code 1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3405"} {"prompt": "Explain this cel code:\n\n!has(object.spec.volumes) || !object.spec.volumes.exists(v, has(v.gitRepo))", "completion": "This cel code 1 - Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "language": "cel", "type": "code-example_explanation", "context": "1 - Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3406"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3407"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code 1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | Kubernetes 1.34 does not include a 
glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3408"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux", "completion": " containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3409"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3410"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows", "completion": " containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3411"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows\n containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3412"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa", "completion": " name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3413"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa\n name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "completion": "This yaml code 1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | An image volume source represents an OCI object (a 
container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3414"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3415"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code 1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3416"} {"prompt": "Explain this text code:\n\nspec.containers[*].volumeMounts.[subPath,subPathExpr]", "completion": "This text code 1 - Volumes | The following fields are available for the image type:", "language": "text", "type": "code-example_explanation", "context": "1 - Volumes | The following fields are available for the image type:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": 
"k8s_00120_code_explain_3417"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete", "completion": " storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3418"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code 1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3419"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3420"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code 1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3421"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: 
\"pxvol\"\n fsType: \"\"", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3422"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"\"", "completion": "This yaml code 1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3423"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | 
Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3424"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code 1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3425"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3426"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code 1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3427"} {"prompt": "Explain this text code:\n\n.spec.containers[].volumeMounts[].recursiveReadOnly", "completion": "This text code 1 - Volumes | The allowed values are:", "language": "text", "type": "code-example_explanation", "context": "1 - Volumes | The allowed values are:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3428"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3429"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", 
\"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3430"} {"prompt": "Explain this text code:\n\n.status.containerStatuses[].volumeMounts[].recursiveReadOnly", "completion": "This text code 1 - Volumes | The following container runtimes are known to support recursive read-only mounts.", "language": "text", "type": "code-example_explanation", "context": "1 - Volumes | The following container runtimes are known to support recursive read-only mounts.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3431"} {"prompt": "Complete this shell code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "shell", "type": "code-example", "context": "2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection 
too:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3432"} {"prompt": "Explain this shell code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This shell code 2 - Persistent Volumes | You can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection too:", "language": "shell", "type": "code-example_explanation", "context": "2 - Persistent Volumes | You can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3433"} {"prompt": "Complete this shell code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "language": "shell", "type": "code-example", "context": "2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3434"} {"prompt": "Explain this shell code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "completion": "This shell code 2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "shell", "type": "code-example_explanation", "context": "2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3435"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3436"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code 2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the 
volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3437"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code 2 - Persistent Volumes | The finalizer kubernetes.io/pv-controller(introduced in v1.31) is added to dynamically provisioned in-tree plugin volumes and skipped for statically provisioned in-tree plugin volumes.", "language": "text", "type": "code-example_explanation", "context": "2 - Persistent Volumes | The finalizer kubernetes.io/pv-controller(introduced in v1.31) is added to dynamically provisioned in-tree plugin volumes and skipped for statically provisioned in-tree plugin volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3438"} {"prompt": "Complete this shell code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "language": "shell", "type": "code-example", "context": "2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3439"} {"prompt": "Explain this shell code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "completion": "This shell code 2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "shell", "type": "code-example_explanation", "context": "2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3440"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code 2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3441"} {"prompt": "Complete this shell code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "language": "shell", "type": "code-example", "context": "2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": 
"https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3442"} {"prompt": "Explain this shell code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "completion": "This shell code 2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "shell", "type": "code-example_explanation", "context": "2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3443"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code 2 - Persistent Volumes | The finalizers ensure that the PV object is removed only after the volume is deleted from the storage backend provided the reclaim policy of the PV is Delete. 
This also ensures that the volume is dele", "language": "text", "type": "code-example_explanation", "context": "2 - Persistent Volumes | The finalizers ensure that the PV object is removed only after the volume is deleted from the storage backend provided the reclaim policy of the PV is Delete. This also ensures that the volume is dele", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3444"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3445"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code 2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3446"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3447"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code 2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3448"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": "parameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. 
This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3449"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code 2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3450"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3451"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code 2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3452"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3453"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: 
\"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code 2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3454"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": "yaml", "type": "pod-definition", "context": "2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3455"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code 2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one 
namespace.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3456"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3457"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3458"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n 
resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3459"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3460"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": 
"https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3461"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3462"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3463"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - 
ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3464"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3465"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Kubernetes supports custom volume populators. 
To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3466"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3467"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3468"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3469"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code 2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3470"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - 
ReadWriteOnce", "completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3471"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": "This yaml code 2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3472"} {"prompt": "Explain this text code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml", "completion": "This text code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "language": "text", "type": "code-example_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3473"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3474"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3475"} {"prompt": "Explain this text code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml", "completion": "This text code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "text", "type": "code-example_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3476"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3477"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3478"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3479"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3480"} {"prompt": "Explain this text code:\n\n--runtime-config=certificates.k8s.io/v1beta1/clustertrustbundles=true", "completion": "This text code 3 - Projected Volumes | The clusterTrustBundle projected volume source injects the contents of one or more ClusterTrustBundle objects as an automatically-updating file in the container filesystem.", "language": "text", "type": "code-example_explanation", "context": "3 - Projected Volumes | The clusterTrustBundle projected volume source injects the contents of one or more ClusterTrustBundle objects as an automatically-updating file in the container filesystem.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3481"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3482"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3483"} {"prompt": "Explain this text code:\n\n--runtime-config=certificates.k8s.io/v1alpha1/podcertificaterequests=true", "completion": "This text code 3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "language": "text", "type": "code-example_explanation", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3484"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3485"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n 
- image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3486"} {"prompt": "Complete this powershell code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "powershell", "type": "code-example", "context": "3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3487"} {"prompt": "Explain this powershell code:\n\nPS C:\\> Get-Acl 
C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This powershell code 3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "language": "powershell", "type": "code-example_explanation", "context": "3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3488"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3489"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code 4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3490"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3491"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3492"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3493"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3494"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3495"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3496"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3497"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is used", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3498"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3499"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3500"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: 
efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3501"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3502"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3503"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3504"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3505"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code 5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3506"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3507"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code 5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is optional", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3508"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3509"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3510"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This shell code 5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "shell", "type": "code-example_explanation", "context": "5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3511"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3512"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secret", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secret", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3513"} {"prompt": "Explain this text code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml", "completion": "This text code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "text", "type": "code-example_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3514"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3515"} {"prompt": "Explain this yaml code:\n\napiVersion: 
storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3516"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3517"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a 
StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3518"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3519"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver\ndriverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "completion": "This yaml code 6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. 
The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3520"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: silver\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3521"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: silver\n …", "completion": "This yaml code 6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3522"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3523"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold\ndriverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "completion": "This yaml code 6 - Volume Attributes Classes | The end user can update the PVC with the new 
VolumeAttributesClass gold and apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3524"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: gold\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3525"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: gold\n …", "completion": "This yaml code 6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3526"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": 
"k8s_00120_code_completion_3527"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code 7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3528"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3529"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code 7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotation", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3530"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3531"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code 7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3532"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. 
This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3533"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code 8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3534"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3535"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code 8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. 
Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3536"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3537"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code 8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snapshot", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snapshot", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3538"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3539"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3540"} {"prompt": "Explain this text code:\n\nsnapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"", "completion": "This text code 8 - Volume Snapshots | For pre-provisioned snapshots, spec.sourceVolumeMode needs to be populated by the cluster administrator.", "language": "text", "type": "code-example_explanation", "context": "8 - Volume Snapshots | For pre-provisioned snapshots, spec.sourceVolumeMode needs to be populated by the cluster administrator.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3541"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3542"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test
\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3543"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:", "completion": " name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3544"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "language": "yaml", "type":
"kubernetes-manifest_explanation", "context": "9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3545"} {"prompt": "Explain this text code:\n\nsnapshot.storage.kubernetes.io/is-default-class: \"true\"", "completion": "This text code 9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "language": "text", "type": "code-example_explanation", "context": "9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3546"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass", "completion": " annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3547"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\n annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3548"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "language": "yaml", "type": "kubernetes-manifest", "context": "10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3549"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "completion": "This yaml code 10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3550"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "12 - 
Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3551"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code 12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3552"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. 
Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3553"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code 1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3554"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3555"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3556"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3557"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code 1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3558"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3559"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - 
name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code 1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3560"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux", "completion": " containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3561"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3562"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows", "completion": " containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3563"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows\n containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3564"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa", "completion": " name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3565"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa\n name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "completion": "This yaml code 1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | An image volume source represents an OCI object (a 
container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3566"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3567"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code 1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3568"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml",
"type": "pod-definition", "context": "1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3569"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code 1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3570"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete", "completion": " storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3571"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code 1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3572"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3573"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code 1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3574"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: 
\"pxvol\"\n fsType: \"\"", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3575"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"\"", "completion": "This yaml code 1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3576"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | 
Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3577"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code 1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3578"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3579"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code 1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3580"} {"prompt": "Complete this yaml code:\n\nstorage/rro.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox", "completion": " args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3581"} {"prompt": "Explain this yaml code:\n\nstorage/rro.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 
1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3582"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3583"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # 
/mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3584"} {"prompt": "Complete this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "bash", "type": "kubectl-command", "context": "2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3585"} {"prompt": "Explain this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: \nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This bash code 2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - 
Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3586"} {"prompt": "Complete this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "language": "bash", "type": "kubectl-command", "context": "2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3587"} {"prompt": "Explain this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: \nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: ", "completion": "This bash code 2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3588"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3589"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code 2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the 
volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3590"} {"prompt": "Complete this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "language": "bash", "type": "kubectl-command", "context": "2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3591"} {"prompt": "Explain this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: \nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: \nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: ", "completion": "This bash code 2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3592"} {"prompt": "Complete this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "language": "text", "type": "code-example", "context": "2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3593"} {"prompt": "Explain this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: \nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: \nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: 
storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: ", "completion": "This text code 2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3594"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3595"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code 2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3596"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3597"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code 2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3598"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": "parameters:\n resturl: 
\"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3599"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code 2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. 
A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3600"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3601"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code 2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3602"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3603"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code 2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3604"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": "yaml", "type": 
"pod-definition", "context": "2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3605"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code 2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3606"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3607"} {"prompt": "Explain this yaml 
code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3608"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3609"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from 
previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3610"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3611"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code 2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Persistent Volumes | If a user requests a raw block volume by indicating 
this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3612"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3613"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3614"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Kubernetes supports custom volume 
populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3615"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3616"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3617"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3618"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3619"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: 
default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code 2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3620"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce", "completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3621"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": "This yaml code 2 - Persistent Volumes | 
If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3622"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:", "completion": " - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3623"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3624"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3625"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3626"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true", "completion": " volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3627"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3628"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3629"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3630"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-service-account-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:", "completion": " - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3631"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-service-account-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3632"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3633"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3634"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-clustertrustbundle.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true", "completion": " serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3635"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-clustertrustbundle.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3636"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3637"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3638"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-podcertificate.yaml\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian", "completion": " name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3639"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-podcertificate.yaml\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n 
namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3640"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume 
enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3641"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3642"} {"prompt": "Complete this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n 
BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "text", "type": "code-example", "context": "3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3643"} {"prompt": "Explain this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This text code 3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "language": "text", "type": "code-example_explanation", "context": "3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3644"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", 
\"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3645"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code 4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3646"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3647"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3648"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass-low-latency.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"", "completion": "provisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3649"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass-low-latency.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3650"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3651"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3652"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/pod-volume-binding.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage", "completion": " persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3653"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/pod-volume-binding.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3654"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3655"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3656"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-topology.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:", "completion": " type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3657"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-topology.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3658"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3659"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3660"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-aws-ebs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs", "completion": " type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3661"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-aws-ebs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3662"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3663"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3664"} {"prompt": "Complete this yaml 
code:\n\nstorage/storageclass/storageclass-aws-efs.yaml\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3665"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-aws-efs.yaml\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3666"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3667"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n 
fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3668"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-nfs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3669"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-nfs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3670"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3671"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3672"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3673"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code 5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3674"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3675"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code 5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3676"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-ceph-rbd.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3677"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-ceph-rbd.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3678"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3679"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3680"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This bash code 5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "bash", "type": "kubectl-command_explanation", "context": "5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3681"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-azure-file.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3682"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-azure-file.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3683"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3684"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3685"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3686"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3687"} {"prompt": "Complete this yaml 
code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3688"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3689"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-local.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass", "completion": "metadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": 
"kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3690"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-local.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3691"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", 
"id": "k8s_00120_code_completion_3692"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3693"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3694"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver\ndriverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "completion": "This yaml code 6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. 
The field driverName must be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3695"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: silver\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3696"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: silver\n …", "completion": "This yaml code 6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3697"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3698"} {"prompt": "Explain this yaml code:\n\napiVersion: 
storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold\ndriverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "completion": "This yaml code 6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3699"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: gold\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3700"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: gold\n …", "completion": "This yaml code 6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3701"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": 
"kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3702"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code 7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"slow\" which provisions standard disk-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3703"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3704"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code 7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim.
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3705"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3706"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code 7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. 
When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3707"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3708"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code 8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3709"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. 
In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3710"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code 8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3711"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3712"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code 8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3713"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3714"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3715"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3716"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3717"} {"prompt": "Complete this yaml
code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:", "completion": " name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3718"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3719"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass", "completion": " annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", 
"source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3720"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\n annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3721"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "language": "yaml", "type": "kubernetes-manifest", "context": "10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3722"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "completion": "This yaml code 10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3723"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_completion_3724"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code 12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/_print/", "id": "k8s_00120_code_explain_3725"} {"prompt": "Complete this text code:\n\nKubernetes master is running at https://192.0.2.1\nelasticsearch-logging is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy\nkibana-logging is running at 
https://192.0.2.1/api/v1/namespaces/kube-system/services/kibana-logging/proxy", "completion": "kube-dns is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/kube-dns/proxy\ngrafana is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy\nheapster is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy", "language": "text", "type": "code-example", "context": "Access Services Running on Clusters | This shows the proxy-verb URL for accessing each service. For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached at https://192.0.2.1/api/v1/namespaces", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_completion_3726"} {"prompt": "Explain this text code:\n\nKubernetes master is running at https://192.0.2.1\nelasticsearch-logging is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy\nkibana-logging is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/kibana-logging/proxy\nkube-dns is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/kube-dns/proxy\ngrafana is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy\nheapster is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy", "completion": "This text code Access Services Running on Clusters | This shows the proxy-verb URL for accessing each service. For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached at https://192.0.2.1/api/v1/namespaces", "language": "text", "type": "code-example_explanation", "context": "Access Services Running on Clusters | This shows the proxy-verb URL for accessing each service. 
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached at https://192.0.2.1/api/v1/namespaces", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3727"} {"prompt": "Explain this text code:\n\nhttp://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy", "completion": "This text code Access Services Running on Clusters | To access the Elasticsearch cluster health information _cluster/health?pretty=true, you would use:", "language": "text", "type": "code-example_explanation", "context": "Access Services Running on Clusters | To access the Elasticsearch cluster health information _cluster/health?pretty=true, you would use:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3728"} {"prompt": "Explain this text code:\n\nhttps://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true", "completion": "This text code Access Services Running on Clusters | The health information is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Access Services Running on Clusters | The health information is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3729"} {"prompt": "Complete this json code:\n\n{\n \"cluster_name\" : \"kubernetes_logging\",\n \"status\" : \"yellow\",\n \"timed_out\" : false,\n \"number_of_nodes\" : 1,\n \"number_of_data_nodes\" : 1,", "completion": " \"active_primary_shards\" : 5,\n \"active_shards\" : 5,\n \"relocating_shards\" : 0,\n \"initializing_shards\" : 0,\n \"unassigned_shards\" : 5\n}", "language": "json", "type": "api-response", "context": "Access Services Running on Clusters | To access the https Elasticsearch service health 
information _cluster/health?pretty=true, you would use:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_completion_3730"} {"prompt": "Explain this json code:\n\n{\n \"cluster_name\" : \"kubernetes_logging\",\n \"status\" : \"yellow\",\n \"timed_out\" : false,\n \"number_of_nodes\" : 1,\n \"number_of_data_nodes\" : 1,\n \"active_primary_shards\" : 5,\n \"active_shards\" : 5,\n \"relocating_shards\" : 0,\n \"initializing_shards\" : 0,\n \"unassigned_shards\" : 5\n}", "completion": "This json code Access Services Running on Clusters | To access the https Elasticsearch service health information _cluster/health?pretty=true, you would use:", "language": "json", "type": "api-response_explanation", "context": "Access Services Running on Clusters | To access the https Elasticsearch service health information _cluster/health?pretty=true, you would use:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3731"} {"prompt": "Explain this text code:\n\nhttps://192.0.2.1/api/v1/namespaces/kube-system/services/https:elasticsearch-logging:/proxy/_cluster/health?pretty=true", "completion": "This text code Access Services Running on Clusters | You may be able to put an apiserver proxy URL into the address bar of a browser. However:", "language": "text", "type": "code-example_explanation", "context": "Access Services Running on Clusters | You may be able to put an apiserver proxy URL into the address bar of a browser. 
However:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3732"} {"prompt": "Explain this text code:\n\nhttps://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/", "completion": "This text code Access Services Running on Clusters | As mentioned above, you use the kubectl cluster-info command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the servi", "language": "text", "type": "code-example_explanation", "context": "Access Services Running on Clusters | As mentioned above, you use the kubectl cluster-info command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the servi", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3735"} {"prompt": "Explain this text code:\n\nhttp://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/", "completion": "This text code Access Services Running on Clusters | As mentioned above, you use the kubectl cluster-info command to retrieve the service's proxy URL. 
To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the servi", "language": "text", "type": "code-example_explanation", "context": "Access Services Running on Clusters | As mentioned above, you use the kubectl cluster-info command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the servi", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3736"} {"prompt": "Explain this text code:\n\nhttp:///api/v1/namespaces//services//proxy", "completion": "This text code Access Services Running on Clusters | The supported formats for the segment of the URL are:", "language": "text", "type": "code-example_explanation", "context": "Access Services Running on Clusters | The supported formats for the segment of the URL are:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/", "id": "k8s_00121_code_explain_3737"} {"prompt": "Explain this text code:\n\n# Print the address of the control plane and cluster services\n kubectl cluster-info", "completion": "This text code kubectl cluster-info | help for cluster-info", "language": "text", "type": "code-example_explanation", "context": "kubectl cluster-info | help for cluster-info", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cluster-info/", "id": "k8s_00122_code_explain_3745"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP", "completion": " port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. 
In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/", "id": "k8s_00124_code_completion_3747"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "completion": "This yaml code Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/", "id": "k8s_00124_code_explain_3748"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\n# optional argument handling\nif [[ \"$1\" == \"version\" ]]\nthen\n echo \"1.0.0\"\n exit 0\nfi", "completion": "\n# optional argument handling\nif [[ \"$1\" == \"config\" ]]\nthen\n echo \"$KUBECONFIG\"\n exit 0\nfi\n\necho \"I am a plugin named kubectl-foo\"", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | To use a plugin, make the plugin executable:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3753"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# optional argument handling\nif [[ \"$1\" == \"version\" ]]\nthen\n echo \"1.0.0\"\n exit 0\nfi\n\n# optional argument handling\nif [[ \"$1\" == \"config\" ]]\nthen\n echo \"$KUBECONFIG\"\n exit 0\nfi\n\necho \"I am a plugin named kubectl-foo\"", "completion": "This text code Extend kubectl with plugins | To use a plugin, make the plugin executable:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | To use a plugin, make the plugin executable:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3754"} {"prompt": "Explain this text code:\n\nexport KUBECONFIG=~/.kube/config\nkubectl foo config", "completion": "This text code Extend kubectl with plugins | Additionally, the first argument that is passed to a plugin will always be the full path to the location where it was invoked ($0 would equal /usr/local/bin/kubectl-foo in the example above).", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | Additionally, the first argument that is passed to a plugin will always be the full path to the 
location where it was invoked ($0 would equal /usr/local/bin/kubectl-foo in the example above).", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3755"} {"prompt": "Complete this text code:\n\n# create a plugin\necho -e '#!/bin/bash\\n\\necho \"My first command-line argument was $1\"' > kubectl-foo-bar-baz\nsudo chmod +x ./kubectl-foo-bar-baz\n", "completion": "# \"install\" your plugin by moving it to a directory in your $PATH\nsudo mv ./kubectl-foo-bar-baz /usr/local/bin\n\n# check that kubectl recognizes your plugin\nkubectl plugin list", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3756"} {"prompt": "Explain this text code:\n\n# create a plugin\necho -e '#!/bin/bash\\n\\necho \"My first command-line argument was $1\"' > kubectl-foo-bar-baz\nsudo chmod +x ./kubectl-foo-bar-baz\n\n# \"install\" your plugin by moving it to a directory in your $PATH\nsudo mv ./kubectl-foo-bar-baz /usr/local/bin\n\n# check that kubectl recognizes your plugin\nkubectl plugin list", "completion": "This text code Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": 
"k8s_00126_code_explain_3757"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-foo-bar-baz", "completion": "This text code Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3758"} {"prompt": "Complete this text code:\n\n# test that calling your plugin via a \"kubectl\" command works\n# even when additional arguments and flags are passed to your", "completion": "# plugin executable by the user.\nkubectl foo bar baz arg1 --meaningless-flag=true", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3759"} {"prompt": "Explain this text code:\n\n# test that calling your plugin via a \"kubectl\" command works\n# even when additional arguments and flags are passed to your\n# plugin executable by the user.\nkubectl foo bar baz arg1 --meaningless-flag=true", "completion": "This text code Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "language": "text", "type": 
"code-example_explanation", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3760"} {"prompt": "Complete this text code:\n\n# create a plugin containing an underscore in its filename\necho -e '#!/bin/bash\\n\\necho \"I am a plugin with a dash in my name\"' > ./kubectl-foo_bar\nsudo chmod +x ./kubectl-foo_bar\n", "completion": "# move the plugin into your $PATH\nsudo mv ./kubectl-foo_bar /usr/local/bin\n\n# You can now invoke your plugin via kubectl:\nkubectl foo-bar", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. The command from the above example, can be invoked using either a dash", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3761"} {"prompt": "Explain this text code:\n\n# create a plugin containing an underscore in its filename\necho -e '#!/bin/bash\\n\\necho \"I am a plugin with a dash in my name\"' > ./kubectl-foo_bar\nsudo chmod +x ./kubectl-foo_bar\n\n# move the plugin into your $PATH\nsudo mv ./kubectl-foo_bar /usr/local/bin\n\n# You can now invoke your plugin via kubectl:\nkubectl foo-bar", "completion": "This text code Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. 
The command from the above example, can be invoked using either a dash", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. The command from the above example, can be invoked using either a dash", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3762"} {"prompt": "Explain this text code:\n\n# You can invoke your custom command with a dash\nkubectl foo-bar", "completion": "This text code Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3763"} {"prompt": "Explain this text code:\n\n# You can also invoke your custom command with an underscore\nkubectl foo_bar", "completion": "This text code Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. 
For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3764"} {"prompt": "Explain this text code:\n\nPATH=/usr/local/bin/plugins:/usr/local/bin/moreplugins kubectl plugin list", "completion": "This text code Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3765"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/plugins/kubectl-foo", "completion": "/usr/local/bin/moreplugins/kubectl-foo\n - warning: /usr/local/bin/moreplugins/kubectl-foo is overshadowed by a similarly named plugin: /usr/local/bin/plugins/kubectl-foo\n\nerror: one plugin warning was found", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. 
Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3766"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/plugins/kubectl-foo\n/usr/local/bin/moreplugins/kubectl-foo\n - warning: /usr/local/bin/moreplugins/kubectl-foo is overshadowed by a similarly named plugin: /usr/local/bin/plugins/kubectl-foo\n\nerror: one plugin warning was found", "completion": "This text code Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. 
Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3767"} {"prompt": "Explain this text code:\n\n# for a given kubectl command, the plugin with the longest possible filename will always be preferred\nkubectl foo bar baz", "completion": "This text code Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3768"} {"prompt": "Explain this text code:\n\nPlugin kubectl-foo-bar-baz is executed, with \"buz\" as its first argument", "completion": "This text code Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3769"} {"prompt": "Explain this text code:\n\nPlugin kubectl-foo-bar is executed, with \"buz\" as its first argument", "completion": "This text code Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across 
multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3770"} {"prompt": "Explain this bash code:\n\nkubectl-parent\nkubectl-parent-subcommand\nkubectl-parent-subcommand-subsubcommand", "completion": "This bash code Extend kubectl with plugins | You can use the aforementioned kubectl plugin list command to ensure that your plugin is visible by kubectl, and verify that there are no warnings preventing it from being called as a kubectl command.", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend kubectl with plugins | You can use the aforementioned kubectl plugin list command to ensure that your plugin is visible by kubectl, and verify that there are no warnings preventing it from being called as a kubectl command.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3771"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n\ntest/fixtures/pkg/kubectl/plugins/kubectl-foo\n/usr/local/bin/kubectl-foo", "completion": " - warning: /usr/local/bin/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo\nplugins/kubectl-invalid\n - warning: plugins/kubectl-invalid identified as a kubectl plugin, but it is not executable\n\nerror: 2 plugin warnings were found", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | If you're writing a plugin for kubectl and you're using Go, you can make use of the cli-runtime utility libraries.", 
"source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3772"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\ntest/fixtures/pkg/kubectl/plugins/kubectl-foo\n/usr/local/bin/kubectl-foo\n - warning: /usr/local/bin/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo\nplugins/kubectl-invalid\n - warning: plugins/kubectl-invalid identified as a kubectl plugin, but it is not executable\n\nerror: 2 plugin warnings were found", "completion": "This text code Extend kubectl with plugins | If you're writing a plugin for kubectl and you're using Go, you can make use of the cli-runtime utility libraries.", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | If you're writing a plugin for kubectl and you're using Go, you can make use of the cli-runtime utility libraries.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3773"} {"prompt": "Complete this bash code:\n\n#!/bin/bash\n\n# optional argument handling\nif [[ \"$1\" == \"version\" ]]\nthen\n echo \"1.0.0\"\n exit 0\nfi", "completion": "\n# optional argument handling\nif [[ \"$1\" == \"config\" ]]\nthen\n echo \"$KUBECONFIG\"\n exit 0\nfi\n\necho \"I am a plugin named kubectl-foo\"", "language": "bash", "type": "shell-command", "context": "Extend kubectl with plugins | To use a plugin, make the plugin executable:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3774"} {"prompt": "Explain this bash code:\n\n#!/bin/bash\n\n# optional argument handling\nif [[ \"$1\" == \"version\" ]]\nthen\n echo \"1.0.0\"\n exit 0\nfi\n\n# optional argument handling\nif [[ \"$1\" == \"config\" ]]\nthen\n echo \"$KUBECONFIG\"\n exit 0\nfi\n\necho \"I am a plugin named kubectl-foo\"", "completion": "This bash code Extend 
kubectl with plugins | To use a plugin, make the plugin executable:", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | To use a plugin, make the plugin executable:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3775"} {"prompt": "Explain this bash code:\n\nexport KUBECONFIG=~/.kube/config\nkubectl foo config", "completion": "This bash code Extend kubectl with plugins | Additionally, the first argument that is passed to a plugin will always be the full path to the location where it was invoked ($0 would equal /usr/local/bin/kubectl-foo in the example above).", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | Additionally, the first argument that is passed to a plugin will always be the full path to the location where it was invoked ($0 would equal /usr/local/bin/kubectl-foo in the example above).", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3776"} {"prompt": "Complete this bash code:\n\n# create a plugin\necho -e '#!/bin/bash\\n\\necho \"My first command-line argument was $1\"' > kubectl-foo-bar-baz\nsudo chmod +x ./kubectl-foo-bar-baz\n", "completion": "# \"install\" your plugin by moving it to a directory in your $PATH\nsudo mv ./kubectl-foo-bar-baz /usr/local/bin\n\n# check that kubectl recognizes your plugin\nkubectl plugin list", "language": "bash", "type": "shell-command", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3777"} {"prompt": "Explain this bash code:\n\n# create a plugin\necho -e '#!/bin/bash\\n\\necho \"My first command-line argument was 
$1\"' > kubectl-foo-bar-baz\nsudo chmod +x ./kubectl-foo-bar-baz\n\n# \"install\" your plugin by moving it to a directory in your $PATH\nsudo mv ./kubectl-foo-bar-baz /usr/local/bin\n\n# check that kubectl recognizes your plugin\nkubectl plugin list", "completion": "This bash code Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3778"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-foo-bar-baz", "completion": "This text code Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3779"} {"prompt": "Complete this text code:\n\n# test that calling your plugin via a \"kubectl\" command works\n# even when additional arguments and flags are passed to your", "completion": "# plugin executable by the user.\nkubectl foo bar baz arg1 --meaningless-flag=true", "language": "text", "type": "code-example", 
"context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3780"} {"prompt": "Explain this text code:\n\n# test that calling your plugin via a \"kubectl\" command works\n# even when additional arguments and flags are passed to your\n# plugin executable by the user.\nkubectl foo bar baz arg1 --meaningless-flag=true", "completion": "This text code Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3781"} {"prompt": "Complete this bash code:\n\n# create a plugin containing an underscore in its filename\necho -e '#!/bin/bash\\n\\necho \"I am a plugin with a dash in my name\"' > ./kubectl-foo_bar\nsudo chmod +x ./kubectl-foo_bar\n", "completion": "# move the plugin into your $PATH\nsudo mv ./kubectl-foo_bar /usr/local/bin\n\n# You can now invoke your plugin via kubectl:\nkubectl foo-bar", "language": "bash", "type": "shell-command", "context": "Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. 
The command from the above example, can be invoked using either a dash", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3782"} {"prompt": "Explain this bash code:\n\n# create a plugin containing an underscore in its filename\necho -e '#!/bin/bash\\n\\necho \"I am a plugin with a dash in my name\"' > ./kubectl-foo_bar\nsudo chmod +x ./kubectl-foo_bar\n\n# move the plugin into your $PATH\nsudo mv ./kubectl-foo_bar /usr/local/bin\n\n# You can now invoke your plugin via kubectl:\nkubectl foo-bar", "completion": "This bash code Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. The command from the above example, can be invoked using either a dash", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. The command from the above example, can be invoked using either a dash", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3783"} {"prompt": "Explain this bash code:\n\n# You can invoke your custom command with a dash\nkubectl foo-bar", "completion": "This bash code Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. 
For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3784"} {"prompt": "Explain this bash code:\n\n# You can also invoke your custom command with an underscore\nkubectl foo_bar", "completion": "This bash code Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3785"} {"prompt": "Explain this text code:\n\nPATH=/usr/local/bin/plugins:/usr/local/bin/moreplugins", "completion": "This text code Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. 
Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3786"} {"prompt": "Explain this bash code:\n\nPATH=/usr/local/bin/plugins:/usr/local/bin/moreplugins kubectl plugin list", "completion": "This bash code Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3787"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/plugins/kubectl-foo", "completion": "/usr/local/bin/moreplugins/kubectl-foo\n - warning: /usr/local/bin/moreplugins/kubectl-foo is overshadowed by a similarly named plugin: /usr/local/bin/plugins/kubectl-foo\n\nerror: one plugin warning was found", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. 
Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3788"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/plugins/kubectl-foo\n/usr/local/bin/moreplugins/kubectl-foo\n - warning: /usr/local/bin/moreplugins/kubectl-foo is overshadowed by a similarly named plugin: /usr/local/bin/plugins/kubectl-foo\n\nerror: one plugin warning was found", "completion": "This text code Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. 
Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3789"} {"prompt": "Explain this bash code:\n\n# for a given kubectl command, the plugin with the longest possible filename will always be preferred\nkubectl foo bar baz", "completion": "This bash code Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "language": "bash", "type": "shell-command_explanation", "context": "Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3790"} {"prompt": "Explain this text code:\n\nPlugin kubectl-foo-bar-baz is executed, with \"buz\" as its first argument", "completion": "This text code Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3791"} {"prompt": "Explain this text code:\n\nPlugin kubectl-foo-bar is executed, with \"buz\" as its first argument", "completion": "This text code Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across 
multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3792"} {"prompt": "Explain this bash code:\n\nkubectl-parent\nkubectl-parent-subcommand\nkubectl-parent-subcommand-subsubcommand", "completion": "This bash code Extend kubectl with plugins | You can use the aforementioned kubectl plugin list command to ensure that your plugin is visible by kubectl, and verify that there are no warnings preventing it from being called as a kubectl command.", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend kubectl with plugins | You can use the aforementioned kubectl plugin list command to ensure that your plugin is visible by kubectl, and verify that there are no warnings preventing it from being called as a kubectl command.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3793"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n\ntest/fixtures/pkg/kubectl/plugins/kubectl-foo\n/usr/local/bin/kubectl-foo", "completion": " - warning: /usr/local/bin/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo\nplugins/kubectl-invalid\n - warning: plugins/kubectl-invalid identified as a kubectl plugin, but it is not executable\n\nerror: 2 plugin warnings were found", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | If you're writing a plugin for kubectl and you're using Go, you can make use of the cli-runtime utility libraries.", 
"source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3794"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\ntest/fixtures/pkg/kubectl/plugins/kubectl-foo\n/usr/local/bin/kubectl-foo\n - warning: /usr/local/bin/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo\nplugins/kubectl-invalid\n - warning: plugins/kubectl-invalid identified as a kubectl plugin, but it is not executable\n\nerror: 2 plugin warnings were found", "completion": "This text code Extend kubectl with plugins | If you're writing a plugin for kubectl and you're using Go, you can make use of the cli-runtime utility libraries.", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | If you're writing a plugin for kubectl and you're using Go, you can make use of the cli-runtime utility libraries.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3795"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\n# optional argument handling\nif [[ \"$1\" == \"version\" ]]\nthen\n echo \"1.0.0\"\n exit 0\nfi", "completion": "\n# optional argument handling\nif [[ \"$1\" == \"config\" ]]\nthen\n echo \"$KUBECONFIG\"\n exit 0\nfi\n\necho \"I am a plugin named kubectl-foo\"", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | To use a plugin, make the plugin executable:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3796"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# optional argument handling\nif [[ \"$1\" == \"version\" ]]\nthen\n echo \"1.0.0\"\n exit 0\nfi\n\n# optional argument handling\nif [[ \"$1\" == \"config\" ]]\nthen\n echo \"$KUBECONFIG\"\n exit 0\nfi\n\necho \"I am a plugin named kubectl-foo\"", "completion": "This text code Extend 
kubectl with plugins | To use a plugin, make the plugin executable:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | To use a plugin, make the plugin executable:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3797"} {"prompt": "Explain this text code:\n\nexport KUBECONFIG=~/.kube/config\nkubectl foo config", "completion": "This text code Extend kubectl with plugins | Additionally, the first argument that is passed to a plugin will always be the full path to the location where it was invoked ($0 would equal /usr/local/bin/kubectl-foo in the example above).", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | Additionally, the first argument that is passed to a plugin will always be the full path to the location where it was invoked ($0 would equal /usr/local/bin/kubectl-foo in the example above).", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3798"} {"prompt": "Complete this text code:\n\n# create a plugin\necho -e '#!/bin/bash\\n\\necho \"My first command-line argument was $1\"' > kubectl-foo-bar-baz\nsudo chmod +x ./kubectl-foo-bar-baz\n", "completion": "# \"install\" your plugin by moving it to a directory in your $PATH\nsudo mv ./kubectl-foo-bar-baz /usr/local/bin\n\n# check that kubectl recognizes your plugin\nkubectl plugin list", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3799"} {"prompt": "Explain this text code:\n\n# create a plugin\necho -e '#!/bin/bash\\n\\necho \"My first command-line argument was 
$1\"' > kubectl-foo-bar-baz\nsudo chmod +x ./kubectl-foo-bar-baz\n\n# \"install\" your plugin by moving it to a directory in your $PATH\nsudo mv ./kubectl-foo-bar-baz /usr/local/bin\n\n# check that kubectl recognizes your plugin\nkubectl plugin list", "completion": "This text code Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | As you can see, your plugin was found based on the kubectl command specified by a user, and all extra arguments and flags were passed as-is to the plugin executable once it was found.", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3800"} {"prompt": "Complete this text code:\n\n# create a plugin containing an underscore in its filename\necho -e '#!/bin/bash\\n\\necho \"I am a plugin with a dash in my name\"' > ./kubectl-foo_bar\nsudo chmod +x ./kubectl-foo_bar\n", "completion": "# move the plugin into your $PATH\nsudo mv ./kubectl-foo_bar /usr/local/bin\n\n# You can now invoke your plugin via kubectl:\nkubectl foo-bar", "language": "text", "type": "code-example", "context": "Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. 
The command from the above example, can be invoked using either a dash", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_completion_3801"} {"prompt": "Explain this text code:\n\n# create a plugin containing an underscore in its filename\necho -e '#!/bin/bash\\n\\necho \"I am a plugin with a dash in my name\"' > ./kubectl-foo_bar\nsudo chmod +x ./kubectl-foo_bar\n\n# move the plugin into your $PATH\nsudo mv ./kubectl-foo_bar /usr/local/bin\n\n# You can now invoke your plugin via kubectl:\nkubectl foo-bar", "completion": "This text code Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. The command from the above example, can be invoked using either a dash", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | Note that the introduction of underscores to a plugin filename does not prevent you from having commands such as kubectl foo_bar. The command from the above example, can be invoked using either a dash", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3802"} {"prompt": "Explain this text code:\n\n# You can invoke your custom command with a dash\nkubectl foo-bar", "completion": "This text code Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. 
For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3803"} {"prompt": "Explain this text code:\n\n# You can also invoke your custom command with an underscore\nkubectl foo_bar", "completion": "This text code Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | It is possible to have multiple plugins with the same filename in different locations throughout your PATH. For example, given a PATH with the following value: PATH=/usr/local/bin/plugins:/usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3804"} {"prompt": "Explain this text code:\n\nPATH=/usr/local/bin/plugins:/usr/local/bin/moreplugins kubectl plugin list", "completion": "This text code Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. Instead, the executable that appears first in your PATH, /usr/local/b", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | In the above scenario, the warning under /usr/local/bin/moreplugins/kubectl-foo tells you that this plugin will never be executed. 
Instead, the executable that appears first in your PATH, /usr/local/b", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3805"} {"prompt": "Explain this text code:\n\n# for a given kubectl command, the plugin with the longest possible filename will always be preferred\nkubectl foo bar baz", "completion": "This text code Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "language": "text", "type": "code-example_explanation", "context": "Extend kubectl with plugins | This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a \"parent\" plugin command:", "source": "https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/", "id": "k8s_00126_code_explain_3806"} {"prompt": "Complete this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets", "completion": " providers:\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Make sure to enable automatic reload of encryption configuration file by setting --encryption-provider-config-automatic-reload to true.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3807"} {"prompt": "Explain this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets\n providers:\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Make sure to enable automatic 
reload of encryption configuration file by setting --encryption-provider-config-automatic-reload to true.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Make sure to enable automatic reload of encryption configuration file by setting --encryption-provider-config-automatic-reload to true.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3808"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret", "completion": "This bash code Migrate Kubernetes Objects Using Storage Version Migration | Verify the serialized data for that Secret object is prefixed with k8s:enc:aescbc:v1:key1.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify the serialized data for that Secret object is prefixed with k8s:enc:aescbc:v1:key1.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3809"} {"prompt": "Complete this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets\n providers:\n - aescbc:", "completion": " keys:\n - name: key2\n secret: c2VjcmV0IGlzIHNlY3VyZSwgaXMgaXQ/\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | To ensure that previously created secret my-secret is re-encrypted with new key key2, you will use Storage Version Migration.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3810"} {"prompt": "Explain this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: 
apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets\n providers:\n - aescbc:\n keys:\n - name: key2\n secret: c2VjcmV0IGlzIHNlY3VyZSwgaXMgaXQ/\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | To ensure that previously created secret my-secret is re-encrypted with new key key2, you will use Storage Version Migration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | To ensure that previously created secret my-secret is re-encrypted with new key key2, you will use Storage Version Migration.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3811"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration", "completion": "spec:\n resource:\n group: \"\"\n version: v1\n resource: secrets", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3812"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration\nspec:\n resource:\n group: \"\"\n version: v1\n resource: secrets", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3813"} {"prompt": "Explain this bash code:\n\nkubectl get storageversionmigration.storagemigration.k8s.io/secrets-migration -o yaml", "completion": "This bash code Migrate Kubernetes Objects Using Storage Version Migration | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3814"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration\n uid: 628f6922-a9cb-4514-b076-12d3c178967c\n resourceVersion: \"90\"\n creationTimestamp: \"2024-03-12T20:29:45Z\"\nspec:\n resource:\n group: \"\"\n version: v1", "completion": " resource: secrets\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"84\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify the stored secret is now prefixed with k8s:enc:aescbc:v1:key2.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3815"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration\n uid: 628f6922-a9cb-4514-b076-12d3c178967c\n resourceVersion: \"90\"\n creationTimestamp: \"2024-03-12T20:29:45Z\"\nspec:\n resource:\n group: \"\"\n version: v1\n resource: 
secrets\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"84\"", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Verify the stored secret is now prefixed with k8s:enc:aescbc:v1:key2.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify the stored secret is now prefixed with k8s:enc:aescbc:v1:key2.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3816"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v1\n served: true", "completion": " storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3817"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: 
SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create CRD using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3818"} {"prompt": "Complete this yaml code:\n\napiVersion: stable.example.com/v1\nkind: SelfieRequest", "completion": "metadata:\n name: cr1\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3819"} {"prompt": "Explain this yaml code:\n\napiVersion: stable.example.com/v1\nkind: SelfieRequest\nmetadata:\n name: cr1\n namespace: default", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3820"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/testcrds/default/cr1 [...] 
| hexdump -C", "completion": "This text code Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3821"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\nname: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v2\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:", "completion": " type: string\n port:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Update CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3822"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\nname: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v2\n served: true\n 
storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Update CRD using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Update CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3823"} {"prompt": "Complete this yaml code:\n\napiVersion: stable.example.com/v2\nkind: SelfieRequest", "completion": "metadata:\n name: cr2\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3824"} {"prompt": "Explain this yaml code:\n\napiVersion: stable.example.com/v2\nkind: SelfieRequest\nmetadata:\n name: cr2\n namespace: default", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3825"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/testcrds/default/cr2 [...] 
| hexdump -C", "completion": "This text code Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3826"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm", "completion": "spec:\n resource:\n group: stable.example.com\n version: v1\n resource: SelfieRequest", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3827"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\nspec:\n resource:\n group: stable.example.com\n version: v1\n resource: SelfieRequest", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3828"} {"prompt": "Explain this bash code:\n\nkubectl get storageversionmigration.storagemigration.k8s.io/crdsvm -o yaml", "completion": "This bash code Migrate Kubernetes Objects Using Storage Version Migration | The output is 
similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3829"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\n uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8\n resourceVersion: \"111\"\n creationTimestamp: \"2024-03-12T22:40:01Z\"\nspec:\n resource:\n group: stable.example.com\n version: v1", "completion": " resource: testcrds\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"106\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written and stored as v2 by getting the object from etcd.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3830"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\n uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8\n resourceVersion: \"111\"\n creationTimestamp: \"2024-03-12T22:40:01Z\"\nspec:\n resource:\n group: stable.example.com\n version: v1\n resource: testcrds\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"106\"", "completion": "This 
yaml code Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written and stored as v2 by getting the object from etcd.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written and stored as v2 by getting the object from etcd.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3831"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/testcrds/default/cr1 [...] | hexdump -C", "completion": "This text code Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | where [...] 
contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3832"} {"prompt": "Complete 
this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\n uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8\n resourceVersion: \"111\"\n creationTimestamp: \"2024-03-12T22:40:01Z\"\nspec:\n resource:\n group: stable.example.com\n version: v1", "completion": " resource: testcrds\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"106\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written and stored as v2 by getting the object from etcd.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3856"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\n uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8\n resourceVersion: \"111\"\n creationTimestamp: \"2024-03-12T22:40:01Z\"\nspec:\n resource:\n group: stable.example.com\n version: v1\n resource: testcrds\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"106\"", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written and stored as v2 by getting the object from etcd.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously 
created cr1 is now written and stored as v2 by getting the object from etcd.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3857"} {"prompt": "Explain this shell code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/testcrds/default/cr1 [...] | hexdump -C", "completion": "This shell code Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "language": "shell", "type": "code-example_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3858"} {"prompt": "Complete this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets", "completion": " providers:\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Make sure to enable automatic reload of encryption configuration file by setting --encryption-provider-config-automatic-reload to true.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3859"} {"prompt": "Explain this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets\n providers:\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Make sure to enable automatic reload of encryption configuration file by setting --encryption-provider-config-automatic-reload to true.", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Make sure to enable automatic reload of encryption configuration file by setting --encryption-provider-config-automatic-reload to true.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3860"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret", "completion": "This bash code Migrate Kubernetes Objects Using Storage Version Migration | Verify the serialized data for that Secret object is prefixed with k8s:enc:aescbc:v1:key1.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify the serialized data for that Secret object is prefixed with k8s:enc:aescbc:v1:key1.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3861"} {"prompt": "Complete this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets\n providers:\n - aescbc:", "completion": " keys:\n - name: key2\n secret: c2VjcmV0IGlzIHNlY3VyZSwgaXMgaXQ/\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | To ensure that previously created secret my-secret is re-encrypted with new key key2, you will use Storage Version Migration.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3862"} {"prompt": "Explain this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - secrets\n providers:\n - aescbc:\n keys:\n - name: key2\n secret: 
c2VjcmV0IGlzIHNlY3VyZSwgaXMgaXQ/\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | To ensure that previously created secret my-secret is re-encrypted with new key key2, you will use Storage Version Migration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | To ensure that previously created secret my-secret is re-encrypted with new key key2, you will use Storage Version Migration.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3863"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration", "completion": "spec:\n resource:\n group: \"\"\n version: v1\n resource: secrets", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3864"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration\nspec:\n resource:\n group: \"\"\n version: v1\n resource: secrets", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3865"} {"prompt": "Explain this bash code:\n\nkubectl get 
storageversionmigration.storagemigration.k8s.io/secrets-migration -o yaml", "completion": "This bash code Migrate Kubernetes Objects Using Storage Version Migration | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3866"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration\n uid: 628f6922-a9cb-4514-b076-12d3c178967c\n resourceVersion: \"90\"\n creationTimestamp: \"2024-03-12T20:29:45Z\"\nspec:\n resource:\n group: \"\"\n version: v1", "completion": " resource: secrets\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"84\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify the stored secret is now prefixed with k8s:enc:aescbc:v1:key2.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3867"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: secrets-migration\n uid: 628f6922-a9cb-4514-b076-12d3c178967c\n resourceVersion: \"90\"\n creationTimestamp: \"2024-03-12T20:29:45Z\"\nspec:\n resource:\n group: \"\"\n version: v1\n resource: secrets\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: 
\"True\"\n lastUpdateTime: \"2024-03-12T20:29:46Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"84\"", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Verify the stored secret is now prefixed with k8s:enc:aescbc:v1:key2.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify the stored secret is now prefixed with k8s:enc:aescbc:v1:key2.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3868"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v1\n served: true", "completion": " storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3869"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n 
properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create CRD using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3870"} {"prompt": "Complete this yaml code:\n\napiVersion: stable.example.com/v1\nkind: SelfieRequest", "completion": "metadata:\n name: cr1\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3871"} {"prompt": "Explain this yaml code:\n\napiVersion: stable.example.com/v1\nkind: SelfieRequest\nmetadata:\n name: cr1\n namespace: default", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3872"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/testcrds/default/cr1 [...] | hexdump -C", "completion": "This text code Migrate Kubernetes Objects Using Storage Version Migration | where [...] 
contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3873"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v2\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:", "completion": " type: string\n port:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Update CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3874"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: selfierequests.stable.example.com\nspec:\n group: stable.example.com\n names:\n plural: SelfieRequests\n singular: SelfieRequest\n kind: SelfieRequest\n listKind: SelfieRequestList\n scope: Namespaced\n versions:\n - name: v2\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - 
name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://127.0.0.1:9443/crdconvert\"\n caBundle: \n conversionReviewVersions:\n - v1\n - v2", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Update CRD using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Update CRD using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3875"} {"prompt": "Complete this yaml code:\n\napiVersion: stable.example.com/v2\nkind: SelfieRequest", "completion": "metadata:\n name: cr2\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3876"} {"prompt": "Explain this yaml code:\n\napiVersion: stable.example.com/v2\nkind: SelfieRequest\nmetadata:\n name: cr2\n namespace: default", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create CR using kubectl:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3877"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/testcrds/default/cr2 [...] | hexdump -C", "completion": "This text code Migrate Kubernetes Objects Using Storage Version Migration | where [...] 
contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3878"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm", "completion": "spec:\n resource:\n group: stable.example.com\n version: v1\n resource: SelfieRequest", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3879"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\nspec:\n resource:\n group: stable.example.com\n version: v1\n resource: SelfieRequest", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Create the object using kubectl as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3880"} {"prompt": "Explain this bash code:\n\nkubectl get storageversionmigration.storagemigration.k8s.io/crdsvm -o yaml", "completion": "This bash code Migrate Kubernetes Objects Using Storage Version Migration | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Kubernetes Objects Using 
Storage Version Migration | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3881"} {"prompt": "Complete this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\n uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8\n resourceVersion: \"111\"\n creationTimestamp: \"2024-03-12T22:40:01Z\"\nspec:\n resource:\n group: stable.example.com\n version: v1", "completion": " resource: testcrds\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"106\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written and stored as v2 by getting the object from etcd.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_completion_3882"} {"prompt": "Explain this yaml code:\n\nkind: StorageVersionMigration\napiVersion: storagemigration.k8s.io/v1alpha1\nmetadata:\n name: crdsvm\n uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8\n resourceVersion: \"111\"\n creationTimestamp: \"2024-03-12T22:40:01Z\"\nspec:\n resource:\n group: stable.example.com\n version: v1\n resource: testcrds\nstatus:\n conditions:\n - type: Running\n status: \"False\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationInProgress\n - type: Succeeded\n status: \"True\"\n lastUpdateTime: \"2024-03-12T22:40:03Z\"\n reason: StorageVersionMigrationSucceeded\n resourceVersion: \"106\"", "completion": "This yaml code Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written 
and stored as v2 by getting the object from etcd.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | Verify that previously created cr1 is now written and stored as v2 by getting the object from etcd.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3883"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/testcrds/default/cr1 [...] | hexdump -C", "completion": "This text code Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Migrate Kubernetes Objects Using Storage Version Migration | where [...] contains the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/storage-version-migration/", "id": "k8s_00128_code_explain_3884"} {"prompt": "Explain this text code:\n\n# make a Service for the StatefulSet to use\nkubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-service.yaml", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. 
We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3885"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-statefulset.yaml", "completion": "This bash code Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "language": "bash", "type": "kubectl-command_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3886"} {"prompt": "Explain this text code:\n\n# Create a temporary interactive container\nkubectl run -i --tty temp --image ubuntu:22.04", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3887"} {"prompt": "Explain this text code:\n\nWaiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false\n... [ previous line repeats several times .. 
hit return when it stops ] ...", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3888"} {"prompt": "Explain this text code:\n\napt-get update && apt-get install -y curl ca-certificates amqp-tools python3 dnsutils", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Later, you will make a container image that includes these packages.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Later, you will make a container image that includes these packages.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3889"} {"prompt": "Explain this text code:\n\n# Run these commands inside the Pod\n# Note the rabbitmq-service has a DNS name, provided by Kubernetes:\nnslookup rabbitmq-service", "completion": "This text code Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3890"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53", "completion": "\nName: rabbitmq-service.default.svc.cluster.local\nAddress: 10.0.147.152", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "source": 
"https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3891"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53\n\nName: rabbitmq-service.default.svc.cluster.local\nAddress: 10.0.147.152", "completion": "This text code Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3892"} {"prompt": "Explain this text code:\n\n# run this check inside the Pod\nenv | grep RABBITMQ_SERVICE | grep HOST", "completion": "This text code Coarse Parallel Processing Using a Work Queue | (the IP address will vary)", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | (the IP address will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3893"} {"prompt": "Complete this text code:\n\n# Run these commands inside the Pod\n# In the next line, rabbitmq-service is the hostname where the rabbitmq-service\n# can be reached. 
5672 is the standard port for rabbitmq.\nexport BROKER_URL=amqp://guest:guest@rabbitmq-service:5672\n# If you could not resolve \"rabbitmq-service\" in the previous step,", "completion": "# then use this command instead:\nBROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672\n\n# Now create a queue:\n\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3894"} {"prompt": "Explain this text code:\n\n# Run these commands inside the Pod\n# In the next line, rabbitmq-service is the hostname where the rabbitmq-service\n# can be reached. 5672 is the standard port for rabbitmq.\nexport BROKER_URL=amqp://guest:guest@rabbitmq-service:5672\n# If you could not resolve \"rabbitmq-service\" in the previous step,\n# then use this command instead:\nBROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672\n\n# Now create a queue:\n\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3895"} {"prompt": "Complete this text code:\n\n/usr/bin/amqp-publish --url=$BROKER_URL -r foo -p -b Hello\n", "completion": "# And get it back.\n\n/usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo 1>&2", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the 
standard input of an arbitrary command. In this case, the program cat prints out the c", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3896"} {"prompt": "Explain this text code:\n\n/usr/bin/amqp-publish --url=$BROKER_URL -r foo -p -b Hello\n\n# And get it back.\n\n/usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo 1>&2", "completion": "This text code Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program cat prints out the c", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program cat prints out the c", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3897"} {"prompt": "Explain this text code:\n\n# Run this on your computer, not in the Pod\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Add items to the queue:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Add items to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3898"} {"prompt": "Complete this text code:\n\nfor f in apple banana cherry date fig grape lemon melon\ndo", "completion": " /usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f\ndone", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "source": 
"https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3899"} {"prompt": "Explain this text code:\n\nfor f in apple banana cherry date fig grape lemon melon\ndo\n /usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f\ndone", "completion": "This text code Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3900"} {"prompt": "Complete this python code:\n\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.", "completion": "import sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "language": "python", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3901"} {"prompt": "Explain this python code:\n\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.\nimport sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "completion": "This python code Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "language": "python", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3902"} {"prompt": "Explain this bash code:\n\ndocker tag job-wq-1 /job-wq-1\ndocker push /job-wq-1", "completion": "This bash code Coarse Parallel Processing Using a Work Queue | If you are using an 
alternative container image registry, tag the image and push it there instead.", "language": "bash", "type": "shell-command_explanation", "context": "Coarse Parallel Processing Using a Work Queue | If you are using an alternative container image registry, tag the image and push it there instead.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3903"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:\n name: job-wq-1", "completion": " spec:\n containers:\n - name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3904"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:\n name: job-wq-1\n spec:\n containers:\n - name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "completion": "This yaml code Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. 
That is why the example manifest has .spec.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3905"} {"prompt": "Explain this text code:\n\n# this assumes you downloaded and then edited the manifest already\nkubectl apply -f ./job.yaml", "completion": "This text code Coarse Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3906"} {"prompt": "Explain this text code:\n\n# The check for condition name is case insensitive\nkubectl wait --for=condition=complete --timeout=300s job/job-wq-1", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Next, check on the Job:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Next, check on the Job:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3907"} {"prompt": "Complete this text code:\n\nName: job-wq-1\nNamespace: default\nSelector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\nLabels: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\nAnnotations: \nParallelism: 2\nCompletions: 8\nStart Time: Wed, 06 Sep 2022 16:42:02 +0000\nPods Statuses: 0 Running / 8 Succeeded / 0 Failed\nPod Template:\n Labels: 
controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\n Containers:\n c:\n Image: container-registry.example/causal-jigsaw-637/job-wq-1", "completion": " Port:\n Environment:\n BROKER_URL: amqp://guest:guest@rabbitmq-service:5672\n QUEUE: job1\n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n ───────── ──────── ───── ──── ───────────── ────── ────── ───────\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-hcobb\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-weytj\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-qaam5\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-b67sr\n 26s 26s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-xe5hj\n 15s 15s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-w2zqe\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-d6ppa\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-p17e0", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | All the pods for that Job succeeded! 
You're done.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3908"} {"prompt": "Explain this text code:\n\nName: job-wq-1\nNamespace: default\nSelector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\nLabels: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\nAnnotations: \nParallelism: 2\nCompletions: 8\nStart Time: Wed, 06 Sep 2022 16:42:02 +0000\nPods Statuses: 0 Running / 8 Succeeded / 0 Failed\nPod Template:\n Labels: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\n Containers:\n c:\n Image: container-registry.example/causal-jigsaw-637/job-wq-1\n Port:\n Environment:\n BROKER_URL: amqp://guest:guest@rabbitmq-service:5672\n QUEUE: job1\n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n ───────── ──────── ───── ──── ───────────── ────── ────── ───────\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-hcobb\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-weytj\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-qaam5\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-b67sr\n 26s 26s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-xe5hj\n 15s 15s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-w2zqe\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-d6ppa\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-p17e0", "completion": "This text code Coarse Parallel Processing Using a Work Queue | All the pods for that Job succeeded! You're done.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | All the pods for that Job succeeded! 
You're done.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3909"} {"prompt": "Explain this shell code:\n\n# make a Service for the StatefulSet to use\nkubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-service.yaml", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3910"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-statefulset.yaml", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. 
We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3911"} {"prompt": "Explain this shell code:\n\n# Create a temporary interactive container\nkubectl run -i --tty temp --image ubuntu:22.04", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3912"} {"prompt": "Explain this text code:\n\nWaiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false\n... [ previous line repeats several times .. hit return when it stops ] ...", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3913"} {"prompt": "Explain this shell code:\n\napt-get update && apt-get install -y curl ca-certificates amqp-tools python3 dnsutils", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | Later, you will make a container image that includes these packages.", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Later, you will make a container image that includes these packages.", "source": 
"https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3914"} {"prompt": "Explain this text code:\n\n# Run these commands inside the Pod\n# Note the rabbitmq-service has a DNS name, provided by Kubernetes:\nnslookup rabbitmq-service", "completion": "This text code Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3915"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53", "completion": "\nName: rabbitmq-service.default.svc.cluster.local\nAddress: 10.0.147.152", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3916"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53\n\nName: rabbitmq-service.default.svc.cluster.local\nAddress: 10.0.147.152", "completion": "This text code Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | (the IP addresses will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3917"} {"prompt": "Explain this shell code:\n\n# run this check inside the Pod\nenv | grep RABBITMQ_SERVICE | grep HOST", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | (the IP address will vary)", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | (the IP 
address will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3918"} {"prompt": "Complete this shell code:\n\n# Run these commands inside the Pod\n# In the next line, rabbitmq-service is the hostname where the rabbitmq-service\n# can be reached. 5672 is the standard port for rabbitmq.\nexport BROKER_URL=amqp://guest:guest@rabbitmq-service:5672\n# If you could not resolve \"rabbitmq-service\" in the previous step,", "completion": "# then use this command instead:\nBROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672\n\n# Now create a queue:\n\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d", "language": "shell", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3919"} {"prompt": "Explain this shell code:\n\n# Run these commands inside the Pod\n# In the next line, rabbitmq-service is the hostname where the rabbitmq-service\n# can be reached. 
5672 is the standard port for rabbitmq.\nexport BROKER_URL=amqp://guest:guest@rabbitmq-service:5672\n# If you could not resolve \"rabbitmq-service\" in the previous step,\n# then use this command instead:\nBROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672\n\n# Now create a queue:\n\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3920"} {"prompt": "Complete this shell code:\n\n/usr/bin/amqp-publish --url=$BROKER_URL -r foo -p -b Hello\n", "completion": "# And get it back.\n\n/usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo 1>&2", "language": "shell", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program cat prints out the c", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3921"} {"prompt": "Explain this shell code:\n\n/usr/bin/amqp-publish --url=$BROKER_URL -r foo -p -b Hello\n\n# And get it back.\n\n/usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo 1>&2", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. 
In this case, the program cat prints out the c", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program cat prints out the c", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3922"} {"prompt": "Explain this shell code:\n\n# Run this on your computer, not in the Pod\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | Add items to the queue:", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Add items to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3923"} {"prompt": "Complete this shell code:\n\nfor f in apple banana cherry date fig grape lemon melon\ndo", "completion": " /usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f\ndone", "language": "shell", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3924"} {"prompt": "Explain this shell code:\n\nfor f in apple banana cherry date fig grape lemon melon\ndo\n /usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f\ndone", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": 
"k8s_00129_code_explain_3925"} {"prompt": "Complete this python code:\n\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.", "completion": "import sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "language": "python", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3926"} {"prompt": "Explain this python code:\n\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.\nimport sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "completion": "This python code Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "language": "python", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3927"} {"prompt": "Explain this shell code:\n\ndocker tag job-wq-1 /job-wq-1\ndocker push /job-wq-1", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | If you are using an alternative container image registry, tag the image and push it there instead.", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | If you are using an alternative container image registry, tag the image and push it there instead.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3928"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:\n name: job-wq-1", "completion": " spec:\n containers:\n - 
name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3929"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:\n name: job-wq-1\n spec:\n containers:\n - name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "completion": "This yaml code Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. That is why the example manifest has .spec.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. 
That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3930"} {"prompt": "Explain this shell code:\n\n# this assumes you downloaded and then edited the manifest already\nkubectl apply -f ./job.yaml", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3931"} {"prompt": "Explain this shell code:\n\n# The check for condition name is case insensitive\nkubectl wait --for=condition=complete --timeout=300s job/job-wq-1", "completion": "This shell code Coarse Parallel Processing Using a Work Queue | Next, check on the Job:", "language": "shell", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Next, check on the Job:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3932"} {"prompt": "Complete this text code:\n\nName: job-wq-1\nNamespace: default\nSelector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\nLabels: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\nAnnotations: \nParallelism: 2\nCompletions: 8\nStart Time: Wed, 06 Sep 2022 16:42:02 +0000\nPods Statuses: 0 Running / 8 Succeeded / 0 Failed\nPod Template:\n Labels: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\n Containers:\n c:\n Image: container-registry.example/causal-jigsaw-637/job-wq-1", "completion": " Port:\n Environment:\n BROKER_URL: amqp://guest:guest@rabbitmq-service:5672\n QUEUE: job1\n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From 
SubobjectPath Type Reason Message\n ───────── ──────── ───── ──── ───────────── ────── ────── ───────\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-hcobb\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-weytj\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-qaam5\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-b67sr\n 26s 26s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-xe5hj\n 15s 15s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-w2zqe\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-d6ppa\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-p17e0", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | All the pods for that Job succeeded! You're done.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3933"} {"prompt": "Explain this text code:\n\nName: job-wq-1\nNamespace: default\nSelector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\nLabels: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\nAnnotations: \nParallelism: 2\nCompletions: 8\nStart Time: Wed, 06 Sep 2022 16:42:02 +0000\nPods Statuses: 0 Running / 8 Succeeded / 0 Failed\nPod Template:\n Labels: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f\n job-name=job-wq-1\n Containers:\n c:\n Image: container-registry.example/causal-jigsaw-637/job-wq-1\n Port:\n Environment:\n BROKER_URL: amqp://guest:guest@rabbitmq-service:5672\n QUEUE: job1\n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n ───────── ──────── ───── ──── ───────────── ────── ────── ───────\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-hcobb\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-weytj\n 27s 27s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-qaam5\n 27s 27s 1 {job } Normal 
SuccessfulCreate Created pod: job-wq-1-b67sr\n 26s 26s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-xe5hj\n 15s 15s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-w2zqe\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-d6ppa\n 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-p17e0", "completion": "This text code Coarse Parallel Processing Using a Work Queue | All the pods for that Job succeeded! You're done.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | All the pods for that Job succeeded! You're done.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3934"} {"prompt": "Explain this text code:\n\n# make a Service for the StatefulSet to use\nkubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-service.yaml", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3935"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-statefulset.yaml", "completion": "This bash code Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. 
We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "language": "bash", "type": "kubectl-command_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Now, we can experiment with accessing the message queue. We will create a temporary interactive pod, install some tools on it, and experiment with queues.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3936"} {"prompt": "Explain this text code:\n\n# Create a temporary interactive container\nkubectl run -i --tty temp --image ubuntu:22.04", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Note that your pod name and command prompt will be different.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3937"} {"prompt": "Explain this text code:\n\napt-get update && apt-get install -y curl ca-certificates amqp-tools python3 dnsutils", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Later, you will make a container image that includes these packages.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Later, you will make a container image that includes these packages.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3938"} {"prompt": "Explain this text code:\n\n# run this check inside the Pod\nenv | grep RABBITMQ_SERVICE | grep HOST", "completion": "This text code Coarse Parallel Processing Using a Work Queue | (the IP address will vary)", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work 
Queue | (the IP address will vary)", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3939"} {"prompt": "Complete this text code:\n\n# Run these commands inside the Pod\n# In the next line, rabbitmq-service is the hostname where the rabbitmq-service\n# can be reached. 5672 is the standard port for rabbitmq.\nexport BROKER_URL=amqp://guest:guest@rabbitmq-service:5672\n# If you could not resolve \"rabbitmq-service\" in the previous step,", "completion": "# then use this command instead:\nBROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672\n\n# Now create a queue:\n\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3940"} {"prompt": "Explain this text code:\n\n# Run these commands inside the Pod\n# In the next line, rabbitmq-service is the hostname where the rabbitmq-service\n# can be reached. 
5672 is the standard port for rabbitmq.\nexport BROKER_URL=amqp://guest:guest@rabbitmq-service:5672\n# If you could not resolve \"rabbitmq-service\" in the previous step,\n# then use this command instead:\nBROKER_URL=amqp://guest:guest@$RABBITMQ_SERVICE_SERVICE_HOST:5672\n\n# Now create a queue:\n\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Publish one message to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3941"} {"prompt": "Complete this text code:\n\n/usr/bin/amqp-publish --url=$BROKER_URL -r foo -p -b Hello\n", "completion": "# And get it back.\n\n/usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo 1>&2", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program cat prints out the c", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3942"} {"prompt": "Explain this text code:\n\n/usr/bin/amqp-publish --url=$BROKER_URL -r foo -p -b Hello\n\n# And get it back.\n\n/usr/bin/amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo 1>&2", "completion": "This text code Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. 
In this case, the program cat prints out the c", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | In the last command, the amqp-consume tool took one message (-c 1) from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program cat prints out the c", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3943"} {"prompt": "Explain this text code:\n\n# Run this on your computer, not in the Pod\n/usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Add items to the queue:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Add items to the queue:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3944"} {"prompt": "Complete this text code:\n\nfor f in apple banana cherry date fig grape lemon melon\ndo", "completion": " /usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f\ndone", "language": "text", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3945"} {"prompt": "Explain this text code:\n\nfor f in apple banana cherry date fig grape lemon melon\ndo\n /usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f\ndone", "completion": "This text code Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | You added 8 messages to the queue.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": 
"k8s_00129_code_explain_3946"} {"prompt": "Complete this python code:\n\napplication/job/rabbitmq/worker.py\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.", "completion": "import sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "language": "python", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3947"} {"prompt": "Explain this python code:\n\napplication/job/rabbitmq/worker.py\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.\nimport sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "completion": "This python code Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "language": "python", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3948"} {"prompt": "Complete this python code:\n\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.", "completion": "import sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "language": "python", "type": "code-example", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3949"} {"prompt": "Explain this python code:\n\n#!/usr/bin/env python\n\n# Just prints standard out and sleeps for 10 seconds.\nimport sys\nimport time\nprint(\"Processing \" + sys.stdin.readlines()[0])\ntime.sleep(10)", "completion": "This python code Coarse Parallel 
Processing Using a Work Queue | Give the script execution permission:", "language": "python", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Give the script execution permission:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3950"} {"prompt": "Explain this bash code:\n\ndocker tag job-wq-1 /job-wq-1\ndocker push /job-wq-1", "completion": "This bash code Coarse Parallel Processing Using a Work Queue | If you are using an alternative container image registry, tag the image and push it there instead.", "language": "bash", "type": "shell-command_explanation", "context": "Coarse Parallel Processing Using a Work Queue | If you are using an alternative container image registry, tag the image and push it there instead.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3951"} {"prompt": "Complete this yaml code:\n\napplication/job/rabbitmq/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:", "completion": " name: job-wq-1\n spec:\n containers:\n - name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. 
That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3952"} {"prompt": "Explain this yaml code:\n\napplication/job/rabbitmq/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:\n name: job-wq-1\n spec:\n containers:\n - name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "completion": "This yaml code Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. That is why the example manifest has .spec.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3953"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:\n name: job-wq-1", "completion": " spec:\n containers:\n - name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. 
That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_completion_3954"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-wq-1\nspec:\n completions: 8\n parallelism: 2\n template:\n metadata:\n name: job-wq-1\n spec:\n containers:\n - name: c\n image: gcr.io//job-wq-1\n env:\n - name: BROKER_URL\n value: amqp://guest:guest@rabbitmq-service:5672\n - name: QUEUE\n value: job1\n restartPolicy: OnFailure", "completion": "This yaml code Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. That is why the example manifest has .spec.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Coarse Parallel Processing Using a Work Queue | In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items done. 
That is why the example manifest has .spec.", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3955"} {"prompt": "Explain this text code:\n\n# this assumes you downloaded and then edited the manifest already\nkubectl apply -f ./job.yaml", "completion": "This text code Coarse Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | You can wait for the Job to succeed, with a timeout:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3956"} {"prompt": "Explain this text code:\n\n# The check for condition name is case insensitive\nkubectl wait --for=condition=complete --timeout=300s job/job-wq-1", "completion": "This text code Coarse Parallel Processing Using a Work Queue | Next, check on the Job:", "language": "text", "type": "code-example_explanation", "context": "Coarse Parallel Processing Using a Work Queue | Next, check on the Job:", "source": "https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/", "id": "k8s_00129_code_explain_3957"} {"prompt": "Explain this text code:\n\n# Run pre-flight checks for kubeadm init using a config file.\n kubeadm init phase preflight --config kubeadm-config.yaml", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3958"} {"prompt": "Explain this text code:\n\n# Writes a dynamic environment file with kubelet flags from a InitConfiguration file.\n kubeadm init phase kubelet-start --config config.yaml", "completion": "This text code 
kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3959"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs apiserver-kubelet-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3960"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs front-proxy-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3961"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs etcd-healthcheck-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3962"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs apiserver-etcd-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": 
"kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3963"} {"prompt": "Explain this text code:\n\nkubeadm init phase kubeconfig controller-manager [flags]", "completion": "This text code kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3964"} {"prompt": "Complete this text code:\n\n# Generates all static Pod manifest files for control plane components,\n # functionally equivalent to what is generated by kubeadm init.\n kubeadm init phase control-plane all", "completion": " \n # Generates all static Pod manifest files using options read from a configuration file.\n kubeadm init phase control-plane all --config config.yaml", "language": "text", "type": "code-example", "context": "kubeadm init phase | The IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_completion_3965"} {"prompt": "Explain this text code:\n\n# Generates all static Pod manifest files for control plane components,\n # functionally equivalent to what is generated by kubeadm init.\n kubeadm init phase control-plane all\n \n # Generates all static Pod manifest files using options read from a configuration file.\n kubeadm init phase control-plane all --config config.yaml", "completion": "This text code kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3966"} {"prompt": "Explain this text code:\n\nkubeadm init phase control-plane controller-manager [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3967"} {"prompt": "Complete this text code:\n\n# Generates the static Pod manifest file for etcd, functionally\n # equivalent to what is generated by kubeadm init.\n kubeadm init phase etcd local", "completion": " \n # Generates the static Pod manifest file for etcd using options\n # read from a configuration file.\n kubeadm init phase etcd local --config config.yaml", "language": "text", "type": "code-example", "context": "kubeadm init phase | The path where 
to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_completion_3968"} {"prompt": "Explain this text code:\n\n# Generates the static Pod manifest file for etcd, functionally\n # equivalent to what is generated by kubeadm init.\n kubeadm init phase etcd local\n \n # Generates the static Pod manifest file for etcd using options\n # read from a configuration file.\n kubeadm init phase etcd local --config config.yaml", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3969"} {"prompt": "Explain this text code:\n\n# upload the configuration of your cluster\n kubeadm init phase upload-config kubeadm --config=myConfig.yaml", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3970"} {"prompt": "Explain this text code:\n\n# Upload the kubelet configuration from the kubeadm Config file to a ConfigMap in the cluster.\n kubeadm init phase upload-config kubelet --config kubeadm.yaml", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3971"} {"prompt": "Complete this text code:\n\n# Applies control-plane 
label and taint to the current node, functionally equivalent to what executed by kubeadm init.\n kubeadm init phase mark-control-plane --config config.yaml", "completion": " \n # Applies control-plane label and taint to a specific node\n kubeadm init phase mark-control-plane --node-name myNode", "language": "text", "type": "code-example", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_completion_3972"} {"prompt": "Explain this text code:\n\n# Applies control-plane label and taint to the current node, functionally equivalent to what executed by kubeadm init.\n kubeadm init phase mark-control-plane --config config.yaml\n \n # Applies control-plane label and taint to a specific node\n kubeadm init phase mark-control-plane --node-name myNode", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3973"} {"prompt": "Explain this text code:\n\n# Make all the bootstrap token configurations and create an initial token, functionally\n # equivalent to what generated by kubeadm init.\n kubeadm init phase bootstrap-token", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3974"} {"prompt": "Explain this text code:\n\n# Updates settings relevant to the kubelet after TLS bootstrap\"\n kubeadm init phase kubelet-finalize all --config", "completion": "This text code kubeadm init 
phase | help for kubelet-finalize", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | help for kubelet-finalize", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3975"} {"prompt": "Explain this text code:\n\n# Updates settings relevant to the kubelet after TLS bootstrap\"\n kubeadm init phase kubelet-finalize all --config", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3976"} {"prompt": "Explain this text code:\n\nkubeadm init phase kubelet-finalize enable-client-cert-rotation [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3977"} {"prompt": "Explain this text code:\n\n# Run pre-flight checks for kubeadm init using a config file.\n kubeadm init phase preflight --config kubeadm-config.yaml", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3978"} {"prompt": "Explain this text code:\n\n# Writes a dynamic environment file with kubelet flags from a InitConfiguration file.\n kubeadm init phase kubelet-start --config config.yaml", 
"completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3979"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs apiserver-kubelet-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3980"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs front-proxy-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3981"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs etcd-healthcheck-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3982"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs apiserver-etcd-client [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": 
"code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3983"} {"prompt": "Explain this text code:\n\nkubeadm init phase kubeconfig controller-manager [flags]", "completion": "This text code kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3984"} {"prompt": "Complete this text code:\n\n# Generates all static Pod manifest files for control plane components,\n # functionally equivalent to what is generated by kubeadm init.\n kubeadm init phase control-plane all", "completion": " \n # Generates all static Pod manifest files using options read from a configuration file.\n kubeadm init phase control-plane all --config config.yaml", "language": "text", "type": "code-example", "context": "kubeadm init phase | The IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_completion_3985"} {"prompt": "Explain this text code:\n\n# Generates all static Pod manifest files for control plane components,\n # functionally equivalent to what is generated by kubeadm init.\n kubeadm init phase control-plane all\n \n # Generates all static Pod manifest files using options read from a configuration file.\n kubeadm init phase control-plane all --config config.yaml", "completion": "This text code kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3986"} {"prompt": "Explain this text code:\n\nkubeadm init phase control-plane controller-manager [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3987"} {"prompt": "Complete this text code:\n\n# Generates the static Pod manifest file for etcd, functionally\n # equivalent to what is generated by kubeadm init.\n kubeadm init phase etcd local", "completion": " \n # Generates the static Pod manifest file for etcd using options\n # read from a configuration file.\n kubeadm init phase etcd local --config config.yaml", "language": "text", "type": "code-example", "context": "kubeadm init phase | The path where 
to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_completion_3988"} {"prompt": "Explain this text code:\n\n# Generates the static Pod manifest file for etcd, functionally\n # equivalent to what is generated by kubeadm init.\n kubeadm init phase etcd local\n \n # Generates the static Pod manifest file for etcd using options\n # read from a configuration file.\n kubeadm init phase etcd local --config config.yaml", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3989"} {"prompt": "Explain this text code:\n\n# upload the configuration of your cluster\n kubeadm init phase upload-config kubeadm --config=myConfig.yaml", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3990"} {"prompt": "Explain this text code:\n\n# Upload the kubelet configuration from the kubeadm Config file to a ConfigMap in the cluster.\n kubeadm init phase upload-config kubelet --config kubeadm.yaml", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3991"} {"prompt": "Complete this text code:\n\n# Applies control-plane 
label and taint to the current node, functionally equivalent to what is executed by kubeadm init.\n kubeadm init phase mark-control-plane --config config.yaml", "completion": " \n # Applies control-plane label and taint to a specific node\n kubeadm init phase mark-control-plane --node-name myNode", "language": "text", "type": "code-example", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_completion_3992"} {"prompt": "Explain this text code:\n\n# Applies control-plane label and taint to the current node, functionally equivalent to what is executed by kubeadm init.\n kubeadm init phase mark-control-plane --config config.yaml\n \n # Applies control-plane label and taint to a specific node\n kubeadm init phase mark-control-plane --node-name myNode", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3993"} {"prompt": "Explain this text code:\n\n# Make all the bootstrap token configurations and create an initial token, functionally\n # equivalent to what is generated by kubeadm init.\n kubeadm init phase bootstrap-token", "completion": "This text code kubeadm init phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3994"} {"prompt": "Explain this text code:\n\n# Updates settings relevant to the kubelet after TLS bootstrap\n kubeadm init phase kubelet-finalize all --config", "completion": "This text code kubeadm init 
phase | help for kubelet-finalize", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | help for kubelet-finalize", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3995"} {"prompt": "Explain this text code:\n\n# Updates settings relevant to the kubelet after TLS bootstrap\n kubeadm init phase kubelet-finalize all --config", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3996"} {"prompt": "Explain this text code:\n\nkubeadm init phase kubelet-finalize enable-client-cert-rotation [flags]", "completion": "This text code kubeadm init phase | The path where to save and store the certificates.", "language": "text", "type": "code-example_explanation", "context": "kubeadm init phase | The path where to save and store the certificates.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/", "id": "k8s_00132_code_explain_3997"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1\nkind: CredentialProviderConfig\n# providers is a list of credential provider helper plugins that will be enabled by the kubelet.\n# Multiple providers may match against a single image, in which case credentials\n# from all providers will be returned to the kubelet. If multiple providers are called\n# for a single image, the results are combined. If providers return overlapping\n# auth keys, the value from the provider earlier in this list is used.\nproviders:\n # name is the required name of the credential provider. It must match the name of the\n # provider executable as seen by the kubelet. 
The executable must be in the kubelet's\n # bin directory (set by the --image-credential-provider-bin-dir flag).\n - name: ecr-credential-provider\n # matchImages is a required list of strings used to match against images in order to\n # determine if this provider should be invoked. If one of the strings matches the\n # requested image from the kubelet, the plugin will be invoked and given a chance\n # to provide credentials. Images are expected to contain the registry domain\n # and URL path.\n #\n # Each entry in matchImages is a pattern which can optionally contain a port and a path.\n # Globs can be used in the domain, but not in the port or the path. Globs are supported\n # as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\n # Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match\n # a single subdomain segment, so `*.io` does **not** match `*.k8s.io`.\n #\n # A match exists between an image and a matchImage when all of the below are true:\n # - Both contain the same number of domain parts and each part matches.\n # - The URL path of a matchImages entry must be a prefix of the target image URL path.\n # - If the matchImages contains a port, then the port must match in the image as well.\n #\n # Example values of matchImages:\n # - 123456789.dkr.ecr.us-east-1.amazonaws.com\n # - *.azurecr.io\n # - gcr.io\n # - *.*.registry.io\n # - registry.io:8080/path\n matchImages:\n - \"*.dkr.ecr.*.amazonaws.com\"\n - \"*.dkr.ecr.*.amazonaws.com.cn\"\n - \"*.dkr.ecr-fips.*.amazonaws.com\"\n - \"*.dkr.ecr.us-iso-east-1.c2s.ic.gov\"\n - \"*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov\"\n # defaultCacheDuration is the default duration the plugin will cache credentials in-memory\n # if a cache duration is not provided in the plugin response. This field is required.\n defaultCacheDuration: \"12h\"\n # Required input version of the exec CredentialProviderRequest. 
The returned CredentialProviderResponse\n # MUST use the same encoding version as the input. Current supported values are:\n # - credentialprovider.kubelet.k8s.io/v1\n apiVersion: credentialprovider.kubelet.k8s.io/v1\n # Arguments to pass to the command when executing it.\n # +optional\n # args:\n # - --example-argument\n # Env defines additional environment variables to expose to the process. These\n # are unioned with the host's environment, as well as variables client-go uses\n # to pass arguments to the plugin.\n # +optional", "completion": " env:\n - name: AWS_PROFILE\n value: example_profile\n\n # tokenAttributes is the configuration for the service account token that will be passed to the plugin.\n # The credential provider opts in to using service account tokens for image pull by setting this field.\n # If this field is set without the `KubeletServiceAccountTokenForCredentialProviders` feature gate enabled, \n # kubelet will fail to start with an invalid configuration error.\n # +optional\n tokenAttributes:\n # serviceAccountTokenAudience is the intended audience for the projected service account token.\n # +required\n serviceAccountTokenAudience: \"\"\n # cacheType indicates the type of cache key used for caching the credentials returned by the plugin\n # when the service account token is used.\n # The most conservative option is to set this to \"Token\", which means the kubelet will cache\n # returned credentials on a per-token basis. This should be set if the returned credential's\n # lifetime is limited to the service account token's lifetime.\n # If the plugin's credential retrieval logic depends only on the service account and not on\n # pod-specific claims, then the plugin can set this to \"ServiceAccount\". In this case, the\n # kubelet will cache returned credentials on a per-serviceaccount basis. 
Use this when the\n # returned credential is valid for all pods using the same service account.\n # +required\n cacheType: \"\"\n # requireServiceAccount indicates whether the plugin requires the pod to have a service account.\n # If set to true, kubelet will only invoke the plugin if the pod has a service account.\n # If set to false, kubelet will invoke the plugin even if the pod does not have a service account\n # and will not include a token in the CredentialProviderRequest. This is useful for plugins\n # that are used to pull images for pods without service accounts (e.g., static pods).\n # +required\n requireServiceAccount: true\n # requiredServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are required to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. If any of the keys defined in this list\n # are not present in the service account, kubelet will not invoke the plugin and will return an error.\n # This field is optional and may be empty. 
Plugins may use this field to extract additional information\n # required to fetch credentials or allow workloads to opt in to using service account tokens for image pull.\n # If non-empty, requireServiceAccount must be set to true.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # optionalServiceAccountAnnotationKeys list.\n # +optional\n requiredServiceAccountAnnotationKeys:\n - \"example.com/required-annotation-key-1\"\n - \"example.com/required-annotation-key-2\"\n # optionalServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are optional to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. The plugin is responsible for validating the\n # existence of annotations and their values. This field is optional and may be empty.\n # Plugins may use this field to extract additional information required to fetch credentials.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # requiredServiceAccountAnnotationKeys list.\n # +optional\n optionalServiceAccountAnnotationKeys:\n - \"example.com/optional-annotation-key-1\"\n - \"example.com/optional-annotation-key-2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. 
Each entry has a few required fields:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider/", "id": "k8s_00133_code_completion_3998"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1\nkind: CredentialProviderConfig\n# providers is a list of credential provider helper plugins that will be enabled by the kubelet.\n# Multiple providers may match against a single image, in which case credentials\n# from all providers will be returned to the kubelet. If multiple providers are called\n# for a single image, the results are combined. If providers return overlapping\n# auth keys, the value from the provider earlier in this list is used.\nproviders:\n # name is the required name of the credential provider. It must match the name of the\n # provider executable as seen by the kubelet. The executable must be in the kubelet's\n # bin directory (set by the --image-credential-provider-bin-dir flag).\n - name: ecr-credential-provider\n # matchImages is a required list of strings used to match against images in order to\n # determine if this provider should be invoked. If one of the strings matches the\n # requested image from the kubelet, the plugin will be invoked and given a chance\n # to provide credentials. Images are expected to contain the registry domain\n # and URL path.\n #\n # Each entry in matchImages is a pattern which can optionally contain a port and a path.\n # Globs can be used in the domain, but not in the port or the path. Globs are supported\n # as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\n # Matching partial subdomains like 'app*.k8s.io' is also supported. 
Each glob can only match\n # a single subdomain segment, so `*.io` does **not** match `*.k8s.io`.\n #\n # A match exists between an image and a matchImage when all of the below are true:\n # - Both contain the same number of domain parts and each part matches.\n # - The URL path of a matchImages entry must be a prefix of the target image URL path.\n # - If the matchImages contains a port, then the port must match in the image as well.\n #\n # Example values of matchImages:\n # - 123456789.dkr.ecr.us-east-1.amazonaws.com\n # - *.azurecr.io\n # - gcr.io\n # - *.*.registry.io\n # - registry.io:8080/path\n matchImages:\n - \"*.dkr.ecr.*.amazonaws.com\"\n - \"*.dkr.ecr.*.amazonaws.com.cn\"\n - \"*.dkr.ecr-fips.*.amazonaws.com\"\n - \"*.dkr.ecr.us-iso-east-1.c2s.ic.gov\"\n - \"*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov\"\n # defaultCacheDuration is the default duration the plugin will cache credentials in-memory\n # if a cache duration is not provided in the plugin response. This field is required.\n defaultCacheDuration: \"12h\"\n # Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\n # MUST use the same encoding version as the input. Current supported values are:\n # - credentialprovider.kubelet.k8s.io/v1\n apiVersion: credentialprovider.kubelet.k8s.io/v1\n # Arguments to pass to the command when executing it.\n # +optional\n # args:\n # - --example-argument\n # Env defines additional environment variables to expose to the process. 
These\n # are unioned with the host's environment, as well as variables client-go uses\n # to pass arguments to the plugin.\n # +optional\n env:\n - name: AWS_PROFILE\n value: example_profile\n\n # tokenAttributes is the configuration for the service account token that will be passed to the plugin.\n # The credential provider opts in to using service account tokens for image pull by setting this field.\n # If this field is set without the `KubeletServiceAccountTokenForCredentialProviders` feature gate enabled, \n # kubelet will fail to start with an invalid configuration error.\n # +optional\n tokenAttributes:\n # serviceAccountTokenAudience is the intended audience for the projected service account token.\n # +required\n serviceAccountTokenAudience: \"\"\n # cacheType indicates the type of cache key used for caching the credentials returned by the plugin\n # when the service account token is used.\n # The most conservative option is to set this to \"Token\", which means the kubelet will cache\n # returned credentials on a per-token basis. This should be set if the returned credential's\n # lifetime is limited to the service account token's lifetime.\n # If the plugin's credential retrieval logic depends only on the service account and not on\n # pod-specific claims, then the plugin can set this to \"ServiceAccount\". In this case, the\n # kubelet will cache returned credentials on a per-serviceaccount basis. Use this when the\n # returned credential is valid for all pods using the same service account.\n # +required\n cacheType: \"\"\n # requireServiceAccount indicates whether the plugin requires the pod to have a service account.\n # If set to true, kubelet will only invoke the plugin if the pod has a service account.\n # If set to false, kubelet will invoke the plugin even if the pod does not have a service account\n # and will not include a token in the CredentialProviderRequest. 
This is useful for plugins\n # that are used to pull images for pods without service accounts (e.g., static pods).\n # +required\n requireServiceAccount: true\n # requiredServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are required to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. If any of the keys defined in this list\n # are not present in the service account, kubelet will not invoke the plugin and will return an error.\n # This field is optional and may be empty. Plugins may use this field to extract additional information\n # required to fetch credentials or allow workloads to opt in to using service account tokens for image pull.\n # If non-empty, requireServiceAccount must be set to true.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # optionalServiceAccountAnnotationKeys list.\n # +optional\n requiredServiceAccountAnnotationKeys:\n - \"example.com/required-annotation-key-1\"\n - \"example.com/required-annotation-key-2\"\n # optionalServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are optional to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. The plugin is responsible for validating the\n # existence of annotations and their values. 
This field is optional and may be empty.\n # Plugins may use this field to extract additional information required to fetch credentials.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # requiredServiceAccountAnnotationKeys list.\n # +optional\n optionalServiceAccountAnnotationKeys:\n - \"example.com/optional-annotation-key-1\"\n - \"example.com/optional-annotation-key-2\"", "completion": "This yaml code Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider/", "id": "k8s_00133_code_explain_3999"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1\nkind: CredentialProviderConfig\n# providers is a list of credential provider helper plugins that will be enabled by the kubelet.\n# Multiple providers may match against a single image, in which case credentials\n# from all providers will be returned to the kubelet. If multiple providers are called\n# for a single image, the results are combined. If providers return overlapping\n# auth keys, the value from the provider earlier in this list is used.\nproviders:\n # name is the required name of the credential provider. It must match the name of the\n # provider executable as seen by the kubelet. The executable must be in the kubelet's\n # bin directory (set by the --image-credential-provider-bin-dir flag).\n - name: ecr-credential-provider\n # matchImages is a required list of strings used to match against images in order to\n # determine if this provider should be invoked. 
If one of the strings matches the\n # requested image from the kubelet, the plugin will be invoked and given a chance\n # to provide credentials. Images are expected to contain the registry domain\n # and URL path.\n #\n # Each entry in matchImages is a pattern which can optionally contain a port and a path.\n # Globs can be used in the domain, but not in the port or the path. Globs are supported\n # as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\n # Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match\n # a single subdomain segment, so `*.io` does **not** match `*.k8s.io`.\n #\n # A match exists between an image and a matchImage when all of the below are true:\n # - Both contain the same number of domain parts and each part matches.\n # - The URL path of a matchImages entry must be a prefix of the target image URL path.\n # - If the matchImages contains a port, then the port must match in the image as well.\n #\n # Example values of matchImages:\n # - 123456789.dkr.ecr.us-east-1.amazonaws.com\n # - *.azurecr.io\n # - gcr.io\n # - *.*.registry.io\n # - registry.io:8080/path\n matchImages:\n - \"*.dkr.ecr.*.amazonaws.com\"\n - \"*.dkr.ecr.*.amazonaws.com.cn\"\n - \"*.dkr.ecr-fips.*.amazonaws.com\"\n - \"*.dkr.ecr.us-iso-east-1.c2s.ic.gov\"\n - \"*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov\"\n # defaultCacheDuration is the default duration the plugin will cache credentials in-memory\n # if a cache duration is not provided in the plugin response. This field is required.\n defaultCacheDuration: \"12h\"\n # Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\n # MUST use the same encoding version as the input. 
Current supported values are:\n # - credentialprovider.kubelet.k8s.io/v1\n apiVersion: credentialprovider.kubelet.k8s.io/v1\n # Arguments to pass to the command when executing it.\n # +optional\n # args:\n # - --example-argument\n # Env defines additional environment variables to expose to the process. These\n # are unioned with the host's environment, as well as variables client-go uses\n # to pass arguments to the plugin.\n # +optional", "completion": " env:\n - name: AWS_PROFILE\n value: example_profile\n\n # tokenAttributes is the configuration for the service account token that will be passed to the plugin.\n # The credential provider opts in to using service account tokens for image pull by setting this field.\n # If this field is set without the `KubeletServiceAccountTokenForCredentialProviders` feature gate enabled, \n # kubelet will fail to start with an invalid configuration error.\n # +optional\n tokenAttributes:\n # serviceAccountTokenAudience is the intended audience for the projected service account token.\n # +required\n serviceAccountTokenAudience: \"\"\n # cacheType indicates the type of cache key used for caching the credentials returned by the plugin\n # when the service account token is used.\n # The most conservative option is to set this to \"Token\", which means the kubelet will cache\n # returned credentials on a per-token basis. This should be set if the returned credential's\n # lifetime is limited to the service account token's lifetime.\n # If the plugin's credential retrieval logic depends only on the service account and not on\n # pod-specific claims, then the plugin can set this to \"ServiceAccount\". In this case, the\n # kubelet will cache returned credentials on a per-serviceaccount basis. 
Use this when the\n # returned credential is valid for all pods using the same service account.\n # +required\n cacheType: \"\"\n # requireServiceAccount indicates whether the plugin requires the pod to have a service account.\n # If set to true, kubelet will only invoke the plugin if the pod has a service account.\n # If set to false, kubelet will invoke the plugin even if the pod does not have a service account\n # and will not include a token in the CredentialProviderRequest. This is useful for plugins\n # that are used to pull images for pods without service accounts (e.g., static pods).\n # +required\n requireServiceAccount: true\n # requiredServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are required to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. If any of the keys defined in this list\n # are not present in the service account, kubelet will not invoke the plugin and will return an error.\n # This field is optional and may be empty. 
Plugins may use this field to extract additional information\n # required to fetch credentials or allow workloads to opt in to using service account tokens for image pull.\n # If non-empty, requireServiceAccount must be set to true.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # optionalServiceAccountAnnotationKeys list.\n # +optional\n requiredServiceAccountAnnotationKeys:\n - \"example.com/required-annotation-key-1\"\n - \"example.com/required-annotation-key-2\"\n # optionalServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are optional to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. The plugin is responsible for validating the\n # existence of annotations and their values. This field is optional and may be empty.\n # Plugins may use this field to extract additional information required to fetch credentials.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # requiredServiceAccountAnnotationKeys list.\n # +optional\n optionalServiceAccountAnnotationKeys:\n - \"example.com/optional-annotation-key-1\"\n - \"example.com/optional-annotation-key-2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. 
Each entry has a few required fields:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider/", "id": "k8s_00133_code_completion_4000"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1\nkind: CredentialProviderConfig\n# providers is a list of credential provider helper plugins that will be enabled by the kubelet.\n# Multiple providers may match against a single image, in which case credentials\n# from all providers will be returned to the kubelet. If multiple providers are called\n# for a single image, the results are combined. If providers return overlapping\n# auth keys, the value from the provider earlier in this list is used.\nproviders:\n # name is the required name of the credential provider. It must match the name of the\n # provider executable as seen by the kubelet. The executable must be in the kubelet's\n # bin directory (set by the --image-credential-provider-bin-dir flag).\n - name: ecr-credential-provider\n # matchImages is a required list of strings used to match against images in order to\n # determine if this provider should be invoked. If one of the strings matches the\n # requested image from the kubelet, the plugin will be invoked and given a chance\n # to provide credentials. Images are expected to contain the registry domain\n # and URL path.\n #\n # Each entry in matchImages is a pattern which can optionally contain a port and a path.\n # Globs can be used in the domain, but not in the port or the path. Globs are supported\n # as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\n # Matching partial subdomains like 'app*.k8s.io' is also supported. 
Each glob can only match\n # a single subdomain segment, so `*.io` does **not** match `*.k8s.io`.\n #\n # A match exists between an image and a matchImage when all of the below are true:\n # - Both contain the same number of domain parts and each part matches.\n # - The URL path of a matchImages entry must be a prefix of the target image URL path.\n # - If the matchImages contains a port, then the port must match in the image as well.\n #\n # Example values of matchImages:\n # - 123456789.dkr.ecr.us-east-1.amazonaws.com\n # - *.azurecr.io\n # - gcr.io\n # - *.*.registry.io\n # - registry.io:8080/path\n matchImages:\n - \"*.dkr.ecr.*.amazonaws.com\"\n - \"*.dkr.ecr.*.amazonaws.com.cn\"\n - \"*.dkr.ecr-fips.*.amazonaws.com\"\n - \"*.dkr.ecr.us-iso-east-1.c2s.ic.gov\"\n - \"*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov\"\n # defaultCacheDuration is the default duration the plugin will cache credentials in-memory\n # if a cache duration is not provided in the plugin response. This field is required.\n defaultCacheDuration: \"12h\"\n # Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\n # MUST use the same encoding version as the input. Current supported values are:\n # - credentialprovider.kubelet.k8s.io/v1\n apiVersion: credentialprovider.kubelet.k8s.io/v1\n # Arguments to pass to the command when executing it.\n # +optional\n # args:\n # - --example-argument\n # Env defines additional environment variables to expose to the process. 
These\n # are unioned with the host's environment, as well as variables client-go uses\n # to pass arguments to the plugin.\n # +optional\n env:\n - name: AWS_PROFILE\n value: example_profile\n\n # tokenAttributes is the configuration for the service account token that will be passed to the plugin.\n # The credential provider opts in to using service account tokens for image pull by setting this field.\n # If this field is set without the `KubeletServiceAccountTokenForCredentialProviders` feature gate enabled, \n # kubelet will fail to start with an invalid configuration error.\n # +optional\n tokenAttributes:\n # serviceAccountTokenAudience is the intended audience for the projected service account token.\n # +required\n serviceAccountTokenAudience: \"\"\n # cacheType indicates the type of cache key used for caching the credentials returned by the plugin\n # when the service account token is used.\n # The most conservative option is to set this to \"Token\", which means the kubelet will cache\n # returned credentials on a per-token basis. This should be set if the returned credential's\n # lifetime is limited to the service account token's lifetime.\n # If the plugin's credential retrieval logic depends only on the service account and not on\n # pod-specific claims, then the plugin can set this to \"ServiceAccount\". In this case, the\n # kubelet will cache returned credentials on a per-serviceaccount basis. Use this when the\n # returned credential is valid for all pods using the same service account.\n # +required\n cacheType: \"\"\n # requireServiceAccount indicates whether the plugin requires the pod to have a service account.\n # If set to true, kubelet will only invoke the plugin if the pod has a service account.\n # If set to false, kubelet will invoke the plugin even if the pod does not have a service account\n # and will not include a token in the CredentialProviderRequest. 
This is useful for plugins\n # that are used to pull images for pods without service accounts (e.g., static pods).\n # +required\n requireServiceAccount: true\n # requiredServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are required to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. If any of the keys defined in this list\n # are not present in the service account, kubelet will not invoke the plugin and will return an error.\n # This field is optional and may be empty. Plugins may use this field to extract additional information\n # required to fetch credentials or allow workloads to opt in to using service account tokens for image pull.\n # If non-empty, requireServiceAccount must be set to true.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # optionalServiceAccountAnnotationKeys list.\n # +optional\n requiredServiceAccountAnnotationKeys:\n - \"example.com/required-annotation-key-1\"\n - \"example.com/required-annotation-key-2\"\n # optionalServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are optional to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. The plugin is responsible for validating the\n # existence of annotations and their values. 
This field is optional and may be empty.\n # Plugins may use this field to extract additional information required to fetch credentials.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # requiredServiceAccountAnnotationKeys list.\n # +optional\n optionalServiceAccountAnnotationKeys:\n - \"example.com/optional-annotation-key-1\"\n - \"example.com/optional-annotation-key-2\"", "completion": "This yaml code Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider/", "id": "k8s_00133_code_explain_4001"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1\nkind: CredentialProviderConfig\n# providers is a list of credential provider helper plugins that will be enabled by the kubelet.\n# Multiple providers may match against a single image, in which case credentials\n# from all providers will be returned to the kubelet. If multiple providers are called\n# for a single image, the results are combined. If providers return overlapping\n# auth keys, the value from the provider earlier in this list is used.\nproviders:\n # name is the required name of the credential provider. It must match the name of the\n # provider executable as seen by the kubelet. The executable must be in the kubelet's\n # bin directory (set by the --image-credential-provider-bin-dir flag).\n - name: ecr-credential-provider\n # matchImages is a required list of strings used to match against images in order to\n # determine if this provider should be invoked. 
If one of the strings matches the\n # requested image from the kubelet, the plugin will be invoked and given a chance\n # to provide credentials. Images are expected to contain the registry domain\n # and URL path.\n #\n # Each entry in matchImages is a pattern which can optionally contain a port and a path.\n # Globs can be used in the domain, but not in the port or the path. Globs are supported\n # as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\n # Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match\n # a single subdomain segment, so `*.io` does **not** match `*.k8s.io`.\n #\n # A match exists between an image and a matchImage when all of the below are true:\n # - Both contain the same number of domain parts and each part matches.\n # - The URL path of a matchImages entry must be a prefix of the target image URL path.\n # - If the matchImages contains a port, then the port must match in the image as well.\n #\n # Example values of matchImages:\n # - 123456789.dkr.ecr.us-east-1.amazonaws.com\n # - *.azurecr.io\n # - gcr.io\n # - *.*.registry.io\n # - registry.io:8080/path\n matchImages:\n - \"*.dkr.ecr.*.amazonaws.com\"\n - \"*.dkr.ecr.*.amazonaws.com.cn\"\n - \"*.dkr.ecr-fips.*.amazonaws.com\"\n - \"*.dkr.ecr.us-iso-east-1.c2s.ic.gov\"\n - \"*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov\"\n # defaultCacheDuration is the default duration the plugin will cache credentials in-memory\n # if a cache duration is not provided in the plugin response. This field is required.\n defaultCacheDuration: \"12h\"\n # Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\n # MUST use the same encoding version as the input. 
Current supported values are:\n # - credentialprovider.kubelet.k8s.io/v1\n apiVersion: credentialprovider.kubelet.k8s.io/v1\n # Arguments to pass to the command when executing it.\n # +optional\n # args:\n # - --example-argument\n # Env defines additional environment variables to expose to the process. These\n # are unioned with the host's environment, as well as variables client-go uses\n # to pass arguments to the plugin.\n # +optional", "completion": " env:\n - name: AWS_PROFILE\n value: example_profile\n\n # tokenAttributes is the configuration for the service account token that will be passed to the plugin.\n # The credential provider opts in to using service account tokens for image pull by setting this field.\n # If this field is set without the `KubeletServiceAccountTokenForCredentialProviders` feature gate enabled, \n # kubelet will fail to start with an invalid configuration error.\n # +optional\n tokenAttributes:\n # serviceAccountTokenAudience is the intended audience for the projected service account token.\n # +required\n serviceAccountTokenAudience: \"\"\n # cacheType indicates the type of cache key used for caching the credentials returned by the plugin\n # when the service account token is used.\n # The most conservative option is to set this to \"Token\", which means the kubelet will cache\n # returned credentials on a per-token basis. This should be set if the returned credential's\n # lifetime is limited to the service account token's lifetime.\n # If the plugin's credential retrieval logic depends only on the service account and not on\n # pod-specific claims, then the plugin can set this to \"ServiceAccount\". In this case, the\n # kubelet will cache returned credentials on a per-serviceaccount basis. 
Use this when the\n # returned credential is valid for all pods using the same service account.\n # +required\n cacheType: \"\"\n # requireServiceAccount indicates whether the plugin requires the pod to have a service account.\n # If set to true, kubelet will only invoke the plugin if the pod has a service account.\n # If set to false, kubelet will invoke the plugin even if the pod does not have a service account\n # and will not include a token in the CredentialProviderRequest. This is useful for plugins\n # that are used to pull images for pods without service accounts (e.g., static pods).\n # +required\n requireServiceAccount: true\n # requiredServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are required to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. If any of the keys defined in this list\n # are not present in the service account, kubelet will not invoke the plugin and will return an error.\n # This field is optional and may be empty. 
Plugins may use this field to extract additional information\n # required to fetch credentials or allow workloads to opt in to using service account tokens for image pull.\n # If non-empty, requireServiceAccount must be set to true.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # optionalServiceAccountAnnotationKeys list.\n # +optional\n requiredServiceAccountAnnotationKeys:\n - \"example.com/required-annotation-key-1\"\n - \"example.com/required-annotation-key-2\"\n # optionalServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are optional to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. The plugin is responsible for validating the\n # existence of annotations and their values. This field is optional and may be empty.\n # Plugins may use this field to extract additional information required to fetch credentials.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # requiredServiceAccountAnnotationKeys list.\n # +optional\n optionalServiceAccountAnnotationKeys:\n - \"example.com/optional-annotation-key-1\"\n - \"example.com/optional-annotation-key-2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. 
Each entry has a few required fields:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider/", "id": "k8s_00133_code_completion_4002"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1\nkind: CredentialProviderConfig\n# providers is a list of credential provider helper plugins that will be enabled by the kubelet.\n# Multiple providers may match against a single image, in which case credentials\n# from all providers will be returned to the kubelet. If multiple providers are called\n# for a single image, the results are combined. If providers return overlapping\n# auth keys, the value from the provider earlier in this list is used.\nproviders:\n # name is the required name of the credential provider. It must match the name of the\n # provider executable as seen by the kubelet. The executable must be in the kubelet's\n # bin directory (set by the --image-credential-provider-bin-dir flag).\n - name: ecr-credential-provider\n # matchImages is a required list of strings used to match against images in order to\n # determine if this provider should be invoked. If one of the strings matches the\n # requested image from the kubelet, the plugin will be invoked and given a chance\n # to provide credentials. Images are expected to contain the registry domain\n # and URL path.\n #\n # Each entry in matchImages is a pattern which can optionally contain a port and a path.\n # Globs can be used in the domain, but not in the port or the path. Globs are supported\n # as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\n # Matching partial subdomains like 'app*.k8s.io' is also supported. 
Each glob can only match\n # a single subdomain segment, so `*.io` does **not** match `*.k8s.io`.\n #\n # A match exists between an image and a matchImage when all of the below are true:\n # - Both contain the same number of domain parts and each part matches.\n # - The URL path of a matchImages entry must be a prefix of the target image URL path.\n # - If the matchImages contains a port, then the port must match in the image as well.\n #\n # Example values of matchImages:\n # - 123456789.dkr.ecr.us-east-1.amazonaws.com\n # - *.azurecr.io\n # - gcr.io\n # - *.*.registry.io\n # - registry.io:8080/path\n matchImages:\n - \"*.dkr.ecr.*.amazonaws.com\"\n - \"*.dkr.ecr.*.amazonaws.com.cn\"\n - \"*.dkr.ecr-fips.*.amazonaws.com\"\n - \"*.dkr.ecr.us-iso-east-1.c2s.ic.gov\"\n - \"*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov\"\n # defaultCacheDuration is the default duration the plugin will cache credentials in-memory\n # if a cache duration is not provided in the plugin response. This field is required.\n defaultCacheDuration: \"12h\"\n # Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\n # MUST use the same encoding version as the input. Current supported values are:\n # - credentialprovider.kubelet.k8s.io/v1\n apiVersion: credentialprovider.kubelet.k8s.io/v1\n # Arguments to pass to the command when executing it.\n # +optional\n # args:\n # - --example-argument\n # Env defines additional environment variables to expose to the process. 
These\n # are unioned with the host's environment, as well as variables client-go uses\n # to pass arguments to the plugin.\n # +optional\n env:\n - name: AWS_PROFILE\n value: example_profile\n\n # tokenAttributes is the configuration for the service account token that will be passed to the plugin.\n # The credential provider opts in to using service account tokens for image pull by setting this field.\n # If this field is set without the `KubeletServiceAccountTokenForCredentialProviders` feature gate enabled, \n # kubelet will fail to start with an invalid configuration error.\n # +optional\n tokenAttributes:\n # serviceAccountTokenAudience is the intended audience for the projected service account token.\n # +required\n serviceAccountTokenAudience: \"\"\n # cacheType indicates the type of cache key used for caching the credentials returned by the plugin\n # when the service account token is used.\n # The most conservative option is to set this to \"Token\", which means the kubelet will cache\n # returned credentials on a per-token basis. This should be set if the returned credential's\n # lifetime is limited to the service account token's lifetime.\n # If the plugin's credential retrieval logic depends only on the service account and not on\n # pod-specific claims, then the plugin can set this to \"ServiceAccount\". In this case, the\n # kubelet will cache returned credentials on a per-serviceaccount basis. Use this when the\n # returned credential is valid for all pods using the same service account.\n # +required\n cacheType: \"\"\n # requireServiceAccount indicates whether the plugin requires the pod to have a service account.\n # If set to true, kubelet will only invoke the plugin if the pod has a service account.\n # If set to false, kubelet will invoke the plugin even if the pod does not have a service account\n # and will not include a token in the CredentialProviderRequest. 
This is useful for plugins\n # that are used to pull images for pods without service accounts (e.g., static pods).\n # +required\n requireServiceAccount: true\n # requiredServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are required to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. If any of the keys defined in this list\n # are not present in the service account, kubelet will not invoke the plugin and will return an error.\n # This field is optional and may be empty. Plugins may use this field to extract additional information\n # required to fetch credentials or allow workloads to opt in to using service account tokens for image pull.\n # If non-empty, requireServiceAccount must be set to true.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # optionalServiceAccountAnnotationKeys list.\n # +optional\n requiredServiceAccountAnnotationKeys:\n - \"example.com/required-annotation-key-1\"\n - \"example.com/required-annotation-key-2\"\n # optionalServiceAccountAnnotationKeys is the list of annotation keys that the plugin is interested in\n # and that are optional to be present in the service account.\n # The keys defined in this list will be extracted from the corresponding service account and passed\n # to the plugin as part of the CredentialProviderRequest. The plugin is responsible for validating the\n # existence of annotations and their values. 
This field is optional and may be empty.\n # Plugins may use this field to extract additional information required to fetch credentials.\n # The keys defined in this list must be unique and not overlap with the keys defined in the\n # requiredServiceAccountAnnotationKeys list.\n # +optional\n optionalServiceAccountAnnotationKeys:\n - \"example.com/optional-annotation-key-1\"\n - \"example.com/optional-annotation-key-2\"", "completion": "This yaml code Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a kubelet image credential provider | The providers field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider/", "id": "k8s_00133_code_explain_4003"} {"prompt": "Explain this text code:\n\nClient Version: version.Info{Major:\"1\", Minor:\"27\", GitVersion:\"v1.27.4\",GitCommit:\"fa3d7990104d7c1f16943a67f11b154b71f6a132\", GitTreeState:\"clean\",BuildDate:\"2023-07-19T12:20:54Z\", GoVersion:\"go1.20.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v5.0.1\nServer Version: version.Info{Major:\"1\", Minor:\"27\", GitVersion:\"v1.27.3\",GitCommit:\"25b4e43193bcda6c7328a6d147b1fb73a33f1598\", GitTreeState:\"clean\",BuildDate:\"2023-06-14T09:47:40Z\", GoVersion:\"go1.20.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}", "completion": "This text code Troubleshooting kubectl | If you see Unable to connect to the server: dial tcp :8443: i/o timeout, instead of Server Version, you need to troubleshoot kubectl connectivity with your cluster.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubectl | If you see Unable to connect to the server: dial tcp :8443: i/o timeout, instead of Server Version, you need to 
troubleshoot kubectl connectivity with your cluster.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4004"} {"prompt": "Explain this bash code:\n\nkubectl config view --flatten --output 'jsonpath={.clusters[0].cluster.certificate-authority-data}' | base64 -d | openssl x509 -noout -dates", "completion": "This bash code Troubleshooting kubectl | output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4005"} {"prompt": "Explain this text code:\n\nnotBefore=Feb 13 05:57:47 2024 GMT\nnotAfter=Feb 10 06:02:47 2034 GMT", "completion": "This text code Troubleshooting kubectl | output:", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4006"} {"prompt": "Explain this bash code:\n\nkubectl config view --flatten --output 'jsonpath={.users[0].user.client-certificate-data}'| base64 -d | openssl x509 -noout -dates", "completion": "This bash code Troubleshooting kubectl | output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4007"} {"prompt": "Explain this text code:\n\nnotBefore=Feb 13 05:57:47 2024 GMT\nnotAfter=Feb 12 06:02:50 2025 GMT", "completion": "This text code Troubleshooting kubectl | Some kubectl authentication helpers provide easy access to Kubernetes clusters. 
If you have used such helpers and are facing connectivity issues, ensure that the necessary configurations are still pre", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubectl | Some kubectl authentication helpers provide easy access to Kubernetes clusters. If you have used such helpers and are facing connectivity issues, ensure that the necessary configurations are still pre", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4008"} {"prompt": "Explain this console code:\n\nClient Version: version.Info{Major:\"1\", Minor:\"27\", GitVersion:\"v1.27.4\",GitCommit:\"fa3d7990104d7c1f16943a67f11b154b71f6a132\", GitTreeState:\"clean\",BuildDate:\"2023-07-19T12:20:54Z\", GoVersion:\"go1.20.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v5.0.1\nServer Version: version.Info{Major:\"1\", Minor:\"27\", GitVersion:\"v1.27.3\",GitCommit:\"25b4e43193bcda6c7328a6d147b1fb73a33f1598\", GitTreeState:\"clean\",BuildDate:\"2023-06-14T09:47:40Z\", GoVersion:\"go1.20.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}", "completion": "This console code Troubleshooting kubectl | If you see Unable to connect to the server: dial tcp :8443: i/o timeout, instead of Server Version, you need to troubleshoot kubectl connectivity with your cluster.", "language": "console", "type": "code-example_explanation", "context": "Troubleshooting kubectl | If you see Unable to connect to the server: dial tcp :8443: i/o timeout, instead of Server Version, you need to troubleshoot kubectl connectivity with your cluster.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4009"} {"prompt": "Explain this text code:\n\nUnable to connect to the server: dial tcp :8443: i/o timeout", "completion": "This text code Troubleshooting kubectl | Make sure you have installed the kubectl by following the official documentation for 
installing kubectl, and you have properly configured the $PATH environment variable.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubectl | Make sure you have installed the kubectl by following the official documentation for installing kubectl, and you have properly configured the $PATH environment variable.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4010"} {"prompt": "Explain this shell code:\n\nkubectl config view --flatten --output 'jsonpath={.clusters[0].cluster.certificate-authority-data}' | base64 -d | openssl x509 -noout -dates", "completion": "This shell code Troubleshooting kubectl | output:", "language": "shell", "type": "code-example_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4011"} {"prompt": "Explain this console code:\n\nnotBefore=Feb 13 05:57:47 2024 GMT\nnotAfter=Feb 10 06:02:47 2034 GMT", "completion": "This console code Troubleshooting kubectl | output:", "language": "console", "type": "code-example_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4012"} {"prompt": "Explain this shell code:\n\nkubectl config view --flatten --output 'jsonpath={.users[0].user.client-certificate-data}'| base64 -d | openssl x509 -noout -dates", "completion": "This shell code Troubleshooting kubectl | output:", "language": "shell", "type": "code-example_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4013"} {"prompt": "Explain this console code:\n\nnotBefore=Feb 13 05:57:47 2024 GMT\nnotAfter=Feb 12 06:02:50 2025 GMT", "completion": "This console code 
Troubleshooting kubectl | Some kubectl authentication helpers provide easy access to Kubernetes clusters. If you have used such helpers and are facing connectivity issues, ensure that the necessary configurations are still pre", "language": "console", "type": "code-example_explanation", "context": "Troubleshooting kubectl | Some kubectl authentication helpers provide easy access to Kubernetes clusters. If you have used such helpers and are facing connectivity issues, ensure that the necessary configurations are still pre", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4014"} {"prompt": "Explain this text code:\n\nClient Version: version.Info{Major:\"1\", Minor:\"27\", GitVersion:\"v1.27.4\",GitCommit:\"fa3d7990104d7c1f16943a67f11b154b71f6a132\", GitTreeState:\"clean\",BuildDate:\"2023-07-19T12:20:54Z\", GoVersion:\"go1.20.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v5.0.1\nServer Version: version.Info{Major:\"1\", Minor:\"27\", GitVersion:\"v1.27.3\",GitCommit:\"25b4e43193bcda6c7328a6d147b1fb73a33f1598\", GitTreeState:\"clean\",BuildDate:\"2023-06-14T09:47:40Z\", GoVersion:\"go1.20.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}", "completion": "This text code Troubleshooting kubectl | If you see Unable to connect to the server: dial tcp :8443: i/o timeout, instead of Server Version, you need to troubleshoot kubectl connectivity with your cluster.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubectl | If you see Unable to connect to the server: dial tcp :8443: i/o timeout, instead of Server Version, you need to troubleshoot kubectl connectivity with your cluster.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4015"} {"prompt": "Explain this bash code:\n\nkubectl config view --flatten --output 'jsonpath={.clusters[0].cluster.certificate-authority-data}' | 
base64 -d | openssl x509 -noout -dates", "completion": "This bash code Troubleshooting kubectl | output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4016"} {"prompt": "Explain this text code:\n\nnotBefore=Feb 13 05:57:47 2024 GMT\nnotAfter=Feb 10 06:02:47 2034 GMT", "completion": "This text code Troubleshooting kubectl | output:", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4017"} {"prompt": "Explain this bash code:\n\nkubectl config view --flatten --output 'jsonpath={.users[0].user.client-certificate-data}'| base64 -d | openssl x509 -noout -dates", "completion": "This bash code Troubleshooting kubectl | output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Troubleshooting kubectl | output:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4018"} {"prompt": "Explain this text code:\n\nnotBefore=Feb 13 05:57:47 2024 GMT\nnotAfter=Feb 12 06:02:50 2025 GMT", "completion": "This text code Troubleshooting kubectl | Some kubectl authentication helpers provide easy access to Kubernetes clusters. If you have used such helpers and are facing connectivity issues, ensure that the necessary configurations are still pre", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubectl | Some kubectl authentication helpers provide easy access to Kubernetes clusters. 
If you have used such helpers and are facing connectivity issues, ensure that the necessary configurations are still pre", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/", "id": "k8s_00134_code_explain_4019"} {"prompt": "Explain this text code:\n\nsudo kubeadm init --control-plane-endpoint \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" --upload-certs", "completion": "This text code Creating Highly Available Clusters with kubeadm | You can use the --kubernetes-version flag to set the Kubernetes version to use. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | You can use the --kubernetes-version flag to set the Kubernetes version to use. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4020"} {"prompt": "Complete this text code:\n\n...\nYou can now join any number of control-plane node by running the following command on each as a root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n", "completion": "Please note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", 
"language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4021"} {"prompt": "Explain this text code:\n\n...\nYou can now join any number of control-plane node by running the following command on each as a root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "completion": "This text code Creating Highly Available Clusters with kubeadm | Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Copy this output to a text file. 
You will need it later to join control plane and worker nodes to the cluster.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4022"} {"prompt": "Explain this text code:\n\nsudo kubeadm init phase upload-certs --upload-certs", "completion": "This text code Creating Highly Available Clusters with kubeadm | You can also specify a custom --certificate-key during init that can later be used by join. To generate such a key you can use the following command:", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | You can also specify a custom --certificate-key during init that can later be used by join. To generate such a key you can use the following command:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4023"} {"prompt": "Explain this text code:\n\nsudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07", "completion": "This text code Creating Highly Available Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": 
"k8s_00136_code_explain_4024"} {"prompt": "Complete this text code:\n\nexport CONTROL_PLANE=\"ubuntu@10.0.0.7\"\nscp /etc/kubernetes/pki/etcd/ca.crt \"${CONTROL_PLANE}\":", "completion": "scp /etc/kubernetes/pki/apiserver-etcd-client.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.key \"${CONTROL_PLANE}\":", "language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4025"} {"prompt": "Explain this text code:\n\nexport CONTROL_PLANE=\"ubuntu@10.0.0.7\"\nscp /etc/kubernetes/pki/etcd/ca.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.key \"${CONTROL_PLANE}\":", "completion": "This text code Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4026"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: stable\ncontrolPlaneEndpoint: \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" # change this (see below)\netcd:\n external:", "completion": " endpoints:\n - https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately\n - https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately\n - https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n keyFile: 
/etc/kubernetes/pki/apiserver-etcd-client.key", "language": "yaml", "type": "kubernetes-manifest", "context": "Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4027"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: stable\ncontrolPlaneEndpoint: \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" # change this (see below)\netcd:\n external:\n endpoints:\n - https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately\n - https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately\n - https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key", "completion": "This yaml code Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4028"} {"prompt": "Explain this text code:\n\nsudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "completion": "This text code Creating Highly Available Clusters with kubeadm | If you choose to not use kubeadm init with the --upload-certs flag this means that you are going to have to manually copy the certificates from the primary 
control plane node to the joining control pl", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | If you choose to not use kubeadm init with the --upload-certs flag this means that you are going to have to manually copy the certificates from the primary control plane node to the joining control pl", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4029"} {"prompt": "Complete this text code:\n\nUSER=ubuntu # customizable\nCONTROL_PLANE_IPS=\"10.0.0.7 10.0.0.8\"\nfor host in ${CONTROL_PLANE_IPS}; do\n scp /etc/kubernetes/pki/ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.key \"${USER}\"@$host:", "completion": " scp /etc/kubernetes/pki/sa.pub \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/etcd/ca.crt \"${USER}\"@$host:etcd-ca.crt\n # Skip the next line if you are using external etcd\n scp /etc/kubernetes/pki/etcd/ca.key \"${USER}\"@$host:etcd-ca.key\ndone", "language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. 
This script will move the previously copied certificates from the home directory to /etc/kuber", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4030"} {"prompt": "Explain this text code:\n\nUSER=ubuntu # customizable\nCONTROL_PLANE_IPS=\"10.0.0.7 10.0.0.8\"\nfor host in ${CONTROL_PLANE_IPS}; do\n scp /etc/kubernetes/pki/ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.pub \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/etcd/ca.crt \"${USER}\"@$host:etcd-ca.crt\n # Skip the next line if you are using external etcd\n scp /etc/kubernetes/pki/etcd/ca.key \"${USER}\"@$host:etcd-ca.key\ndone", "completion": "This text code Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. This script will move the previously copied certificates from the home directory to /etc/kuber", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. 
This script will move the previously copied certificates from the home directory to /etc/kuber", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4031"} {"prompt": "Complete this text code:\n\nUSER=ubuntu # customizable\nmkdir -p /etc/kubernetes/pki/etcd\nmv /home/${USER}/ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/ca.key /etc/kubernetes/pki/\nmv /home/${USER}/sa.pub /etc/kubernetes/pki/", "completion": "mv /home/${USER}/sa.key /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/\nmv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt\n# Skip the next line if you are using external etcd\nmv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key", "language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm |", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4032"} {"prompt": "Explain this text code:\n\nUSER=ubuntu # customizable\nmkdir -p /etc/kubernetes/pki/etcd\nmv /home/${USER}/ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/ca.key /etc/kubernetes/pki/\nmv /home/${USER}/sa.pub /etc/kubernetes/pki/\nmv /home/${USER}/sa.key /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/\nmv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt\n# Skip the next line if you are using external etcd\nmv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key", "completion": "This text code Creating Highly Available Clusters with kubeadm |", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm |", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": 
"k8s_00136_code_explain_4033"} {"prompt": "Explain this sh code:\n\nsudo kubeadm init --control-plane-endpoint \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" --upload-certs", "completion": "This sh code Creating Highly Available Clusters with kubeadm | You can use the --kubernetes-version flag to set the Kubernetes version to use. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | You can use the --kubernetes-version flag to set the Kubernetes version to use. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4034"} {"prompt": "Complete this sh code:\n\n...\nYou can now join any number of control-plane node by running the following command on each as a root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n", "completion": "Please note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "language": "sh", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Copy this output to a text file. 
You will need it later to join control plane and worker nodes to the cluster.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4035"} {"prompt": "Explain this sh code:\n\n...\nYou can now join any number of control-plane node by running the following command on each as a root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "completion": "This sh code Creating Highly Available Clusters with kubeadm | Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4036"} {"prompt": "Explain this sh code:\n\nsudo kubeadm init phase upload-certs --upload-certs", "completion": "This sh code Creating Highly Available Clusters with kubeadm | You can also specify a custom --certificate-key during init that can later be used by join. 
To generate such a key you can use the following command:", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | You can also specify a custom --certificate-key during init that can later be used by join. To generate such a key you can use the following command:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4037"} {"prompt": "Explain this sh code:\n\nsudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07", "completion": "This sh code Creating Highly Available Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4038"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system rollout restart deployment coredns", "completion": "This bash code Creating Highly Available Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating Highly Available 
Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4039"} {"prompt": "Complete this sh code:\n\nexport CONTROL_PLANE=\"ubuntu@10.0.0.7\"\nscp /etc/kubernetes/pki/etcd/ca.crt \"${CONTROL_PLANE}\":", "completion": "scp /etc/kubernetes/pki/apiserver-etcd-client.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.key \"${CONTROL_PLANE}\":", "language": "sh", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4040"} {"prompt": "Explain this sh code:\n\nexport CONTROL_PLANE=\"ubuntu@10.0.0.7\"\nscp /etc/kubernetes/pki/etcd/ca.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.key \"${CONTROL_PLANE}\":", "completion": "This sh code Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4041"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: stable\ncontrolPlaneEndpoint: \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" # change this (see below)\netcd:\n external:", 
"completion": " endpoints:\n - https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately\n - https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately\n - https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key", "language": "yaml", "type": "kubernetes-manifest", "context": "Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4042"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: stable\ncontrolPlaneEndpoint: \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" # change this (see below)\netcd:\n external:\n endpoints:\n - https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately\n - https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately\n - https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key", "completion": "This yaml code Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4043"} {"prompt": "Explain this text code:\n\nsudo kubeadm init --config kubeadm-config.yaml --upload-certs", "completion": "This text 
code Creating Highly Available Clusters with kubeadm | Write the output join commands that are returned to a text file for later use.", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Write the output join commands that are returned to a text file for later use.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4044"} {"prompt": "Explain this sh code:\n\nsudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "completion": "This sh code Creating Highly Available Clusters with kubeadm | If you choose to not use kubeadm init with the --upload-certs flag this means that you are going to have to manually copy the certificates from the primary control plane node to the joining control pl", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | If you choose to not use kubeadm init with the --upload-certs flag this means that you are going to have to manually copy the certificates from the primary control plane node to the joining control pl", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4045"} {"prompt": "Complete this sh code:\n\nUSER=ubuntu # customizable\nCONTROL_PLANE_IPS=\"10.0.0.7 10.0.0.8\"\nfor host in ${CONTROL_PLANE_IPS}; do\n scp /etc/kubernetes/pki/ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.key \"${USER}\"@$host:", "completion": " scp /etc/kubernetes/pki/sa.pub \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/etcd/ca.crt \"${USER}\"@$host:etcd-ca.crt\n # Skip 
the next line if you are using external etcd\n scp /etc/kubernetes/pki/etcd/ca.key \"${USER}\"@$host:etcd-ca.key\ndone", "language": "sh", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. This script will move the previously copied certificates from the home directory to /etc/kuber", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4046"} {"prompt": "Explain this sh code:\n\nUSER=ubuntu # customizable\nCONTROL_PLANE_IPS=\"10.0.0.7 10.0.0.8\"\nfor host in ${CONTROL_PLANE_IPS}; do\n scp /etc/kubernetes/pki/ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.pub \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/etcd/ca.crt \"${USER}\"@$host:etcd-ca.crt\n # Skip the next line if you are using external etcd\n scp /etc/kubernetes/pki/etcd/ca.key \"${USER}\"@$host:etcd-ca.key\ndone", "completion": "This sh code Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. This script will move the previously copied certificates from the home directory to /etc/kuber", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. 
This script will move the previously copied certificates from the home directory to /etc/kuber", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4047"} {"prompt": "Complete this sh code:\n\nUSER=ubuntu # customizable\nmkdir -p /etc/kubernetes/pki/etcd\nmv /home/${USER}/ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/ca.key /etc/kubernetes/pki/\nmv /home/${USER}/sa.pub /etc/kubernetes/pki/", "completion": "mv /home/${USER}/sa.key /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/\nmv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt\n# Skip the next line if you are using external etcd\nmv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key", "language": "sh", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm |", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4048"} {"prompt": "Explain this sh code:\n\nUSER=ubuntu # customizable\nmkdir -p /etc/kubernetes/pki/etcd\nmv /home/${USER}/ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/ca.key /etc/kubernetes/pki/\nmv /home/${USER}/sa.pub /etc/kubernetes/pki/\nmv /home/${USER}/sa.key /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/\nmv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt\n# Skip the next line if you are using external etcd\nmv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key", "completion": "This sh code Creating Highly Available Clusters with kubeadm |", "language": "sh", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm |", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4049"} 
{"prompt": "Explain this text code:\n\nsudo kubeadm init --control-plane-endpoint \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" --upload-certs", "completion": "This text code Creating Highly Available Clusters with kubeadm | You can use the --kubernetes-version flag to set the Kubernetes version to use. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | You can use the --kubernetes-version flag to set the Kubernetes version to use. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4050"} {"prompt": "Complete this text code:\n\n...\nYou can now join any number of control-plane node by running the following command on each as a root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n", "completion": "Please note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Copy this output to a text file. 
You will need it later to join control plane and worker nodes to the cluster.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4051"} {"prompt": "Explain this text code:\n\n...\nYou can now join any number of control-plane node by running the following command on each as a root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "completion": "This text code Creating Highly Available Clusters with kubeadm | Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4052"} {"prompt": "Explain this text code:\n\nsudo kubeadm init phase upload-certs --upload-certs", "completion": "This text code Creating Highly Available Clusters with kubeadm | You can also specify a custom --certificate-key during init that can later be used by join. 
To generate such a key you can use the following command:", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | You can also specify a custom --certificate-key during init that can later be used by join. To generate such a key you can use the following command:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4053"} {"prompt": "Explain this text code:\n\nsudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07", "completion": "This text code Creating Highly Available Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first, and you should pass the etcd information in the kub", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4054"} {"prompt": "Complete this text code:\n\nexport CONTROL_PLANE=\"ubuntu@10.0.0.7\"\nscp /etc/kubernetes/pki/etcd/ca.crt \"${CONTROL_PLANE}\":", "completion": "scp /etc/kubernetes/pki/apiserver-etcd-client.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.key \"${CONTROL_PLANE}\":", "language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following 
contents:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4055"} {"prompt": "Explain this text code:\n\nexport CONTROL_PLANE=\"ubuntu@10.0.0.7\"\nscp /etc/kubernetes/pki/etcd/ca.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.crt \"${CONTROL_PLANE}\":\nscp /etc/kubernetes/pki/apiserver-etcd-client.key \"${CONTROL_PLANE}\":", "completion": "This text code Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Create a file called kubeadm-config.yaml with the following contents:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4056"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: stable\ncontrolPlaneEndpoint: \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" # change this (see below)\netcd:\n external:", "completion": " endpoints:\n - https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately\n - https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately\n - https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key", "language": "yaml", "type": "kubernetes-manifest", "context": "Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4057"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: 
ClusterConfiguration\nkubernetesVersion: stable\ncontrolPlaneEndpoint: \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" # change this (see below)\netcd:\n external:\n endpoints:\n - https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately\n - https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately\n - https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately\n caFile: /etc/kubernetes/pki/etcd/ca.crt\n certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key", "completion": "This yaml code Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Creating Highly Available Clusters with kubeadm | Replace the following variables in the config template with the appropriate values for your cluster:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4058"} {"prompt": "Explain this text code:\n\nsudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866", "completion": "This text code Creating Highly Available Clusters with kubeadm | If you choose to not use kubeadm init with the --upload-certs flag this means that you are going to have to manually copy the certificates from the primary control plane node to the joining control pl", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | If you choose to not use kubeadm init with the --upload-certs flag this means that you are going to have to manually copy the certificates from the primary control plane node to the joining control pl", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": 
"k8s_00136_code_explain_4059"} {"prompt": "Complete this text code:\n\nUSER=ubuntu # customizable\nCONTROL_PLANE_IPS=\"10.0.0.7 10.0.0.8\"\nfor host in ${CONTROL_PLANE_IPS}; do\n scp /etc/kubernetes/pki/ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.key \"${USER}\"@$host:", "completion": " scp /etc/kubernetes/pki/sa.pub \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/etcd/ca.crt \"${USER}\"@$host:etcd-ca.crt\n # Skip the next line if you are using external etcd\n scp /etc/kubernetes/pki/etcd/ca.key \"${USER}\"@$host:etcd-ca.key\ndone", "language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. This script will move the previously copied certificates from the home directory to /etc/kuber", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4060"} {"prompt": "Explain this text code:\n\nUSER=ubuntu # customizable\nCONTROL_PLANE_IPS=\"10.0.0.7 10.0.0.8\"\nfor host in ${CONTROL_PLANE_IPS}; do\n scp /etc/kubernetes/pki/ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/sa.pub \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.crt \"${USER}\"@$host:\n scp /etc/kubernetes/pki/front-proxy-ca.key \"${USER}\"@$host:\n scp /etc/kubernetes/pki/etcd/ca.crt \"${USER}\"@$host:etcd-ca.crt\n # Skip the next line if you are using external etcd\n scp /etc/kubernetes/pki/etcd/ca.key \"${USER}\"@$host:etcd-ca.key\ndone", "completion": "This text code Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the 
following script before running kubeadm join. This script will move the previously copied certificates from the home directory to /etc/kuber", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm | Then on each joining control plane node you have to run the following script before running kubeadm join. This script will move the previously copied certificates from the home directory to /etc/kuber", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4061"} {"prompt": "Complete this text code:\n\nUSER=ubuntu # customizable\nmkdir -p /etc/kubernetes/pki/etcd\nmv /home/${USER}/ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/ca.key /etc/kubernetes/pki/\nmv /home/${USER}/sa.pub /etc/kubernetes/pki/", "completion": "mv /home/${USER}/sa.key /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/\nmv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt\n# Skip the next line if you are using external etcd\nmv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key", "language": "text", "type": "code-example", "context": "Creating Highly Available Clusters with kubeadm |", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_completion_4062"} {"prompt": "Explain this text code:\n\nUSER=ubuntu # customizable\nmkdir -p /etc/kubernetes/pki/etcd\nmv /home/${USER}/ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/ca.key /etc/kubernetes/pki/\nmv /home/${USER}/sa.pub /etc/kubernetes/pki/\nmv /home/${USER}/sa.key /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/\nmv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/\nmv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt\n# Skip the next line if you are using external etcd\nmv /home/${USER}/etcd-ca.key 
/etc/kubernetes/pki/etcd/ca.key", "completion": "This text code Creating Highly Available Clusters with kubeadm |", "language": "text", "type": "code-example_explanation", "context": "Creating Highly Available Clusters with kubeadm |", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/", "id": "k8s_00136_code_explain_4063"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2", "completion": " args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4064"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2\n args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. 
The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4065"} {"prompt": "Explain this text code:\n\n/bin/sh -c \"touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\"", "completion": "This text code Configure Liveness, Readiness and Startup Probes | For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. 
After 30 seconds, cat /tmp/healthy", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4066"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | Within 30 seconds, view the Pod events:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Within 30 seconds, view the Pod events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4067"} {"prompt": "Complete this text code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 11s default-scheduler Successfully assigned default/liveness-exec to node01", "completion": "Normal Pulling 9s kubelet, node01 Pulling image \"registry.k8s.io/busybox:1.27.2\"\nNormal Pulled 7s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 7s kubelet, node01 Created container liveness\nNormal Started 7s kubelet, node01 Started container liveness", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | After 35 seconds, view the Pod events again:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4068"} {"prompt": "Explain this text code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 11s default-scheduler Successfully assigned default/liveness-exec to node01\nNormal Pulling 9s kubelet, node01 Pulling image \"registry.k8s.io/busybox:1.27.2\"\nNormal Pulled 7s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 7s kubelet, node01 Created container 
liveness\nNormal Started 7s kubelet, node01 Started container liveness", "completion": "This text code Configure Liveness, Readiness and Startup Probes | After 35 seconds, view the Pod events again:", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 35 seconds, view the Pod events again:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4069"} {"prompt": "Complete this text code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 57s default-scheduler Successfully assigned default/liveness-exec to node01\nNormal Pulling 55s kubelet, node01 Pulling image \"registry.k8s.io/busybox:1.27.2\"", "completion": "Normal Pulled 53s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 53s kubelet, node01 Created container liveness\nNormal Started 53s kubelet, node01 Started container liveness\nWarning Unhealthy 10s (x3 over 20s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory\nNormal Killing 10s kubelet, node01 Container liveness failed liveness probe, will be restarted", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Wait another 30 seconds, and verify that the container has been restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4070"} {"prompt": "Explain this text code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 57s default-scheduler Successfully assigned default/liveness-exec to node01\nNormal Pulling 55s kubelet, node01 Pulling image \"registry.k8s.io/busybox:1.27.2\"\nNormal Pulled 53s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 
53s kubelet, node01 Created container liveness\nNormal Started 53s kubelet, node01 Started container liveness\nWarning Unhealthy 10s (x3 over 20s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory\nNormal Killing 10s kubelet, node01 Container liveness failed liveness probe, will be restarted", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Wait another 30 seconds, and verify that the container has been restarted:", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Wait another 30 seconds, and verify that the container has been restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4071"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nliveness-exec 1/1 Running 1 1m", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Another kind of liveness probe uses an HTTP GET request. Here is the configuration file for a Pod that runs a container based on the registry.k8s.io/e2e-test-images/agnhost image.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Another kind of liveness probe uses an HTTP GET request. 
Here is the configuration file for a Pod that runs a container based on the registry.k8s.io/e2e-test-images/agnhost image.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4072"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40", "completion": " args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4073"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40\n args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. 
The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4074"} {"prompt": "Complete this text code:\n\nhttp.HandleFunc(\"/healthz\", func(w http.ResponseWriter, r *http.Request) {\n duration := time.Now().Sub(started)\n if duration.Seconds() > 10 {\n w.WriteHeader(500)\n w.Write([]byte(fmt.Sprintf(\"error: %v\", duration.Seconds())))", "completion": " } else {\n w.WriteHeader(200)\n w.Write([]byte(\"ok\"))\n }\n})", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. But after 10 seconds, the health checks will fail, and the kubelet", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4075"} {"prompt": "Explain this text code:\n\nhttp.HandleFunc(\"/healthz\", func(w http.ResponseWriter, r *http.Request) {\n duration := time.Now().Sub(started)\n if duration.Seconds() > 10 {\n w.WriteHeader(500)\n w.Write([]byte(fmt.Sprintf(\"error: %v\", duration.Seconds())))\n } else {\n w.WriteHeader(200)\n w.Write([]byte(\"ok\"))\n }\n})", "completion": "This text code Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. 
But after 10 seconds, the health checks will fail, and the kubelet", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. But after 10 seconds, the health checks will fail, and the kubelet", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4076"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | After 10 seconds, view Pod events to verify that liveness probes have failed and the container has been restarted:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 10 seconds, view Pod events to verify that liveness probes have failed and the container has been restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4077"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1\n ports:", "completion": " - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. 
The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4078"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1\n ports:\n - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will run the first liveness probe 15 seconds after th", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. 
The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4079"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that liveness probes:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that liveness probes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4080"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd", "completion": " image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. 
You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4081"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd\n image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. You can set service to the value liv", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. 
You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4082"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/grpc-liveness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that the liveness check has not failed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that the liveness check has not failed:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4083"} {"prompt": "Complete this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n", "completion": "livenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromisi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4084"} {"prompt": "Explain this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. 
In such cases, it can be tricky to set up liveness probe parameters without compromisi", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromisi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4085"} {"prompt": "Complete this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "completion": " failureThreshold: 1\n periodSeconds: 10\n\nstartupProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 30\n periodSeconds: 10", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provid", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4086"} {"prompt": "Explain this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 1\n periodSeconds: 10\n\nstartupProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 30\n periodSeconds: 10", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. 
Once the startup probe has succeeded once, the liveness probe takes over to provid", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provid", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4087"} {"prompt": "Complete this text code:\n\nreadinessProbe:\n exec:\n command:", "completion": " - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4088"} {"prompt": "Explain this text code:\n\nreadinessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4089"} {"prompt": "Complete this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: application/json", "completion": "\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: MyUserAgent", "language": 
"text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | You can also remove these two headers by defining them with an empty value.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4090"} {"prompt": "Explain this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: application/json\n\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: MyUserAgent", "completion": "This text code Configure Liveness, Readiness and Startup Probes | You can also remove these two headers by defining them with an empty value.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | You can also remove these two headers by defining them with an empty value.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4091"} {"prompt": "Complete this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: \"\"", "completion": "\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: \"\"", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect is to the same host. 
If the kubelet receives 11 or more redirects during probing, the probe is considered successful", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4092"} {"prompt": "Explain this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: \"\"\n\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: \"\"", "completion": "This text code Configure Liveness, Readiness and Startup Probes | When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect is to the same host. If the kubelet receives 11 or more redirects during probing, the probe is considered successful", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect is to the same host. If the kubelet receives 11 or more redirects during probing, the probe is considered successful", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4093"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 29m default-scheduler Successfully assigned default/httpbin-7b8bc9cb85-bjzwn to daocloud", "completion": " Normal Pulling 29m kubelet Pulling image \"docker.io/kennethreitz/httpbin\"\n Normal Pulled 24m kubelet Successfully pulled image \"docker.io/kennethreitz/httpbin\" in 5m12.402735213s\n Normal Created 24m kubelet Created container httpbin\n Normal Started 24m kubelet Started container httpbin\n Warning ProbeWarning 4m11s (x1197 over 24m) kubelet Readiness probe warning: Probe terminated redirects", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | If the kubelet receives a redirect where the 
hostname is different from the request, the outcome of the probe is treated as successful and kubelet creates an event to report the redirect failure.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4094"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 29m default-scheduler Successfully assigned default/httpbin-7b8bc9cb85-bjzwn to daocloud\n Normal Pulling 29m kubelet Pulling image \"docker.io/kennethreitz/httpbin\"\n Normal Pulled 24m kubelet Successfully pulled image \"docker.io/kennethreitz/httpbin\" in 5m12.402735213s\n Normal Created 24m kubelet Created container httpbin\n Normal Started 24m kubelet Started container httpbin\n Warning ProbeWarning 4m11s (x1197 over 24m) kubelet Readiness probe warning: Probe terminated redirects", "completion": "This text code Configure Liveness, Readiness and Startup Probes | If the kubelet receives a redirect where the hostname is different from the request, the outcome of the probe is treated as successful and kubelet creates an event to report the redirect failure.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | If the kubelet receives a redirect where the hostname is different from the request, the outcome of the probe is treated as successful and kubelet creates an event to report the redirect failure.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4095"} {"prompt": "Complete this text code:\n\nspec:\n terminationGracePeriodSeconds: 3600 # pod-level\n containers:\n - name: test\n image: ...\n\n ports:\n - name: liveness-port\n containerPort: 8080", "completion": "\n livenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 1\n periodSeconds: 
60\n # Override pod-level terminationGracePeriodSeconds #\n terminationGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Probe-level terminationGracePeriodSeconds cannot be set for readiness probes. It will be rejected by the API server.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4096"} {"prompt": "Explain this text code:\n\nspec:\n terminationGracePeriodSeconds: 3600 # pod-level\n containers:\n - name: test\n image: ...\n\n ports:\n - name: liveness-port\n containerPort: 8080\n\n livenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 1\n periodSeconds: 60\n # Override pod-level terminationGracePeriodSeconds #\n terminationGracePeriodSeconds: 60", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Probe-level terminationGracePeriodSeconds cannot be set for readiness probes. It will be rejected by the API server.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Probe-level terminationGracePeriodSeconds cannot be set for readiness probes. 
It will be rejected by the API server.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4097"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2", "completion": " args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4098"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2\n args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. 
The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4099"} {"prompt": "Explain this shell code:\n\n/bin/sh -c \"touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\"", "completion": "This shell code Configure Liveness, Readiness and Startup Probes | For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy", "language": "shell", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4100"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml", "completion": "This shell code Configure Liveness, Readiness and Startup Probes | Within 30 seconds, view the Pod events:", "language": "shell", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Within 30 seconds, view the Pod events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4101"} {"prompt": "Complete this none code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 11s default-scheduler Successfully assigned default/liveness-exec to node01", "completion": "Normal Pulling 9s kubelet, node01 Pulling image 
\"registry.k8s.io/busybox:1.27.2\"\nNormal Pulled 7s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 7s kubelet, node01 Created container liveness\nNormal Started 7s kubelet, node01 Started container liveness", "language": "none", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | After 35 seconds, view the Pod events again:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4102"} {"prompt": "Explain this none code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 11s default-scheduler Successfully assigned default/liveness-exec to node01\nNormal Pulling 9s kubelet, node01 Pulling image \"registry.k8s.io/busybox:1.27.2\"\nNormal Pulled 7s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 7s kubelet, node01 Created container liveness\nNormal Started 7s kubelet, node01 Started container liveness", "completion": "This none code Configure Liveness, Readiness and Startup Probes | After 35 seconds, view the Pod events again:", "language": "none", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 35 seconds, view the Pod events again:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4103"} {"prompt": "Complete this none code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 57s default-scheduler Successfully assigned default/liveness-exec to node01\nNormal Pulling 55s kubelet, node01 Pulling image \"registry.k8s.io/busybox:1.27.2\"", "completion": "Normal Pulled 53s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 53s kubelet, node01 Created container liveness\nNormal Started 53s kubelet, node01 
Started container liveness\nWarning Unhealthy 10s (x3 over 20s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory\nNormal Killing 10s kubelet, node01 Container liveness failed liveness probe, will be restarted", "language": "none", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Wait another 30 seconds, and verify that the container has been restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4104"} {"prompt": "Explain this none code:\n\nType Reason Age From Message\n---- ------ ---- ---- -------\nNormal Scheduled 57s default-scheduler Successfully assigned default/liveness-exec to node01\nNormal Pulling 55s kubelet, node01 Pulling image \"registry.k8s.io/busybox:1.27.2\"\nNormal Pulled 53s kubelet, node01 Successfully pulled image \"registry.k8s.io/busybox:1.27.2\"\nNormal Created 53s kubelet, node01 Created container liveness\nNormal Started 53s kubelet, node01 Started container liveness\nWarning Unhealthy 10s (x3 over 20s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory\nNormal Killing 10s kubelet, node01 Container liveness failed liveness probe, will be restarted", "completion": "This none code Configure Liveness, Readiness and Startup Probes | Wait another 30 seconds, and verify that the container has been restarted:", "language": "none", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Wait another 30 seconds, and verify that the container has been restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4105"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nliveness-exec 1/1 Running 1 1m", "completion": "This none code Configure Liveness, Readiness 
and Startup Probes | Another kind of liveness probe uses an HTTP GET request. Here is the configuration file for a Pod that runs a container based on the registry.k8s.io/e2e-test-images/agnhost image.", "language": "none", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Another kind of liveness probe uses an HTTP GET request. Here is the configuration file for a Pod that runs a container based on the registry.k8s.io/e2e-test-images/agnhost image.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4106"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40", "completion": " args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. 
The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4107"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40\n args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4108"} {"prompt": "Complete this go code:\n\nhttp.HandleFunc(\"/healthz\", func(w http.ResponseWriter, r *http.Request) {\n duration := time.Now().Sub(started)\n if duration.Seconds() > 10 {\n w.WriteHeader(500)\n w.Write([]byte(fmt.Sprintf(\"error: %v\", duration.Seconds())))", "completion": " } else {\n w.WriteHeader(200)\n w.Write([]byte(\"ok\"))\n }\n})", "language": "go", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. 
But after 10 seconds, the health checks will fail, and the kubelet", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4109"} {"prompt": "Explain this go code:\n\nhttp.HandleFunc(\"/healthz\", func(w http.ResponseWriter, r *http.Request) {\n duration := time.Now().Sub(started)\n if duration.Seconds() > 10 {\n w.WriteHeader(500)\n w.Write([]byte(fmt.Sprintf(\"error: %v\", duration.Seconds())))\n } else {\n w.WriteHeader(200)\n w.Write([]byte(\"ok\"))\n }\n})", "completion": "This go code Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. But after 10 seconds, the health checks will fail, and the kubelet", "language": "go", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. 
But after 10 seconds, the health checks will fail, and the kubelet", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4110"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml", "completion": "This shell code Configure Liveness, Readiness and Startup Probes | After 10 seconds, view Pod events to verify that liveness probes have failed and the container has been restarted:", "language": "shell", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 10 seconds, view Pod events to verify that liveness probes have failed and the container has been restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4111"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1\n ports:", "completion": " - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. 
The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4112"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1\n ports:\n - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will run the first liveness probe 15 seconds after th", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. 
The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4113"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml", "completion": "This shell code Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that liveness probes:", "language": "shell", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that liveness probes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4114"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd", "completion": " image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. 
You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4115"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd\n image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. You can set service to the value liv", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. 
You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4116"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/grpc-liveness.yaml", "completion": "This shell code Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that the liveness check has not failed:", "language": "shell", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that the liveness check has not failed:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4117"} {"prompt": "Complete this yaml code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n", "completion": "livenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromisi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4118"} {"prompt": "Explain this yaml code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. 
In such cases, it can be tricky to set up liveness probe parameters without compromisi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromisi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4119"} {"prompt": "Complete this yaml code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "completion": " failureThreshold: 1\n periodSeconds: 10\n\nstartupProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 30\n periodSeconds: 10", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provid", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4120"} {"prompt": "Explain this yaml code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 1\n periodSeconds: 10\n\nstartupProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 30\n periodSeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. 
Once the startup probe has succeeded once, the liveness probe takes over to provid", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provid", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4121"} {"prompt": "Complete this yaml code:\n\nreadinessProbe:\n exec:\n command:", "completion": " - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4122"} {"prompt": "Explain this yaml code:\n\nreadinessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4123"} {"prompt": "Complete this yaml code:\n\npods/probe/exec-liveness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2", "completion": " args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4132"} {"prompt": "Explain this yaml code:\n\npods/probe/exec-liveness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2\n args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. 
The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4133"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2", "completion": " args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4134"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-exec\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/busybox:1.27.2\n args:\n - /bin/sh\n - -c\n - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\n livenessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. 
The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4135"} {"prompt": "Explain this text code:\n\n/bin/sh -c \"touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600\"", "completion": "This text code Configure Liveness, Readiness and Startup Probes | For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. 
After 30 seconds, cat /tmp/healthy", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4136"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | Within 30 seconds, view the Pod events:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Within 30 seconds, view the Pod events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4137"} {"prompt": "Complete this yaml code:\n\npods/probe/http-liveness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40", "completion": " args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. 
The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4138"} {"prompt": "Explain this yaml code:\n\npods/probe/http-liveness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40\n args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4139"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40", "completion": " args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. 
The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4140"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n test: liveness\n name: liveness-http\nspec:\n containers:\n - name: liveness\n image: registry.k8s.io/e2e-test-images/agnhost:2.40\n args:\n - liveness\n livenessProbe:\n httpGet:\n path: /healthz\n port: 8080\n httpHeaders:\n - name: Custom-Header\n value: Awesome\n initialDelaySeconds: 3\n periodSeconds: 3", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4141"} {"prompt": "Complete this text code:\n\nhttp.HandleFunc(\"/healthz\", func(w http.ResponseWriter, r *http.Request) {\n duration := time.Now().Sub(started)\n if duration.Seconds() > 10 {\n w.WriteHeader(500)\n w.Write([]byte(fmt.Sprintf(\"error: %v\", duration.Seconds())))", "completion": " } else {\n w.WriteHeader(200)\n w.Write([]byte(\"ok\"))\n }\n})", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. 
So the first couple of health checks will succeed. But after 10 seconds, the health checks will fail, and the kubelet", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4142"} {"prompt": "Explain this text code:\n\nhttp.HandleFunc(\"/healthz\", func(w http.ResponseWriter, r *http.Request) {\n duration := time.Now().Sub(started)\n if duration.Seconds() > 10 {\n w.WriteHeader(500)\n w.Write([]byte(fmt.Sprintf(\"error: %v\", duration.Seconds())))\n } else {\n w.WriteHeader(200)\n w.Write([]byte(\"ok\"))\n }\n})", "completion": "This text code Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. But after 10 seconds, the health checks will fail, and the kubelet", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of health checks will succeed. 
But after 10 seconds, the health checks will fail, and the kubelet", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4143"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | After 10 seconds, view Pod events to verify that liveness probes have failed and the container has been restarted:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 10 seconds, view Pod events to verify that liveness probes have failed and the container has been restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4144"} {"prompt": "Complete this yaml code:\n\npods/probe/tcp-liveness-readiness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1", "completion": " ports:\n - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. 
The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4145"} {"prompt": "Explain this yaml code:\n\npods/probe/tcp-liveness-readiness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1\n ports:\n - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will run the first liveness probe 15 seconds after th", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. 
The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4146"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1\n ports:", "completion": " - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4147"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: goproxy\n labels:\n app: goproxy\nspec:\n containers:\n - name: goproxy\n image: registry.k8s.io/goproxy:0.1\n ports:\n - containerPort: 8080\n readinessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10\n livenessProbe:\n tcpSocket:\n port: 8080\n initialDelaySeconds: 15\n periodSeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. 
The kubelet will run the first liveness probe 15 seconds after th", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will run the first liveness probe 15 seconds after th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4148"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that liveness probes:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that liveness probes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4149"} {"prompt": "Complete this yaml code:\n\npods/probe/grpc-liveness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd", "completion": " image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. 
You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4150"} {"prompt": "Explain this yaml code:\n\npods/probe/grpc-liveness.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd\n image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. You can set service to the value liv", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. 
You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4151"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd", "completion": " image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "language": "yaml", "type": "pod-definition", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4152"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: etcd-with-grpc\nspec:\n containers:\n - name: etcd\n image: registry.k8s.io/etcd:3.5.1-0\n command: [ \"/usr/local/bin/etcd\", \"--data-dir\", \"/var/lib/etcd\", \"--listen-client-urls\", \"http://0.0.0.0:2379\", \"--advertise-client-urls\", \"http://127.0.0.1:2379\", \"--log-level\", \"debug\"]\n ports:\n - containerPort: 2379\n livenessProbe:\n grpc:\n port: 2379\n initialDelaySeconds: 10", "completion": "This yaml code Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. 
You can set service to the value liv", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Liveness, Readiness and Startup Probes | To use a gRPC probe, port must be configured. If you want to distinguish probes of different types and probes for different features you can use the service field. You can set service to the value liv", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4153"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/probe/grpc-liveness.yaml", "completion": "This bash code Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that the liveness check has not failed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Liveness, Readiness and Startup Probes | After 15 seconds, view Pod events to verify that the liveness check has not failed:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4154"} {"prompt": "Complete this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n", "completion": "livenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. 
In such cases, it can be tricky to set up liveness probe parameters without compromisi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4155"} {"prompt": "Explain this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromisi", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Sometimes, you have to deal with applications that require additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromisi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4156"} {"prompt": "Complete this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port", "completion": " failureThreshold: 1\n periodSeconds: 10\n\nstartupProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 30\n periodSeconds: 10", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. 
Once the startup probe has succeeded once, the liveness probe takes over to provid", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4157"} {"prompt": "Explain this text code:\n\nports:\n- name: liveness-port\n containerPort: 8080\n\nlivenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 1\n periodSeconds: 10\n\nstartupProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 30\n periodSeconds: 10", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provid", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. 
Once the startup probe has succeeded once, the liveness probe takes over to provid", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4158"} {"prompt": "Complete this text code:\n\nreadinessProbe:\n exec:\n command:", "completion": " - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4159"} {"prompt": "Explain this text code:\n\nreadinessProbe:\n exec:\n command:\n - cat\n - /tmp/healthy\n initialDelaySeconds: 5\n periodSeconds: 5", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4160"} {"prompt": "Complete this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: application/json", "completion": "\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: MyUserAgent", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | You can also remove these two headers by defining them with an empty value.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": 
"k8s_00137_code_completion_4161"} {"prompt": "Explain this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: application/json\n\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: MyUserAgent", "completion": "This text code Configure Liveness, Readiness and Startup Probes | You can also remove these two headers by defining them with an empty value.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | You can also remove these two headers by defining them with an empty value.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4162"} {"prompt": "Complete this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: \"\"", "completion": "\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: \"\"", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect is to the same host. If the kubelet receives 11 or more redirects during probing, the probe is considered successful", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4163"} {"prompt": "Explain this text code:\n\nlivenessProbe:\n httpGet:\n httpHeaders:\n - name: Accept\n value: \"\"\n\nstartupProbe:\n httpGet:\n httpHeaders:\n - name: User-Agent\n value: \"\"", "completion": "This text code Configure Liveness, Readiness and Startup Probes | When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect is to the same host. 
If the kubelet receives 11 or more redirects during probing, the probe is considered successful", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect is to the same host. If the kubelet receives 11 or more redirects during probing, the probe is considered successful", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4164"} {"prompt": "Complete this text code:\n\nspec:\n terminationGracePeriodSeconds: 3600 # pod-level\n containers:\n - name: test\n image: ...\n\n ports:\n - name: liveness-port\n containerPort: 8080", "completion": "\n livenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 1\n periodSeconds: 60\n # Override pod-level terminationGracePeriodSeconds #\n terminationGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "Configure Liveness, Readiness and Startup Probes | Probe-level terminationGracePeriodSeconds cannot be set for readiness probes. It will be rejected by the API server.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_completion_4165"} {"prompt": "Explain this text code:\n\nspec:\n terminationGracePeriodSeconds: 3600 # pod-level\n containers:\n - name: test\n image: ...\n\n ports:\n - name: liveness-port\n containerPort: 8080\n\n livenessProbe:\n httpGet:\n path: /healthz\n port: liveness-port\n failureThreshold: 1\n periodSeconds: 60\n # Override pod-level terminationGracePeriodSeconds #\n terminationGracePeriodSeconds: 60", "completion": "This text code Configure Liveness, Readiness and Startup Probes | Probe-level terminationGracePeriodSeconds cannot be set for readiness probes. 
It will be rejected by the API server.", "language": "text", "type": "code-example_explanation", "context": "Configure Liveness, Readiness and Startup Probes | Probe-level terminationGracePeriodSeconds cannot be set for readiness probes. It will be rejected by the API server.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/", "id": "k8s_00137_code_explain_4166"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4167"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4168"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " 
containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4169"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4170"} {"prompt": "Explain this text code:\n\nPod \"example-conflict-with-limitrange-cpu\" is invalid: spec.containers[0].resources.requests: Invalid value: \"700m\": must be less than or equal to cpu limit", "completion": "This text code Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "language": "text", "type": "code-example_explanation", "context": "Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4171"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit 
Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4172"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4173"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml", "completion": "This text code Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "text", "type": "code-example_explanation", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4174"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4175"} {"prompt": "Explain this yaml 
code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4176"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml", "completion": "This text code Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "text", "type": "code-example_explanation", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4177"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4178"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code Limit 
Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4179"} {"prompt": "Explain this text code:\n\nPod \"example-conflict-with-limitrange-cpu\" is invalid: spec.containers[0].resources.requests: Invalid value: \"700m\": must be less than or equal to cpu limit", "completion": "This text code Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "language": "text", "type": "code-example_explanation", "context": "Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4180"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml", "completion": "This text code Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "text", "type": "code-example_explanation", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4181"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", 
"id": "k8s_00138_code_completion_4182"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4183"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4184"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Limit 
Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4185"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4186"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4187"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit Ranges 
| then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4188"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4189"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4190"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4191"} {"prompt": 
"Complete this yaml code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4192"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4193"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_completion_4194"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: 
registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/limit-range/", "id": "k8s_00138_code_explain_4195"} {"prompt": "Complete this text code:\n\nCreating cluster \"psa-ns-level\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼 \n ✓ Preparing nodes 📦 \n ✓ Writing configuration 📜 \n ✓ Starting control-plane 🕹️ \n ✓ Installing CNI 🔌 ", "completion": " ✓ Installing StorageClass 💾 \nSet kubectl context to \"kind-psa-ns-level\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-ns-level\n\nNot sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Namespace Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_completion_4196"} {"prompt": "Explain this text code:\n\nCreating cluster \"psa-ns-level\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼 \n ✓ Preparing nodes 📦 \n ✓ Writing configuration 📜 \n ✓ Starting control-plane 🕹️ \n ✓ Installing CNI 🔌 \n ✓ Installing StorageClass 💾 \nSet kubectl context to \"kind-psa-ns-level\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-ns-level\n\nNot sure what to do next? 
😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/", "completion": "This text code Apply Pod Security Standards at the Namespace Level | Set the kubectl context to the new cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4197"} {"prompt": "Complete this text code:\n\nKubernetes control plane is running at https://127.0.0.1:50996\nCoreDNS is running at https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy", "completion": "\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Namespace Level | Create a new namespace called example:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_completion_4198"} {"prompt": "Explain this text code:\n\nKubernetes control plane is running at https://127.0.0.1:50996\nCoreDNS is running at https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "completion": "This text code Apply Pod Security Standards at the Namespace Level | Create a new namespace called example:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Create a new namespace called example:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4199"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/warn=baseline \\\n pod-security.kubernetes.io/warn-version=latest", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | You 
can configure multiple pod security standard checks on any namespace, using labels. The following command will enforce the baseline Pod Security Standard, but warn and audit for restricted Pod Sec", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Namespace Level | You can configure multiple pod security standard checks on any namespace, using labels. The following command will enforce the baseline Pod Security Standard, but warn and audit for restricted Pod Sec", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4200"} {"prompt": "Complete this bash code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/enforce=baseline \\\n pod-security.kubernetes.io/enforce-version=latest \\", "completion": " pod-security.kubernetes.io/warn=restricted \\\n pod-security.kubernetes.io/warn-version=latest \\\n pod-security.kubernetes.io/audit=restricted \\\n pod-security.kubernetes.io/audit-version=latest", "language": "bash", "type": "kubectl-command", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_completion_4201"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/enforce=baseline \\\n pod-security.kubernetes.io/enforce-version=latest \\\n pod-security.kubernetes.io/warn=restricted \\\n pod-security.kubernetes.io/warn-version=latest \\\n pod-security.kubernetes.io/audit=restricted \\\n pod-security.kubernetes.io/audit-version=latest", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example 
namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4202"} {"prompt": "Explain this bash code:\n\nkubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | The Pod does start OK; the output includes a warning. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Namespace Level | The Pod does start OK; the output includes a warning. For example:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4203"} {"prompt": "Explain this text code:\n\nWarning: would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"nginx\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"nginx\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or container \"nginx\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"nginx\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")\npod/nginx created", "completion": "This text code Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the default namespace:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the default namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4204"} {"prompt": "Explain this bash code:\n\nkubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | Output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply 
Pod Security Standards at the Namespace Level | Output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4205"} {"prompt": "Complete this text code:\n\nCreating cluster \"psa-ns-level\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼 \n ✓ Preparing nodes 📦 \n ✓ Writing configuration 📜 \n ✓ Starting control-plane 🕹️ \n ✓ Installing CNI 🔌 ", "completion": " ✓ Installing StorageClass 💾 \nSet kubectl context to \"kind-psa-ns-level\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-ns-level\n\nNot sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Namespace Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_completion_4206"} {"prompt": "Explain this text code:\n\nCreating cluster \"psa-ns-level\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼 \n ✓ Preparing nodes 📦 \n ✓ Writing configuration 📜 \n ✓ Starting control-plane 🕹️ \n ✓ Installing CNI 🔌 \n ✓ Installing StorageClass 💾 \nSet kubectl context to \"kind-psa-ns-level\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-ns-level\n\nNot sure what to do next? 
😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/", "completion": "This text code Apply Pod Security Standards at the Namespace Level | Set the kubectl context to the new cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4207"} {"prompt": "Complete this text code:\n\nKubernetes control plane is running at https://127.0.0.1:50996\nCoreDNS is running at https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy", "completion": "\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Namespace Level | Create a new namespace called example:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_completion_4208"} {"prompt": "Explain this text code:\n\nKubernetes control plane is running at https://127.0.0.1:50996\nCoreDNS is running at https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "completion": "This text code Apply Pod Security Standards at the Namespace Level | Create a new namespace called example:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Create a new namespace called example:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4209"} {"prompt": "Explain this shell code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/warn=baseline \\\n pod-security.kubernetes.io/warn-version=latest", "completion": "This shell code Apply Pod Security Standards at the Namespace Level | 
You can configure multiple pod security standard checks on any namespace, using labels. The following command will enforce the baseline Pod Security Standard, but warn and audit for restricted Pod Sec", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | You can configure multiple pod security standard checks on any namespace, using labels. The following command will enforce the baseline Pod Security Standard, but warn and audit for restricted Pod Sec", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4210"} {"prompt": "Complete this shell code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/enforce=baseline \\\n pod-security.kubernetes.io/enforce-version=latest \\", "completion": " pod-security.kubernetes.io/warn=restricted \\\n pod-security.kubernetes.io/warn-version=latest \\\n pod-security.kubernetes.io/audit=restricted \\\n pod-security.kubernetes.io/audit-version=latest", "language": "shell", "type": "code-example", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_completion_4211"} {"prompt": "Explain this shell code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/enforce=baseline \\\n pod-security.kubernetes.io/enforce-version=latest \\\n pod-security.kubernetes.io/warn=restricted \\\n pod-security.kubernetes.io/warn-version=latest \\\n pod-security.kubernetes.io/audit=restricted \\\n pod-security.kubernetes.io/audit-version=latest", "completion": "This shell code Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example namespace:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example 
namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4212"} {"prompt": "Explain this shell code:\n\nkubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This shell code Apply Pod Security Standards at the Namespace Level | The Pod does start OK; the output includes a warning. For example:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | The Pod does start OK; the output includes a warning. For example:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4213"} {"prompt": "Explain this text code:\n\nWarning: would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"nginx\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"nginx\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or container \"nginx\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"nginx\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")\npod/nginx created", "completion": "This text code Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the default namespace:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the default namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4214"} {"prompt": "Explain this shell code:\n\nkubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This shell code Apply Pod Security Standards at the Namespace Level | Output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Apply 
Pod Security Standards at the Namespace Level | Output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4215"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/warn=baseline \\\n pod-security.kubernetes.io/warn-version=latest", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | You can configure multiple pod security standard checks on any namespace, using labels. The following command will enforce the baseline Pod Security Standard, but warn and audit for restricted Pod Security Standards", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Namespace Level | You can configure multiple pod security standard checks on any namespace, using labels. The following command will enforce the baseline Pod Security Standard, but warn and audit for restricted Pod Security Standards", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4216"} {"prompt": "Complete this bash code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/enforce=baseline \\\n pod-security.kubernetes.io/enforce-version=latest \\", "completion": " pod-security.kubernetes.io/warn=restricted \\\n pod-security.kubernetes.io/warn-version=latest \\\n pod-security.kubernetes.io/audit=restricted \\\n pod-security.kubernetes.io/audit-version=latest", "language": "bash", "type": "kubectl-command", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_completion_4217"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns example \\\n pod-security.kubernetes.io/enforce=baseline \\\n pod-security.kubernetes.io/enforce-version=latest \\\n pod-security.kubernetes.io/warn=restricted \\\n
pod-security.kubernetes.io/warn-version=latest \\\n pod-security.kubernetes.io/audit=restricted \\\n pod-security.kubernetes.io/audit-version=latest", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Create a baseline Pod in the example namespace:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4218"} {"prompt": "Explain this bash code:\n\nkubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | The Pod does start OK; the output includes a warning. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Namespace Level | The Pod does start OK; the output includes a warning. 
For example:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4219"} {"prompt": "Explain this bash code:\n\nkubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This bash code Apply Pod Security Standards at the Namespace Level | Output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Namespace Level | Output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/ns-level-pss/", "id": "k8s_00139_code_explain_4220"} {"prompt": "Complete this text code:\n\nClient-certificate flags:\n --client-certificate=certfile --client-key=keyfile\n \n Bearer token flags:", "completion": " --token=bearer_token\n \n Basic auth flags:\n --username=basic_user --password=basic_password", "language": "text", "type": "code-example", "context": "kubectl config set-credentials | Bearer token and basic auth are mutually exclusive.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/", "id": "k8s_00140_code_completion_4221"} {"prompt": "Explain this text code:\n\nClient-certificate flags:\n --client-certificate=certfile --client-key=keyfile\n \n Bearer token flags:\n --token=bearer_token\n \n Basic auth flags:\n --username=basic_user --password=basic_password", "completion": "This text code kubectl config set-credentials | Bearer token and basic auth are mutually exclusive.", "language": "text", "type": "code-example_explanation", "context": "kubectl config set-credentials | Bearer token and basic auth are mutually exclusive.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/", "id": "k8s_00140_code_explain_4222"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile] 
[--token=bearer_token] [--username=basic_user] [--password=basic_password] [--auth-provider=provider_name] [--auth-provider-arg=key=value] [--exec-command=exec_command] [--exec-api-version=exec_api_version] [--exec-arg=arg] [--exec-env=key=value]", "completion": "This bash code kubectl config set-credentials | Auth provider for the user entry in kubeconfig", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl config set-credentials | Auth provider for the user entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/", "id": "k8s_00140_code_explain_4223"} {"prompt": "Complete this text code:\n\n# Set only the \"client-key\" field on the \"cluster-admin\"\n # entry, without touching other values\n kubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key\n \n # Set basic auth for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif\n \n # Embed client certificate data in the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true\n \n # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --auth-provider=gcp\n \n # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments\n kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar\n ", "completion": " # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-\n \n # Enable new exec auth plugin for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable 
--exec-api-version=client.authentication.k8s.io/v1beta1\n \n # Enable new exec auth plugin for the \"cluster-admin\" entry with interactive mode\n kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never\n \n # Define new exec auth plugin arguments for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2\n \n # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2\n \n # Remove exec auth plugin environment variables for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-env=var-to-remove-", "language": "text", "type": "code-example", "context": "kubectl config set-credentials | Auth provider for the user entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/", "id": "k8s_00140_code_completion_4224"} {"prompt": "Explain this text code:\n\n# Set only the \"client-key\" field on the \"cluster-admin\"\n # entry, without touching other values\n kubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key\n \n # Set basic auth for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif\n \n # Embed client certificate data in the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true\n \n # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --auth-provider=gcp\n \n # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments\n kubectl config set-credentials cluster-admin --auth-provider=oidc 
--auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar\n \n # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-\n \n # Enable new exec auth plugin for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1\n \n # Enable new exec auth plugin for the \"cluster-admin\" entry with interactive mode\n kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never\n \n # Define new exec auth plugin arguments for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2\n \n # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2\n \n # Remove exec auth plugin environment variables for the \"cluster-admin\" entry\n kubectl config set-credentials cluster-admin --exec-env=var-to-remove-", "completion": "This text code kubectl config set-credentials | Auth provider for the user entry in kubeconfig", "language": "text", "type": "code-example_explanation", "context": "kubectl config set-credentials | Auth provider for the user entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/", "id": "k8s_00140_code_explain_4225"} {"prompt": "Complete this text code:\n\n# HELP kubernetes_healthcheck [ALPHA] This metric records the result of a single healthcheck.\n# TYPE kubernetes_healthcheck gauge\nkubernetes_healthcheck{name=\"autoregister-completion\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"autoregister-completion\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"etcd\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"etcd\",type=\"readyz\"} 1", "completion": "kubernetes_healthcheck{name=\"etcd-readiness\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"informer-sync\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"log\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"log\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"ping\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"ping\",type=\"readyz\"} 1", "language": "text", "type": "code-example", "context": "Kubernetes Component SLI Metrics | While the counter data looks like this:", "source": "https://kubernetes.io/docs/reference/instrumentation/slis/", "id": "k8s_00141_code_completion_4231"} {"prompt": "Explain this text code:\n\n# HELP kubernetes_healthcheck [ALPHA] This metric records the result of a single healthcheck.\n# TYPE kubernetes_healthcheck
gauge\nkubernetes_healthcheck{name=\"autoregister-completion\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"autoregister-completion\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"etcd\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"etcd\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"etcd-readiness\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"informer-sync\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"log\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"log\",type=\"readyz\"} 1\nkubernetes_healthcheck{name=\"ping\",type=\"healthz\"} 1\nkubernetes_healthcheck{name=\"ping\",type=\"readyz\"} 1", "completion": "This text code Kubernetes Component SLI Metrics | While the counter data looks like this:", "language": "text", "type": "code-example_explanation", "context": "Kubernetes Component SLI Metrics | While the counter data looks like this:", "source": "https://kubernetes.io/docs/reference/instrumentation/slis/", "id": "k8s_00141_code_explain_4232"} {"prompt": "Complete this text code:\n\n# HELP kubernetes_healthchecks_total [ALPHA] This metric records the results of all healthcheck.\n# TYPE kubernetes_healthchecks_total counter\nkubernetes_healthchecks_total{name=\"autoregister-completion\",status=\"error\",type=\"readyz\"} 1\nkubernetes_healthchecks_total{name=\"autoregister-completion\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"autoregister-completion\",status=\"success\",type=\"readyz\"} 14\nkubernetes_healthchecks_total{name=\"etcd\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"etcd\",status=\"success\",type=\"readyz\"} 15", "completion": "kubernetes_healthchecks_total{name=\"etcd-readiness\",status=\"success\",type=\"readyz\"} 15\nkubernetes_healthchecks_total{name=\"informer-sync\",status=\"error\",type=\"readyz\"} 1\nkubernetes_healthchecks_total{name=\"informer-sync\",status=\"success\",type=\"readyz\"} 
14\nkubernetes_healthchecks_total{name=\"log\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"log\",status=\"success\",type=\"readyz\"} 15\nkubernetes_healthchecks_total{name=\"ping\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"ping\",status=\"success\",type=\"readyz\"} 15", "language": "text", "type": "code-example", "context": "Kubernetes Component SLI Metrics | The component SLIs metrics endpoint is intended to be scraped at a high frequency. Scraping at a high frequency means that you end up with greater granularity of the gauge's signal, which can be then", "source": "https://kubernetes.io/docs/reference/instrumentation/slis/", "id": "k8s_00141_code_completion_4233"} {"prompt": "Explain this text code:\n\n# HELP kubernetes_healthchecks_total [ALPHA] This metric records the results of all healthcheck.\n# TYPE kubernetes_healthchecks_total counter\nkubernetes_healthchecks_total{name=\"autoregister-completion\",status=\"error\",type=\"readyz\"} 1\nkubernetes_healthchecks_total{name=\"autoregister-completion\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"autoregister-completion\",status=\"success\",type=\"readyz\"} 14\nkubernetes_healthchecks_total{name=\"etcd\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"etcd\",status=\"success\",type=\"readyz\"} 15\nkubernetes_healthchecks_total{name=\"etcd-readiness\",status=\"success\",type=\"readyz\"} 15\nkubernetes_healthchecks_total{name=\"informer-sync\",status=\"error\",type=\"readyz\"} 1\nkubernetes_healthchecks_total{name=\"informer-sync\",status=\"success\",type=\"readyz\"} 14\nkubernetes_healthchecks_total{name=\"log\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"log\",status=\"success\",type=\"readyz\"} 15\nkubernetes_healthchecks_total{name=\"ping\",status=\"success\",type=\"healthz\"} 
15\nkubernetes_healthchecks_total{name=\"ping\",status=\"success\",type=\"healthz\"} 15\nkubernetes_healthchecks_total{name=\"ping\",status=\"success\",type=\"readyz\"} 15", "completion": "This text code Kubernetes Component SLI Metrics | The component SLIs metrics endpoint is intended to be scraped at a high frequency. Scraping at a high frequency means that you end up with greater granularity of the gauge's signal, which can be then", "language": "text", "type": "code-example_explanation", "context": "Kubernetes Component SLI Metrics | The component SLIs metrics endpoint is intended to be scraped at a high frequency. Scraping at a high frequency means that you end up with greater granularity of the gauge's signal, which can be then", "source": "https://kubernetes.io/docs/reference/instrumentation/slis/", "id": "k8s_00141_code_explain_4234"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_completion_4239"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver\ndriverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "completion": "This yaml code Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs.
The field driverName must be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_explain_4240"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: silver\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_completion_4241"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: silver\n …", "completion": "This yaml code Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_explain_4242"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_completion_4243"} {"prompt": 
"Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold\ndriverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "completion": "This yaml code Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_explain_4244"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: gold\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_completion_4245"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: gold\n …", "completion": "This yaml code Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/", "id": "k8s_00142_code_explain_4246"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"List\",\n \"items\":[\n {\n \"kind\":\"None\",\n \"metadata\":{\n \"name\":\"127.0.0.1\",\n \"labels\":{\n \"kubernetes.io/hostname\":\"127.0.0.1\"\n }\n },\n \"status\":{\n \"capacity\":{\"cpu\":\"4\"},\n \"addresses\":[{\"type\": \"LegacyHostIP\", \"address\":\"127.0.0.1\"}]\n }\n },\n {\n \"kind\":\"None\",\n \"metadata\":{\"name\":\"127.0.0.2\"},", "completion": " \"status\":{\n \"capacity\":{\"cpu\":\"8\"},\n \"addresses\":[\n {\"type\": \"LegacyHostIP\", \"address\":\"127.0.0.2\"},\n {\"type\": \"another\", \"address\":\"127.0.0.3\"}\n ]\n }\n }\n ],\n \"users\":[\n {\n \"name\": \"myself\",\n \"user\": {}\n },\n {\n \"name\": \"e2e\",\n \"user\": {\"username\": \"admin\", \"password\": \"secret\"}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "JSONPath Support | Examples using kubectl and JSONPath expressions:", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_completion_4263"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"List\",\n \"items\":[\n {\n \"kind\":\"None\",\n \"metadata\":{\n \"name\":\"127.0.0.1\",\n \"labels\":{\n \"kubernetes.io/hostname\":\"127.0.0.1\"\n }\n },\n \"status\":{\n \"capacity\":{\"cpu\":\"4\"},\n \"addresses\":[{\"type\": \"LegacyHostIP\", \"address\":\"127.0.0.1\"}]\n }\n },\n {\n \"kind\":\"None\",\n \"metadata\":{\"name\":\"127.0.0.2\"},\n \"status\":{\n \"capacity\":{\"cpu\":\"8\"},\n \"addresses\":[\n 
{\"type\": \"LegacyHostIP\", \"address\":\"127.0.0.2\"},\n {\"type\": \"another\", \"address\":\"127.0.0.3\"}\n ]\n }\n }\n ],\n \"users\":[\n {\n \"name\": \"myself\",\n \"user\": {}\n },\n {\n \"name\": \"e2e\",\n \"user\": {\"username\": \"admin\", \"password\": \"secret\"}\n }\n ]\n}", "completion": "This json code JSONPath Support | Examples using kubectl and JSONPath expressions:", "language": "json", "type": "api-response_explanation", "context": "JSONPath Support | Examples using kubectl and JSONPath expressions:", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4264"} {"prompt": "Complete this bash code:\n\nkubectl get pods -o json\nkubectl get pods -o=jsonpath='{@}'\nkubectl get pods -o=jsonpath='{.items[0]}'", "completion": "kubectl get pods -o=jsonpath='{.items[0].metadata.name}'\nkubectl get pods -o=jsonpath=\"{.items[*]['metadata.name', 'status.capacity']}\"\nkubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.startTime}{\"\\n\"}{end}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\\.io/hostname}'", "language": "bash", "type": "kubectl-command", "context": "JSONPath Support | Or, with a \"my_pod\" and \"my_namespace\" (adjust these names to your environment):", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_completion_4265"} {"prompt": "Explain this bash code:\n\nkubectl get pods -o json\nkubectl get pods -o=jsonpath='{@}'\nkubectl get pods -o=jsonpath='{.items[0]}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.name}'\nkubectl get pods -o=jsonpath=\"{.items[*]['metadata.name', 'status.capacity']}\"\nkubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.startTime}{\"\\n\"}{end}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\\.io/hostname}'", "completion": "This bash code JSONPath Support | Or, with a \"my_pod\" and \"my_namespace\" (adjust these names to 
your environment):", "language": "bash", "type": "kubectl-command_explanation", "context": "JSONPath Support | Or, with a \"my_pod\" and \"my_namespace\" (adjust these names to your environment):", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4266"} {"prompt": "Explain this bash code:\n\nkubectl get pod/my_pod -n my_namespace -o=jsonpath='{@}'\nkubectl get pod/my_pod -n my_namespace -o=jsonpath='{.metadata.name}'\nkubectl get pod/my_pod -n my_namespace -o=jsonpath='{.status}'", "completion": "This bash code JSONPath Support | On Windows, you must double quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote aro", "language": "bash", "type": "kubectl-command_explanation", "context": "JSONPath Support | On Windows, you must double quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote aro", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4267"} {"prompt": "Explain this bash code:\n\nkubectl get pods -o=jsonpath=\"{range .items[*]}{.metadata.name}{'\\t'}{.status.startTime}{'\\n'}{end}\"\nkubectl get pods -o=jsonpath=\"{range .items[*]}{.metadata.name}{\\\"\\t\\\"}{.status.startTime}{\\\"\\n\\\"}{end}\"", "completion": "This bash code JSONPath Support | JSONPath regular expressions are not supported. If you want to match using regular expressions, you can use a tool such as jq.", "language": "bash", "type": "kubectl-command_explanation", "context": "JSONPath Support | JSONPath regular expressions are not supported. 
If you want to match using regular expressions, you can use a tool such as jq.", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4268"} {"prompt": "Complete this text code:\n\n# kubectl does not support regular expressions for JSONpath output\n# The following command does not work\nkubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'", "completion": "\n# The following command achieves the desired result\nkubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test(\"test-\")).metadata.name'", "language": "text", "type": "code-example", "context": "JSONPath Support |", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_completion_4269"} {"prompt": "Explain this text code:\n\n# kubectl does not support regular expressions for JSONpath output\n# The following command does not work\nkubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'\n\n# The following command achieves the desired result\nkubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test(\"test-\")).metadata.name'", "completion": "This text code JSONPath Support |", "language": "text", "type": "code-example_explanation", "context": "JSONPath Support |", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4270"} {"prompt": "Explain this json code:\n\n{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}", "completion": "This json code JSONPath Support | Examples using kubectl and JSONPath expressions:", "language": "json", "type": "api-response_explanation", "context": "JSONPath Support | Examples using kubectl and JSONPath expressions:", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4273"} 
{"prompt": "Explain this json code:\n\n{.items[0].metadata.labels.kubernetes\\.io/hostname}", "completion": "This json code JSONPath Support | Examples using kubectl and JSONPath expressions:", "language": "json", "type": "api-response_explanation", "context": "JSONPath Support | Examples using kubectl and JSONPath expressions:", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4274"} {"prompt": "Complete this shell code:\n\nkubectl get pods -o json\nkubectl get pods -o=jsonpath='{@}'\nkubectl get pods -o=jsonpath='{.items[0]}'", "completion": "kubectl get pods -o=jsonpath='{.items[0].metadata.name}'\nkubectl get pods -o=jsonpath=\"{.items[*]['metadata.name', 'status.capacity']}\"\nkubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.startTime}{\"\\n\"}{end}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\\.io/hostname}'", "language": "shell", "type": "code-example", "context": "JSONPath Support | Or, with a \"my_pod\" and \"my_namespace\" (adjust these names to your environment):", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_completion_4275"} {"prompt": "Explain this shell code:\n\nkubectl get pods -o json\nkubectl get pods -o=jsonpath='{@}'\nkubectl get pods -o=jsonpath='{.items[0]}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.name}'\nkubectl get pods -o=jsonpath=\"{.items[*]['metadata.name', 'status.capacity']}\"\nkubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.startTime}{\"\\n\"}{end}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\\.io/hostname}'", "completion": "This shell code JSONPath Support | Or, with a \"my_pod\" and \"my_namespace\" (adjust these names to your environment):", "language": "shell", "type": "code-example_explanation", "context": "JSONPath Support | Or, with a \"my_pod\" and \"my_namespace\" (adjust these names to your environment):", 
"source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4276"} {"prompt": "Explain this shell code:\n\nkubectl get pod/my_pod -n my_namespace -o=jsonpath='{@}'\nkubectl get pod/my_pod -n my_namespace -o=jsonpath='{.metadata.name}'\nkubectl get pod/my_pod -n my_namespace -o=jsonpath='{.status}'", "completion": "This shell code JSONPath Support | On Windows, you must double quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote aro", "language": "shell", "type": "code-example_explanation", "context": "JSONPath Support | On Windows, you must double quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote aro", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4277"} {"prompt": "Explain this cmd code:\n\nkubectl get pods -o=jsonpath=\"{range .items[*]}{.metadata.name}{'\\t'}{.status.startTime}{'\\n'}{end}\"\nkubectl get pods -o=jsonpath=\"{range .items[*]}{.metadata.name}{\\\"\\t\\\"}{.status.startTime}{\\\"\\n\\\"}{end}\"", "completion": "This cmd code JSONPath Support | JSONPath regular expressions are not supported. If you want to match using regular expressions, you can use a tool such as jq.", "language": "cmd", "type": "code-example_explanation", "context": "JSONPath Support | JSONPath regular expressions are not supported. 
If you want to match using regular expressions, you can use a tool such as jq.", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4278"} {"prompt": "Complete this shell code:\n\n# kubectl does not support regular expressions for JSONpath output\n# The following command does not work\nkubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'", "completion": "\n# The following command achieves the desired result\nkubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test(\"test-\")).metadata.name'", "language": "shell", "type": "code-example", "context": "JSONPath Support |", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_completion_4279"} {"prompt": "Explain this shell code:\n\n# kubectl does not support regular expressions for JSONpath output\n# The following command does not work\nkubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'\n\n# The following command achieves the desired result\nkubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test(\"test-\")).metadata.name'", "completion": "This shell code JSONPath Support |", "language": "shell", "type": "code-example_explanation", "context": "JSONPath Support |", "source": "https://kubernetes.io/docs/reference/kubectl/jsonpath/", "id": "k8s_00148_code_explain_4280"} {"prompt": "Complete this text code:\n\nURL=https://dl.k8s.io/release/v1.34.0/bin/linux/amd64\nBINARY=kubectl\n\nFILES=(\n \"$BINARY\"\n \"$BINARY.sig\"", "completion": " \"$BINARY.cert\"\n)\n\nfor FILE in \"${FILES[@]}\"; do\n curl -sSfL --retry 3 --retry-delay 3 \"$URL/$FILE\" -o \"$FILE\"\ndone", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4289"} {"prompt": "Explain this text code:\n\nURL=https://dl.k8s.io/release/v1.34.0/bin/linux/amd64\nBINARY=kubectl\n\nFILES=(\n \"$BINARY\"\n \"$BINARY.sig\"\n \"$BINARY.cert\"\n)\n\nfor FILE in \"${FILES[@]}\"; do\n curl -sSfL --retry 3 --retry-delay 3 \"$URL/$FILE\" -o \"$FILE\"\ndone", "completion": "This text code Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4290"} {"prompt": "Complete this text code:\n\ncosign verify-blob \"$BINARY\" \\\n --signature \"$BINARY\".sig \\", "completion": " --certificate \"$BINARY\".cert \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4291"} {"prompt": "Explain this text code:\n\ncosign verify-blob \"$BINARY\" \\\n --signature \"$BINARY\".sig \\\n --certificate \"$BINARY\".cert \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com", "completion": "This text code Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4292"} {"prompt": "Complete this text code:\n\ncosign verify registry.k8s.io/kube-apiserver-amd64:v1.34.0 \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\", "completion": " --certificate-oidc-issuer https://accounts.google.com \\\n | jq .", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4293"} {"prompt": "Explain this text code:\n\ncosign verify registry.k8s.io/kube-apiserver-amd64:v1.34.0 \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .", "completion": "This text code Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "language": "text", "type": "code-example_explanation", "context": "Verify 
Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4294"} {"prompt": "Complete this text code:\n\ncurl -Ls \"https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release\" \\\n | grep \"SPDXID: SPDXRef-Package-registry.k8s.io\" \\\n | grep -v sha256 | cut -d- -f3- | sed 's/-/\\//' | sed 's/-v1/:v1/' \\\n | sort > images.txt\ninput=images.txt\nwhile IFS= read -r image", "completion": "do\n cosign verify \"$image\" \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .\ndone < \"$input\"", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4295"} {"prompt": "Explain this text code:\n\ncurl -Ls \"https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release\" \\\n | grep \"SPDXID: SPDXRef-Package-registry.k8s.io\" \\\n | grep -v sha256 | cut -d- -f3- | sed 's/-/\\//' | sed 's/-v1/:v1/' \\\n | sort > images.txt\ninput=images.txt\nwhile IFS= read -r image\ndo\n cosign verify \"$image\" \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .\ndone < \"$input\"", "completion": "This text code Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Once you have 
verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4296"} {"prompt": "Explain this text code:\n\nregistry-url/image-name@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2", "completion": "This text code Verify Signed Kubernetes Artifacts | For more information, please refer to the Image Pull Policy section.", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | For more information, please refer to the Image Pull Policy section.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4297"} {"prompt": "Complete this text code:\n\n# Retrieve the latest available Kubernetes release version\nVERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)\n\n# Verify the SHA512 sum\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release\" -o \"$VERSION.spdx\"\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha512\") $VERSION.spdx\" | sha512sum --check\n\n# Verify the SHA256 sum\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha256\") $VERSION.spdx\" | sha256sum --check\n", "completion": "# Retrieve sigstore signature and certificate\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.sig\" -o \"$VERSION.spdx.sig\"\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.cert\" -o \"$VERSION.spdx.cert\"\n\n# Verify the sigstore signature\ncosign verify-blob \\\n --certificate \"$VERSION.spdx.cert\" \\\n --signature \"$VERSION.spdx.sig\" \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n \"$VERSION.spdx\"", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts |", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4298"} {"prompt": "Explain this text code:\n\n# Retrieve the latest available Kubernetes release version\nVERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)\n\n# Verify the SHA512 sum\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release\" -o \"$VERSION.spdx\"\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha512\") $VERSION.spdx\" | sha512sum --check\n\n# Verify the SHA256 sum\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha256\") $VERSION.spdx\" | sha256sum --check\n\n# Retrieve sigstore signature and certificate\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.sig\" -o \"$VERSION.spdx.sig\"\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.cert\" -o \"$VERSION.spdx.cert\"\n\n# Verify the sigstore signature\ncosign verify-blob \\\n --certificate \"$VERSION.spdx.cert\" \\\n --signature \"$VERSION.spdx.sig\" \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n \"$VERSION.spdx\"", "completion": "This text code Verify Signed Kubernetes Artifacts |", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4299"} {"prompt": "Complete this bash code:\n\nURL=https://dl.k8s.io/release/v1.34.0/bin/linux/amd64\nBINARY=kubectl\n\nFILES=(\n \"$BINARY\"\n \"$BINARY.sig\"", "completion": " \"$BINARY.cert\"\n)\n\nfor FILE in \"${FILES[@]}\"; do\n curl -sSfL --retry 3 --retry-delay 3 \"$URL/$FILE\" -o \"$FILE\"\ndone", "language": "bash", "type": "shell-command", "context": "Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": 
"k8s_00149_code_completion_4300"} {"prompt": "Explain this bash code:\n\nURL=https://dl.k8s.io/release/v1.34.0/bin/linux/amd64\nBINARY=kubectl\n\nFILES=(\n \"$BINARY\"\n \"$BINARY.sig\"\n \"$BINARY.cert\"\n)\n\nfor FILE in \"${FILES[@]}\"; do\n curl -sSfL --retry 3 --retry-delay 3 \"$URL/$FILE\" -o \"$FILE\"\ndone", "completion": "This bash code Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "language": "bash", "type": "shell-command_explanation", "context": "Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4301"} {"prompt": "Complete this shell code:\n\ncosign verify-blob \"$BINARY\" \\\n --signature \"$BINARY\".sig \\", "completion": " --certificate \"$BINARY\".cert \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com", "language": "shell", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4302"} {"prompt": "Explain this shell code:\n\ncosign verify-blob \"$BINARY\" \\\n --signature \"$BINARY\".sig \\\n --certificate \"$BINARY\".cert \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com", "completion": "This shell code Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "language": "shell", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4303"} {"prompt": "Complete this shell code:\n\ncosign verify registry.k8s.io/kube-apiserver-amd64:v1.34.0 \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\", "completion": " --certificate-oidc-issuer https://accounts.google.com \\\n | jq .", "language": "shell", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4304"} {"prompt": "Explain this shell code:\n\ncosign verify registry.k8s.io/kube-apiserver-amd64:v1.34.0 \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .", "completion": "This shell code Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "language": "shell", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4305"} {"prompt": "Complete this shell code:\n\ncurl -Ls \"https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release\" \\\n | grep \"SPDXID: SPDXRef-Package-registry.k8s.io\" \\\n | grep -v sha256 | cut -d- -f3- | sed 's/-/\\//' | sed 's/-v1/:v1/' \\\n | sort > images.txt\ninput=images.txt\nwhile IFS= read -r image", "completion": "do\n cosign verify \"$image\" \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n 
--certificate-oidc-issuer https://accounts.google.com \\\n | jq .\ndone < \"$input\"", "language": "shell", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4306"} {"prompt": "Explain this shell code:\n\ncurl -Ls \"https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release\" \\\n | grep \"SPDXID: SPDXRef-Package-registry.k8s.io\" \\\n | grep -v sha256 | cut -d- -f3- | sed 's/-/\\//' | sed 's/-v1/:v1/' \\\n | sort > images.txt\ninput=images.txt\nwhile IFS= read -r image\ndo\n cosign verify \"$image\" \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .\ndone < \"$input\"", "completion": "This shell code Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "language": "shell", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4307"} {"prompt": "Explain this console code:\n\nregistry-url/image-name@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2", "completion": "This console code Verify Signed Kubernetes Artifacts | For more information, please refer to the Image Pull Policy section.", "language": "console", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | For more information, please refer to the Image Pull Policy section.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4308"} {"prompt": "Complete this shell code:\n\n# Retrieve the latest available Kubernetes release version\nVERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)\n\n# Verify the SHA512 sum\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release\" -o \"$VERSION.spdx\"\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha512\") $VERSION.spdx\" | sha512sum --check\n\n# Verify the SHA256 sum\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha256\") $VERSION.spdx\" | sha256sum --check\n", "completion": "# Retrieve sigstore signature and certificate\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.sig\" -o \"$VERSION.spdx.sig\"\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.cert\" -o \"$VERSION.spdx.cert\"\n\n# Verify the sigstore signature\ncosign verify-blob \\\n --certificate \"$VERSION.spdx.cert\" \\\n --signature \"$VERSION.spdx.sig\" \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n \"$VERSION.spdx\"", "language": "shell", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4309"} {"prompt": "Explain this shell code:\n\n# Retrieve the latest available Kubernetes release version\nVERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)\n\n# Verify the SHA512 sum\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release\" -o \"$VERSION.spdx\"\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha512\") $VERSION.spdx\" | sha512sum --check\n\n# Verify the SHA256 sum\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha256\") $VERSION.spdx\" | sha256sum --check\n\n# Retrieve sigstore signature and certificate\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.sig\" -o \"$VERSION.spdx.sig\"\ncurl -Ls 
\"https://sbom.k8s.io/$VERSION/release.cert\" -o \"$VERSION.spdx.cert\"\n\n# Verify the sigstore signature\ncosign verify-blob \\\n --certificate \"$VERSION.spdx.cert\" \\\n --signature \"$VERSION.spdx.sig\" \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n \"$VERSION.spdx\"", "completion": "This shell code Verify Signed Kubernetes Artifacts |", "language": "shell", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4310"} {"prompt": "Complete this text code:\n\nURL=https://dl.k8s.io/release/v1.34.0/bin/linux/amd64\nBINARY=kubectl\n\nFILES=(\n \"$BINARY\"\n \"$BINARY.sig\"", "completion": " \"$BINARY.cert\"\n)\n\nfor FILE in \"${FILES[@]}\"; do\n curl -sSfL --retry 3 --retry-delay 3 \"$URL/$FILE\" -o \"$FILE\"\ndone", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4311"} {"prompt": "Explain this text code:\n\nURL=https://dl.k8s.io/release/v1.34.0/bin/linux/amd64\nBINARY=kubectl\n\nFILES=(\n \"$BINARY\"\n \"$BINARY.sig\"\n \"$BINARY.cert\"\n)\n\nfor FILE in \"${FILES[@]}\"; do\n curl -sSfL --retry 3 --retry-delay 3 \"$URL/$FILE\" -o \"$FILE\"\ndone", "completion": "This text code Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Then verify the blob by using cosign verify-blob:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4312"} {"prompt": "Complete this text code:\n\ncosign verify-blob 
\"$BINARY\" \\\n --signature \"$BINARY\".sig \\", "completion": " --certificate \"$BINARY\".cert \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4313"} {"prompt": "Explain this text code:\n\ncosign verify-blob \"$BINARY\" \\\n --signature \"$BINARY\".sig \\\n --certificate \"$BINARY\".cert \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com", "completion": "This text code Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Cosign 2.0 requires the --certificate-identity and --certificate-oidc-issuer options.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4314"} {"prompt": "Complete this text code:\n\ncosign verify registry.k8s.io/kube-apiserver-amd64:v1.34.0 \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\", "completion": " --certificate-oidc-issuer https://accounts.google.com \\\n | jq .", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4315"} {"prompt": "Explain this text code:\n\ncosign verify 
registry.k8s.io/kube-apiserver-amd64:v1.34.0 \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .", "completion": "This text code Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4316"} {"prompt": "Complete this text code:\n\ncurl -Ls \"https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release\" \\\n | grep \"SPDXID: SPDXRef-Package-registry.k8s.io\" \\\n | grep -v sha256 | cut -d- -f3- | sed 's/-/\\//' | sed 's/-v1/:v1/' \\\n | sort > images.txt\ninput=images.txt\nwhile IFS= read -r image", "completion": "do\n cosign verify \"$image\" \\\n --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .\ndone < \"$input\"", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4317"} {"prompt": "Explain this text code:\n\ncurl -Ls \"https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release\" \\\n | grep \"SPDXID: SPDXRef-Package-registry.k8s.io\" \\\n | grep -v sha256 | cut -d- -f3- | sed 's/-/\\//' | sed 's/-v1/:v1/' \\\n | sort > images.txt\ninput=images.txt\nwhile IFS= read -r image\ndo\n cosign verify \"$image\" \\\n 
--certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n | jq .\ndone < \"$input\"", "completion": "This text code Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4318"} {"prompt": "Explain this text code:\n\nregistry-url/image-name@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2", "completion": "This text code Verify Signed Kubernetes Artifacts | For more information, please refer to the Image Pull Policy section.", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts | For more information, please refer to the Image Pull Policy section.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4319"} {"prompt": "Complete this text code:\n\n# Retrieve the latest available Kubernetes release version\nVERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)\n\n# Verify the SHA512 sum\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release\" -o \"$VERSION.spdx\"\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha512\") $VERSION.spdx\" | sha512sum --check\n\n# Verify the SHA256 sum\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha256\") $VERSION.spdx\" | sha256sum --check\n", "completion": "# Retrieve sigstore signature and certificate\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.sig\" -o \"$VERSION.spdx.sig\"\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.cert\" -o 
\"$VERSION.spdx.cert\"\n\n# Verify the sigstore signature\ncosign verify-blob \\\n --certificate \"$VERSION.spdx.cert\" \\\n --signature \"$VERSION.spdx.sig\" \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n \"$VERSION.spdx\"", "language": "text", "type": "code-example", "context": "Verify Signed Kubernetes Artifacts |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_completion_4320"} {"prompt": "Explain this text code:\n\n# Retrieve the latest available Kubernetes release version\nVERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)\n\n# Verify the SHA512 sum\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release\" -o \"$VERSION.spdx\"\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha512\") $VERSION.spdx\" | sha512sum --check\n\n# Verify the SHA256 sum\necho \"$(curl -Ls \"https://sbom.k8s.io/$VERSION/release.sha256\") $VERSION.spdx\" | sha256sum --check\n\n# Retrieve sigstore signature and certificate\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.sig\" -o \"$VERSION.spdx.sig\"\ncurl -Ls \"https://sbom.k8s.io/$VERSION/release.cert\" -o \"$VERSION.spdx.cert\"\n\n# Verify the sigstore signature\ncosign verify-blob \\\n --certificate \"$VERSION.spdx.cert\" \\\n --signature \"$VERSION.spdx.sig\" \\\n --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \\\n --certificate-oidc-issuer https://accounts.google.com \\\n \"$VERSION.spdx\"", "completion": "This text code Verify Signed Kubernetes Artifacts |", "language": "text", "type": "code-example_explanation", "context": "Verify Signed Kubernetes Artifacts |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/", "id": "k8s_00149_code_explain_4321"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: 
images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4322"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4323"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4324"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4325"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4326"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4327"} {"prompt": "Explain this text code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This text code Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4328"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4329"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This text code Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4330"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4331"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4332"} {"prompt": "Complete this 
yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4333"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4334"} {"prompt": "Explain this bash code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This bash code Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4335"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message", "completion": " ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4336"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "completion": "This text code Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4337"} {"prompt": "Complete this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... lines removed for clarity ...]\nNon-terminated Pods: (5 in total)", "completion": " Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4338"} {"prompt": "Explain this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... 
lines removed for clarity ...]\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "completion": "This text code Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4339"} {"prompt": "Complete this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700", "completion": " Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n 
Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4340"} {"prompt": "Explain this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700\n Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container 
simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "completion": "This text code Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4341"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.limits.hugepages-", "completion": "This text code Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resou", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. 
For a particular resource, a Pod resou", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4342"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.requests.hugepages-", "completion": "This text code Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resou", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resou", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4343"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4344"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4345"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4346"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4347"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.limits.ephemeral-storage", "completion": "This text code Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4348"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.requests.ephemeral-storage", "completion": "This text code Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. 
You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4349"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4350"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4351"} {"prompt": "Explain this text code:\n\nLocalStorageCapacityIsolationFSQuotaMonitoring=true", "completion": "This text code Resource Management for Pods and Containers | Ensure the UserNamespacesSupport feature gate is enabled, and that the kernel, CRI implementation and OCI runtime support user namespaces.", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Ensure the UserNamespacesSupport feature gate is enabled, and that the kernel, CRI implementation and OCI runtime support user namespaces.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4352"} {"prompt": "Explain this bash code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This bash code Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "language": "bash", "type": "shell-command_explanation", "context": "Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4353"} {"prompt": "Complete this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "shell", "type": "code-example", "context": "Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4354"} {"prompt": "Explain this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This shell code Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "shell", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4355"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4356"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4357"} {"prompt": "Complete this 
yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4358"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4359"} {"prompt": "Explain this shell code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This shell code Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "shell", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4360"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message", "completion": " ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4361"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "completion": "This text code Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4362"} {"prompt": "Complete this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... lines removed for clarity ...]\nNon-terminated Pods: (5 in total)", "completion": " Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4363"} {"prompt": "Explain this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... 
lines removed for clarity ...]\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "completion": "This text code Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4364"} {"prompt": "Complete this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700", "completion": " Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n 
Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4365"} {"prompt": "Explain this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700\n Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container 
simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "completion": "This text code Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4366"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4367"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4368"} {"prompt": "Complete this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"", "completion": " containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4369"} {"prompt": "Explain this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4370"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4371"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4372"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4373"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4374"} {"prompt": "Explain this text code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This text code Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4375"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "text", "type": "code-example", "context": "Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4376"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This text code Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "text", "type": "code-example_explanation", "context": "Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4377"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4378"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4379"} {"prompt": "Complete this 
yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_completion_4380"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4381"} {"prompt": "Explain this bash code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This bash code Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "bash", "type": "kubectl-command_explanation", "context": "Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "id": "k8s_00151_code_explain_4382"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4383"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:\n limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4384"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the LimitRange:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4385"} {"prompt": "Explain this bash code:\n\nkubectl get limitrange cpu-min-max-demo-lr --output=yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows the minimum and maximum CPU constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created auto", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows the minimum and maximum CPU constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created auto", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4386"} {"prompt": "Complete this text code:\n\nlimits:\n- default:\n cpu: 800m\n defaultRequest:\n cpu: 800m", "completion": " max:\n cpu: 800m\n min:\n cpu: 200m\n type: Container", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4387"} {"prompt": "Explain this text code:\n\nlimits:\n- default:\n cpu: 800m\n defaultRequest:\n cpu: 800m\n max:\n cpu: 800m\n min:\n cpu: 200m\n type: Container", 
"completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4388"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4389"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:\n - name: constraints-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4390"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4391"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-cpu-demo --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4392"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-cpu-demo --output=yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit of 800 millicpu. These satisfy the constraints imposed by the LimitRange.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit of 800 millicpu. 
These satisfy the constraints imposed by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4393"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: 800m\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4394"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: 800m\n requests:\n cpu: 500m", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4395"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4396"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4397"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:\n - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4398"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. 
That container is not acceptable because it specifies a CPU limit that is too large:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. That container is not acceptable because it specifies a CPU limit that is too large:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4399"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/cpu-constraints-pod-2.yaml\":\npods \"constraints-cpu-demo-2\" is forbidden: maximum cpu usage per Container is 800m, but limit is 1500m.", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 100 millicpu and a CPU limit of 800 millicpu.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU request of 100 millicpu and a CPU limit of 800 millicpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4400"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4401"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:\n - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4402"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. 
That container is not acceptable because it specifies a CPU request that is lower than the enforced minimum:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. That container is not acceptable because it specifies a CPU request that is lower than the enforced minimum:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4403"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/cpu-constraints-pod-3.yaml\":\npods \"constraints-cpu-demo-3\" is forbidden: minimum cpu usage per Container is 200m, but request is 100m.", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container does not specify a CPU request, nor does it specify a CPU limit.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container does not specify a CPU request, nor does it specify a CPU limit.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4404"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-4", "completion": "spec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4405"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-4\nspec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4406"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-4.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4407"} {"prompt": "Explain this bash code:\n\nkubectl get pod 
constraints-cpu-demo-4 --namespace=constraints-cpu-example --output=yaml", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's single container has a CPU request of 800 millicpu and a CPU limit of 800 millicpu. How did that container get those values?", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's single container has a CPU request of 800 millicpu and a CPU limit of 800 millicpu. How did that container get those values?", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4408"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: 800m\n requests:\n cpu: 800m", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4409"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: 800m\n requests:\n cpu: 800m", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this namespace.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this namespace.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4410"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-cpu-demo-4 --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The maximum and minimum CPU constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were created earlier.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The maximum and minimum CPU constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were created earlier.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4411"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4412"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:\n limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace 
| Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4413"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints.yaml --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the LimitRange:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4414"} {"prompt": "Explain this shell code:\n\nkubectl get limitrange cpu-min-max-demo-lr --output=yaml --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows the minimum and maximum CPU constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created automatically.", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows the minimum and maximum CPU constraints as expected. 
But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created automatically.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4415"} {"prompt": "Complete this yaml code:\n\nlimits:\n- default:\n cpu: 800m\n defaultRequest:\n cpu: 800m", "completion": " max:\n cpu: 800m\n min:\n cpu: 200m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4416"} {"prompt": "Explain this yaml code:\n\nlimits:\n- default:\n cpu: 800m\n defaultRequest:\n cpu: 800m\n max:\n cpu: 800m\n min:\n cpu: 200m\n type: Container", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4417"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-ctr\n image: nginx\n 
resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4418"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:\n - name: constraints-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4419"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4420"} {"prompt": "Explain this shell code:\n\nkubectl get pod constraints-cpu-demo --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": 
"Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4421"} {"prompt": "Explain this shell code:\n\nkubectl get pod constraints-cpu-demo --output=yaml --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit of 800 millicpu. These satisfy the constraints imposed by the LimitRange.", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit of 800 millicpu. These satisfy the constraints imposed by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4422"} {"prompt": "Complete this yaml code:\n\nresources:\n limits:", "completion": " cpu: 800m\n requests:\n cpu: 500m", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4423"} {"prompt": "Explain this yaml code:\n\nresources:\n limits:\n cpu: 800m\n requests:\n cpu: 500m", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4424"} {"prompt": "Explain this shell code:\n\nkubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4425"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4426"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:\n - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4427"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. 
That container is not acceptable because it specifies a CPU limit that is too large:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. That container is not acceptable because it specifies a CPU limit that is too large:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4428"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/cpu-constraints-pod-2.yaml\":\npods \"constraints-cpu-demo-2\" is forbidden: maximum cpu usage per Container is 800m, but limit is 1500m.", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 100 millicpu and a CPU limit of 800 millicpu.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU request of 100 millicpu and a CPU limit of 800 millicpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4429"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4430"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:\n - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4431"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. 
That container is not acceptable because it specifies a CPU request that is lower than the enforced minimum:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. That container is not acceptable because it specifies a CPU request that is lower than the enforced minimum:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4432"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/cpu-constraints-pod-3.yaml\":\npods \"constraints-cpu-demo-3\" is forbidden: minimum cpu usage per Container is 200m, but request is 100m.", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container does not specify a CPU request, nor does it specify a CPU limit.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container does not specify a CPU request, nor does it specify a CPU limit.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4433"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-4", "completion": "spec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4434"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-4\nspec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4435"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-4.yaml --namespace=constraints-cpu-example", "completion": "This shell code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4436"} {"prompt": "Explain this bash code:\n\nkubectl get pod 
constraints-cpu-demo-4 --namespace=constraints-cpu-example --output=yaml", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's single container has a CPU request of 800 millicpu and a CPU limit of 800 millicpu. How did that container get those values?", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's single container has a CPU request of 800 millicpu and a CPU limit of 800 millicpu. How did that container get those values?", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4437"} {"prompt": "Complete this yaml code:\n\nresources:\n limits:", "completion": " cpu: 800m\n requests:\n cpu: 800m", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4438"} {"prompt": "Explain this yaml code:\n\nresources:\n limits:\n cpu: 800m\n requests:\n cpu: 800m", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this 
namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4439"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-cpu-demo-4 --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The maximum and minimum CPU constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were creat", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The maximum and minimum CPU constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were creat", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4440"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-constraints.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4441"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-constraints.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:\n limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4442"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4443"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-min-max-demo-lr\nspec:\n limits:\n - max:\n cpu: \"800m\"\n min:\n cpu: \"200m\"\n type: Container", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4444"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the LimitRange:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the LimitRange:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4445"} {"prompt": "Explain this bash code:\n\nkubectl get limitrange cpu-min-max-demo-lr --output=yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows the minimum and maximum CPU constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created auto", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows the minimum and maximum CPU constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created auto", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4446"} {"prompt": "Complete this text code:\n\nlimits:\n- default:\n cpu: 800m\n defaultRequest:\n cpu: 800m", "completion": " max:\n cpu: 800m\n min:\n cpu: 200m\n type: Container", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4447"} {"prompt": "Explain this text code:\n\nlimits:\n- default:\n cpu: 800m\n defaultRequest:\n cpu: 800m\n max:\n cpu: 800m\n min:\n cpu: 200m\n type: Container", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or 
some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4448"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-constraints-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4449"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-constraints-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:\n - name: constraints-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4450"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:", "completion": " - name: 
constraints-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4451"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo\nspec:\n containers:\n - name: constraints-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4452"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4453"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-cpu-demo --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4454"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-cpu-demo --output=yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit of 800 millicpu. These satisfy the constraints imposed by the LimitRange.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit of 800 millicpu. These satisfy the constraints imposed by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4455"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: 800m\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4456"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: 800m\n requests:\n cpu: 500m", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4457"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4458"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-constraints-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4459"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-constraints-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:\n - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4460"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4461"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-2\nspec:\n containers:\n - name: constraints-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1.5\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4462"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. That container is not acceptable because it specifies a CPU limit that is too large:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. 
That container is not acceptable because it specifies a CPU limit that is too large:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4463"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-constraints-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4464"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-constraints-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:\n - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4465"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:", "completion": " - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4466"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-3\nspec:\n containers:\n - name: constraints-cpu-demo-3-ctr\n image: nginx\n resources:\n limits:\n cpu: \"800m\"\n requests:\n cpu: \"100m\"", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4467"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. That container is not acceptable because it specifies a CPU request that is lower than the enforced mi", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | The output shows that the Pod does not get created, because it defines an unacceptable container. 
That container is not acceptable because it specifies a CPU request that is lower than the enforced mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4468"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-constraints-pod-4.yaml\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: constraints-cpu-demo-4\nspec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4469"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-constraints-pod-4.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-4\nspec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4470"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-cpu-demo-4", "completion": "spec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4471"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n 
name: constraints-cpu-demo-4\nspec:\n containers:\n - name: constraints-cpu-demo-4-ctr\n image: vish/stress", "completion": "This yaml code Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4472"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-4.yaml --namespace=constraints-cpu-example", "completion": "This bash code Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4473"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: 800m\n requests:\n cpu: 800m", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_completion_4474"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: 800m\n requests:\n cpu: 800m", "completion": "This text code Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU 
request and limit from the LimitRange for this namespace.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum CPU Constraints for a Namespace | Because that container did not specify its own CPU request and limit, the control plane applied the default CPU request and limit from the LimitRange for this namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/", "id": "k8s_00153_code_explain_4475"} {"prompt": "Complete this text code:\n\ntype: Resource\nresource:\n name: cpu", "completion": " target:\n type: Utilization\n averageUtilization: 60", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the requested resource", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4476"} {"prompt": "Explain this text code:\n\ntype: Resource\nresource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 60", "completion": "This text code Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the requested resource", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. 
Utilization is the ratio between the current usage of resource to the requested resource", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4477"} {"prompt": "Complete this text code:\n\ntype: ContainerResource\ncontainerResource:\n name: cpu", "completion": " container: application\n target:\n type: Utilization\n averageUtilization: 60", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4478"} {"prompt": "Explain this text code:\n\ntype: ContainerResource\ncontainerResource:\n name: cpu\n container: application\n target:\n type: Utilization\n averageUtilization: 60", "completion": "This text code Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4479"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Pods", "completion": " value: 4\n periodSeconds: 60\n - type: Percent\n value: 10\n periodSeconds: 60", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). 
The first policy (Pods) allows", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4480"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Pods\n value: 4\n periodSeconds: 60\n - type: Percent\n value: 10\n periodSeconds: 60", "completion": "This text code Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). The first policy (Pods) allows", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). The first policy (Pods) allows", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4481"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300", "completion": "This text code Horizontal Pod Autoscaling | When the metrics indicate that the target should be scaled down the algorithm looks into previously computed desired states, and uses the highest value from the specified interval. In the above exampl", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | When the metrics indicate that the target should be scaled down the algorithm looks into previously computed desired states, and uses the highest value from the specified interval. 
In the above exampl", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4482"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleUp:\n tolerance: 0.05 # 5% tolerance for scale up", "completion": "This text code Horizontal Pod Autoscaling | With this configuration, the HPA algorithm will only consider scaling up if the memory consumption is higher than 105MiB (that is: 5% above the target).", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | With this configuration, the HPA algorithm will only consider scaling up if the memory consumption is higher than 105MiB (that is: 5% above the target).", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4483"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n scaleUp:", "completion": " stabilizationWindowSeconds: 0\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n - type: Pods\n value: 4\n periodSeconds: 15\n selectPolicy: Max", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). 
There is only a single policy for scaling down whic", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4484"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n scaleUp:\n stabilizationWindowSeconds: 0\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n - type: Pods\n value: 4\n periodSeconds: 15\n selectPolicy: Max", "completion": "This text code Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). There is only a single policy for scaling down whic", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). 
There is only a single policy for scaling down whic", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4485"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 60", "completion": "This text code Horizontal Pod Autoscaling | To limit the rate at which pods are removed by the HPA to 10% per minute, the following behavior would be added to the HPA:", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | To limit the rate at which pods are removed by the HPA to 10% per minute, the following behavior would be added to the HPA:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4486"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n policies:", "completion": " - type: Percent\n value: 10\n periodSeconds: 60", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. Setting selectPolicy to Min means that the au", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4487"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10\n periodSeconds: 60", "completion": "This text code Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. 
Setting selectPolicy to Min means that the au", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. Setting selectPolicy to Min means that the au", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4488"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10", "completion": " periodSeconds: 60\n - type: Pods\n value: 5\n periodSeconds: 60\n selectPolicy: Min", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. So to prevent downscaling the following policy would be used:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4489"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10\n periodSeconds: 60\n - type: Pods\n value: 5\n periodSeconds: 60\n selectPolicy: Min", "completion": "This text code Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. So to prevent downscaling the following policy would be used:", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. 
So to prevent downscaling the following policy would be used:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4490"} {"prompt": "Explain this text code:\n\n--horizontal-pod-autoscaler-initial-readiness-delay", "completion": "This text code Horizontal Pod Autoscaling | The \\( currentMetricValue \\over desiredMetricValue \\) base scale ratio is then calculated, using the remaining pods not set aside or discarded from above.", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | The \\( currentMetricValue \\over desiredMetricValue \\) base scale ratio is then calculated, using the remaining pods not set aside or discarded from above.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4491"} {"prompt": "Explain this text code:\n\n--horizontal-pod-autoscaler-cpu-initialization-period", "completion": "This text code Horizontal Pod Autoscaling | The \\( currentMetricValue \\over desiredMetricValue \\) base scale ratio is then calculated, using the remaining pods not set aside or discarded from above.", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | The \\( currentMetricValue \\over desiredMetricValue \\) base scale ratio is then calculated, using the remaining pods not set aside or discarded from above.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4492"} {"prompt": "Explain this text code:\n\n--horizontal-pod-autoscaler-downscale-stabilization", "completion": "This text code Horizontal Pod Autoscaling | The HorizontalPodAutoscaler (HPA) controller includes two flags that influence how CPU metrics are collected from Pods during startup:", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | The HorizontalPodAutoscaler (HPA) 
controller includes two flags that influence how CPU metrics are collected from Pods during startup:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4493"} {"prompt": "Explain this text code:\n\n--horizontal-pod-autoscaler-cpu-initialization-period", "completion": "This text code Horizontal Pod Autoscaling | This defines the time window after a Pod starts during which its CPU usage is ignored unless: - The Pod is in a Ready state and - The metric sample was taken entirely during the period it was Ready.", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | This defines the time window after a Pod starts during which its CPU usage is ignored unless: - The Pod is in a Ready state and - The metric sample was taken entirely during the period it was Ready.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4494"} {"prompt": "Explain this text code:\n\n--horizontal-pod-autoscaler-initial-readiness-delay", "completion": "This text code Horizontal Pod Autoscaling | This defines a short delay period after a Pod starts during which the HPA controller treats Pods that are currently Unready as still initializing, even if they have previously transitioned to Ready br", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | This defines a short delay period after a Pod starts during which the HPA controller treats Pods that are currently Unready as still initializing, even if they have previously transitioned to Ready br", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4495"} {"prompt": "Explain this text code:\n\n--horizontal-pod-autoscaler-cpu-initialization-period", "completion": "This text code Horizontal Pod Autoscaling | The Horizontal Pod Autoscaler is an API resource in the 
Kubernetes autoscaling API group. The current stable version can be found in the autoscaling/v2 API version which includes support for scaling o", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | The Horizontal Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The current stable version can be found in the autoscaling/v2 API version which includes support for scaling o", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4496"} {"prompt": "Complete this yaml code:\n\ntype: Resource\nresource:\n name: cpu", "completion": " target:\n type: Utilization\n averageUtilization: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the requested resource", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4497"} {"prompt": "Explain this yaml code:\n\ntype: Resource\nresource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 60", "completion": "This yaml code Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the requested resource", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. 
Utilization is the ratio between the current usage of resource to the requested resource", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4498"} {"prompt": "Complete this yaml code:\n\ntype: ContainerResource\ncontainerResource:\n name: cpu", "completion": " container: application\n target:\n type: Utilization\n averageUtilization: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4499"} {"prompt": "Explain this yaml code:\n\ntype: ContainerResource\ncontainerResource:\n name: cpu\n container: application\n target:\n type: Utilization\n averageUtilization: 60", "completion": "This yaml code Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4500"} {"prompt": "Complete this yaml code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Pods", "completion": " value: 4\n periodSeconds: 60\n - type: Percent\n value: 10\n periodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. 
The maximum value that you can set for periodSeconds is 1800 (half an hour). The first policy (Pods) allows", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4501"} {"prompt": "Explain this yaml code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Pods\n value: 4\n periodSeconds: 60\n - type: Percent\n value: 10\n periodSeconds: 60", "completion": "This yaml code Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). The first policy (Pods) allows", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). The first policy (Pods) allows", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4502"} {"prompt": "Explain this yaml code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300", "completion": "This yaml code Horizontal Pod Autoscaling | When the metrics indicate that the target should be scaled down the algorithm looks into previously computed desired states, and uses the highest value from the specified interval. In the above exampl", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | When the metrics indicate that the target should be scaled down the algorithm looks into previously computed desired states, and uses the highest value from the specified interval. 
In the above exampl", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4503"} {"prompt": "Explain this yaml code:\n\nbehavior:\n scaleUp:\n tolerance: 0.05 # 5% tolerance for scale up", "completion": "This yaml code Horizontal Pod Autoscaling | With this configuration, the HPA algorithm will only consider scaling up if the memory consumption is higher than 105MiB (that is: 5% above the target).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | With this configuration, the HPA algorithm will only consider scaling up if the memory consumption is higher than 105MiB (that is: 5% above the target).", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4504"} {"prompt": "Complete this yaml code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n scaleUp:", "completion": " stabilizationWindowSeconds: 0\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n - type: Pods\n value: 4\n periodSeconds: 15\n selectPolicy: Max", "language": "yaml", "type": "kubernetes-manifest", "context": "Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). 
There is only a single policy for scaling down whic", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4505"} {"prompt": "Explain this yaml code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n scaleUp:\n stabilizationWindowSeconds: 0\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n - type: Pods\n value: 4\n periodSeconds: 15\n selectPolicy: Max", "completion": "This yaml code Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). There is only a single policy for scaling down whic", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). 
There is only a single policy for scaling down whic", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4506"} {"prompt": "Explain this text code:\n\n--horizontal-pod-autoscaler-downscale-stabilization", "completion": "This text code Horizontal Pod Autoscaling | To provide a custom downscale stabilization window of 1 minute, the following behavior would be added to the HPA:", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | To provide a custom downscale stabilization window of 1 minute, the following behavior would be added to the HPA:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4507"} {"prompt": "Explain this yaml code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 60", "completion": "This yaml code Horizontal Pod Autoscaling | To limit the rate at which pods are removed by the HPA to 10% per minute, the following behavior would be added to the HPA:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | To limit the rate at which pods are removed by the HPA to 10% per minute, the following behavior would be added to the HPA:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4508"} {"prompt": "Complete this yaml code:\n\nbehavior:\n scaleDown:\n policies:", "completion": " - type: Percent\n value: 10\n periodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. 
Setting selectPolicy to Min means that the au", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4509"} {"prompt": "Explain this yaml code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10\n periodSeconds: 60", "completion": "This yaml code Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. Setting selectPolicy to Min means that the au", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. Setting selectPolicy to Min means that the au", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4510"} {"prompt": "Complete this yaml code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10", "completion": " periodSeconds: 60\n - type: Pods\n value: 5\n periodSeconds: 60\n selectPolicy: Min", "language": "yaml", "type": "kubernetes-manifest", "context": "Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. So to prevent downscaling the following policy would be used:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4511"} {"prompt": "Explain this yaml code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10\n periodSeconds: 60\n - type: Pods\n value: 5\n periodSeconds: 60\n selectPolicy: Min", "completion": "This yaml code Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. 
So to prevent downscaling the following policy would be used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. So to prevent downscaling the following policy would be used:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4512"} {"prompt": "Explain this bash code:\n\nkubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80", "completion": "This bash code Horizontal Pod Autoscaling | You can implicitly deactivate the HPA for a target without the need to change the HPA configuration itself. If the target's desired replica count is set to 0, and the HPA's minimum replica count is gr", "language": "bash", "type": "kubectl-command_explanation", "context": "Horizontal Pod Autoscaling | You can implicitly deactivate the HPA for a target without the need to change the HPA configuration itself. If the target's desired replica count is set to 0, and the HPA's minimum replica count is gr", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4513"} {"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied deployment/", "completion": "This bash code Horizontal Pod Autoscaling | When using the Server-Side Apply you can follow the transferring ownership guidelines, which cover this exact use case.", "language": "bash", "type": "kubectl-command_explanation", "context": "Horizontal Pod Autoscaling | When using the Server-Side Apply you can follow the transferring ownership guidelines, which cover this exact use case.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4514"} {"prompt": "Complete this text code:\n\ntype: Resource\nresource:\n name: cpu", "completion": " target:\n type: Utilization\n averageUtilization: 60", "language": 
"text", "type": "code-example", "context": "Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the requested resource", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4515"} {"prompt": "Explain this text code:\n\ntype: Resource\nresource:\n name: cpu\n target:\n type: Utilization\n averageUtilization: 60", "completion": "This text code Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the requested resource", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. 
Utilization is the ratio between the current usage of resource to the requested resource", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4516"} {"prompt": "Complete this text code:\n\ntype: ContainerResource\ncontainerResource:\n name: cpu", "completion": " container: application\n target:\n type: Utilization\n averageUtilization: 60", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4517"} {"prompt": "Explain this text code:\n\ntype: ContainerResource\ncontainerResource:\n name: cpu\n container: application\n target:\n type: Utilization\n averageUtilization: 60", "completion": "This text code Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | In the above example the HPA controller scales the target such that the average utilization of the cpu in the application container of all the pods is 60%.", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4518"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Pods", "completion": " value: 4\n periodSeconds: 60\n - type: Percent\n value: 10\n periodSeconds: 60", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). 
The first policy (Pods) allows", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4519"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Pods\n value: 4\n periodSeconds: 60\n - type: Percent\n value: 10\n periodSeconds: 60", "completion": "This text code Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). The first policy (Pods) allows", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | periodSeconds indicates the length of time in the past for which the policy must hold true. The maximum value that you can set for periodSeconds is 1800 (half an hour). The first policy (Pods) allows", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4520"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300", "completion": "This text code Horizontal Pod Autoscaling | When the metrics indicate that the target should be scaled down the algorithm looks into previously computed desired states, and uses the highest value from the specified interval. In the above exampl", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | When the metrics indicate that the target should be scaled down the algorithm looks into previously computed desired states, and uses the highest value from the specified interval. 
In the above exampl", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4521"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleUp:\n tolerance: 0.05 # 5% tolerance for scale up", "completion": "This text code Horizontal Pod Autoscaling | With this configuration, the HPA algorithm will only consider scaling up if the memory consumption is higher than 105MiB (that is: 5% above the target).", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | With this configuration, the HPA algorithm will only consider scaling up if the memory consumption is higher than 105MiB (that is: 5% above the target).", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4522"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n scaleUp:", "completion": " stabilizationWindowSeconds: 0\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n - type: Pods\n value: 4\n periodSeconds: 15\n selectPolicy: Max", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). 
There is only a single policy for scaling down whic", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4523"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 300\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n scaleUp:\n stabilizationWindowSeconds: 0\n policies:\n - type: Percent\n value: 100\n periodSeconds: 15\n - type: Pods\n value: 4\n periodSeconds: 15\n selectPolicy: Max", "completion": "This text code Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). There is only a single policy for scaling down whic", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). 
There is only a single policy for scaling down whic", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4524"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n stabilizationWindowSeconds: 60", "completion": "This text code Horizontal Pod Autoscaling | To limit the rate at which pods are removed by the HPA to 10% per minute, the following behavior would be added to the HPA:", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | To limit the rate at which pods are removed by the HPA to 10% per minute, the following behavior would be added to the HPA:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4525"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n policies:", "completion": " - type: Percent\n value: 10\n periodSeconds: 60", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. Setting selectPolicy to Min means that the au", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4526"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10\n periodSeconds: 60", "completion": "This text code Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. 
Setting selectPolicy to Min means that the au", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down policy with a fixed size of 5, and set selectPolicy to minimum. Setting selectPolicy to Min means that the au", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4527"} {"prompt": "Complete this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10", "completion": " periodSeconds: 60\n - type: Pods\n value: 5\n periodSeconds: 60\n selectPolicy: Min", "language": "text", "type": "code-example", "context": "Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. So to prevent downscaling the following policy would be used:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_completion_4528"} {"prompt": "Explain this text code:\n\nbehavior:\n scaleDown:\n policies:\n - type: Percent\n value: 10\n periodSeconds: 60\n - type: Pods\n value: 5\n periodSeconds: 60\n selectPolicy: Min", "completion": "This text code Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. So to prevent downscaling the following policy would be used:", "language": "text", "type": "code-example_explanation", "context": "Horizontal Pod Autoscaling | The selectPolicy value of Disabled turns off scaling the given direction. 
So to prevent downscaling the following policy would be used:", "source": "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/", "id": "k8s_00154_code_explain_4529"} {"prompt": "Complete this text code:\n\nstages:\n - stage: # Specify the development stage of the feature gate", "completion": " defaultValue: # Set to true if enabled by default, false otherwise\n fromVersion: # Version from which the feature gate is available\n toVersion: # (Optional) The version until which the feature gate is available", "language": "text", "type": "code-example", "context": "Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_completion_4530"} {"prompt": "Explain this text code:\n\nstages:\n - stage: # Specify the development stage of the feature gate\n defaultValue: # Set to true if enabled by default, false otherwise\n fromVersion: # Version from which the feature gate is available\n toVersion: # (Optional) The version until which the feature gate is available", "completion": "This text code Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "language": "text", "type": "code-example_explanation", "context": "Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4531"} {"prompt": "Complete this text 
code:\n\nstages:\n - stage: alpha \n defaultValue: false\n fromVersion: \"1.12\"\n toVersion: \"1.12\"\n - stage: beta \n defaultValue: true", "completion": " fromVersion: \"1.13\"\n # Added a `toVersion` to the previous stage.\n toVersion: \"1.18\"\n # Added 'stable' stage block to existing stages.\n - stage: stable\n defaultValue: true\n fromVersion: \"1.19\"\n toVersion: \"1.27\"", "language": "text", "type": "code-example", "context": "Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. Making that", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_completion_4532"} {"prompt": "Explain this text code:\n\nstages:\n - stage: alpha \n defaultValue: false\n fromVersion: \"1.12\"\n toVersion: \"1.12\"\n - stage: beta \n defaultValue: true\n fromVersion: \"1.13\"\n # Added a `toVersion` to the previous stage.\n toVersion: \"1.18\"\n # Added 'stable' stage block to existing stages.\n - stage: stable\n defaultValue: true\n fromVersion: \"1.19\"\n toVersion: \"1.27\"", "completion": "This text code Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. Making that", "language": "text", "type": "code-example_explanation", "context": "Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. 
Making that", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4533"} {"prompt": "Explain this text code:\n\ncontent/en/docs/reference/command-line-tools-reference/feature-gates/", "completion": "This text code Documenting a feature for a release | Also, to ensure your feature gate appears in the Alpha/Beta Feature gates table, include the following details in the front matter of your Markdown description file:", "language": "text", "type": "code-example_explanation", "context": "Documenting a feature for a release | Also, to ensure your feature gate appears in the Alpha/Beta Feature gates table, include the following details in the front matter of your Markdown description file:", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4534"} {"prompt": "Complete this yaml code:\n\nstages:\n - stage: # Specify the development stage of the feature gate", "completion": " defaultValue: # Set to true if enabled by default, false otherwise\n fromVersion: # Version from which the feature gate is available\n toVersion: # (Optional) The version until which the feature gate is available", "language": "yaml", "type": "kubernetes-manifest", "context": "Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_completion_4535"} {"prompt": "Explain this yaml code:\n\nstages:\n - stage: # Specify the development stage of the feature gate\n defaultValue: # Set to true if enabled by default, false otherwise\n fromVersion: # Version from which the feature gate is available\n toVersion: # (Optional) The version until which the feature gate is available", "completion": "This yaml code Documenting a 
feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4536"} {"prompt": "Explain this text code:\n\ncontent/en/docs/reference/command-line-tools-reference/feature-gates/", "completion": "This text code Documenting a feature for a release | When you change a feature gate from disabled-by-default to enabled-by-default, you may also need to change other documentation (not just the list of feature gates). Watch out for language such as ”The", "language": "text", "type": "code-example_explanation", "context": "Documenting a feature for a release | When you change a feature gate from disabled-by-default to enabled-by-default, you may also need to change other documentation (not just the list of feature gates). 
Watch out for language such as ”The", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4537"} {"prompt": "Complete this yaml code:\n\nstages:\n - stage: alpha \n defaultValue: false\n fromVersion: \"1.12\"\n toVersion: \"1.12\"\n - stage: beta \n defaultValue: true", "completion": " fromVersion: \"1.13\"\n # Added a `toVersion` to the previous stage.\n toVersion: \"1.18\"\n # Added 'stable' stage block to existing stages.\n - stage: stable\n defaultValue: true\n fromVersion: \"1.19\"\n toVersion: \"1.27\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. Making that", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_completion_4538"} {"prompt": "Explain this yaml code:\n\nstages:\n - stage: alpha \n defaultValue: false\n fromVersion: \"1.12\"\n toVersion: \"1.12\"\n - stage: beta \n defaultValue: true\n fromVersion: \"1.13\"\n # Added a `toVersion` to the previous stage.\n toVersion: \"1.18\"\n # Added 'stable' stage block to existing stages.\n - stage: stable\n defaultValue: true\n fromVersion: \"1.19\"\n toVersion: \"1.27\"", "completion": "This yaml code Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. Making that", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. 
Making that", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4539"} {"prompt": "Complete this text code:\n\nstages:\n - stage: # Specify the development stage of the feature gate", "completion": " defaultValue: # Set to true if enabled by default, false otherwise\n fromVersion: # Version from which the feature gate is available\n toVersion: # (Optional) The version until which the feature gate is available", "language": "text", "type": "code-example", "context": "Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_completion_4540"} {"prompt": "Explain this text code:\n\nstages:\n - stage: # Specify the development stage of the feature gate\n defaultValue: # Set to true if enabled by default, false otherwise\n fromVersion: # Version from which the feature gate is available\n toVersion: # (Optional) The version until which the feature gate is available", "completion": "This text code Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "language": "text", "type": "code-example_explanation", "context": "Documenting a feature for a release | With net new feature gates, a separate description of the feature gate is also required; create a new Markdown file inside content/en/docs/reference/command-line-tools-reference/feature-gates/ (use ot", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4541"} {"prompt": "Complete this text code:\n\nstages:\n - stage: alpha \n defaultValue: false\n 
fromVersion: \"1.12\"\n toVersion: \"1.12\"\n - stage: beta \n defaultValue: true", "completion": " fromVersion: \"1.13\"\n # Added a `toVersion` to the previous stage.\n toVersion: \"1.18\"\n # Added 'stable' stage block to existing stages.\n - stage: stable\n defaultValue: true\n fromVersion: \"1.19\"\n toVersion: \"1.27\"", "language": "text", "type": "code-example", "context": "Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. Making that", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_completion_4542"} {"prompt": "Explain this text code:\n\nstages:\n - stage: alpha \n defaultValue: false\n fromVersion: \"1.12\"\n toVersion: \"1.12\"\n - stage: beta \n defaultValue: true\n fromVersion: \"1.13\"\n # Added a `toVersion` to the previous stage.\n toVersion: \"1.18\"\n # Added 'stable' stage block to existing stages.\n - stage: stable\n defaultValue: true\n fromVersion: \"1.19\"\n toVersion: \"1.27\"", "completion": "This text code Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. Making that", "language": "text", "type": "code-example_explanation", "context": "Documenting a feature for a release | Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate, include removed: true in the front matter of the respective description file. 
Making that", "source": "https://kubernetes.io/docs/contribute/new-content/new-features/", "id": "k8s_00160_code_explain_4543"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_completion_4544"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4545"} {"prompt": "Complete this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)", "completion": "# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "language": "text", "type": 
"code-example", "context": "CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_completion_4546"} {"prompt": "Explain this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)\n# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "completion": "This text code CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "language": "text", "type": "code-example_explanation", "context": "CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4547"} {"prompt": "Explain this text code:\n\nCannot determine if job needs to be started. Too many missed start time (> 100). 
Set or decrease .spec.startingDeadlineSeconds or check clock skew.", "completion": "This text code CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "language": "text", "type": "code-example_explanation", "context": "CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4548"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_completion_4549"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "CronJob | (Running Automated Tasks with a 
CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4550"} {"prompt": "Complete this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)", "completion": "# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "language": "text", "type": "code-example", "context": "CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_completion_4551"} {"prompt": "Explain this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)\n# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "completion": "This text code CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "language": "text", "type": "code-example_explanation", "context": "CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4552"} {"prompt": "Explain this text code:\n\nCannot determine if job needs to be started. Too many missed start time (> 100). 
Set or decrease .spec.startingDeadlineSeconds or check clock skew.", "completion": "This text code CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "language": "text", "type": "code-example_explanation", "context": "CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4553"} {"prompt": "Complete this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_completion_4554"} {"prompt": "Explain this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4555"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_completion_4556"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/", "id": "k8s_00161_code_explain_4557"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\nclusters:\n- cluster:\n name: development\n- cluster:\n name: test\n", "completion": "users:\n- name: developer\n- name: experimenter\n\ncontexts:\n- context:\n name: dev-frontend\n- 
context:\n name: dev-storage\n- context:\n name: exp-test", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4558"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\nclusters:\n- cluster:\n name: development\n- cluster:\n name: test\n\nusers:\n- name: developer\n- name: experimenter\n\ncontexts:\n- context:\n name: dev-frontend\n- context:\n name: dev-storage\n- context:\n name: exp-test", "completion": "This yaml code Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. 
Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4559"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file\nkubectl config --kubeconfig=config-demo set-cluster test --server=https://5.6.7.8 --insecure-skip-tls-verify", "completion": "This bash code Configure Access to Multiple Clusters | Add user details to your configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Add user details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4560"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file\nkubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password", "completion": "This bash code Configure Access to Multiple Clusters | Add context details to your configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Add context details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4561"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer\nkubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer\nkubectl config --kubeconfig=config-demo set-context 
exp-test --cluster=test --namespace=default --user=experimenter", "completion": "This bash code Configure Access to Multiple Clusters | Open your config-demo file to see the added details. As an alternative to opening the config-demo file, you can use the config view command.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Open your config-demo file to see the added details. As an alternative to opening the config-demo file, you can use the config view command.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4562"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\n- cluster:\n insecure-skip-tls-verify: true\n server: https://5.6.7.8\n name: test\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage", "completion": "- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file\n- name: experimenter\n user:\n # Documentation note (this comment is NOT part of the command output).\n # Storing passwords in Kubernetes client config is risky.\n # A better alternative would be to use a credential plugin\n # and store the credentials separately.\n # See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins\n password: some-password\n username: exp", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the 
pathnames of the certificate files. You need to change these to the actual pathnames of certificate files in your", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4563"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\n- cluster:\n insecure-skip-tls-verify: true\n server: https://5.6.7.8\n name: test\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file\n- name: experimenter\n user:\n # Documentation note (this comment is NOT part of the command output).\n # Storing passwords in Kubernetes client config is risky.\n # A better alternative would be to use a credential plugin\n # and store the credentials separately.\n # See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins\n password: some-password\n username: exp", "completion": "This yaml code Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. You need to change these to the actual pathnames of certificate files in your", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. 
You need to change these to the actual pathnames of certificate files in your", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4564"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo use-context dev-frontend", "completion": "This bash code Configure Access to Multiple Clusters | Now whenever you enter a kubectl command, the action will apply to the cluster, and namespace listed in the dev-frontend context. And the command will use the credentials of the user listed in the dev", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Now whenever you enter a kubectl command, the action will apply to the cluster, and namespace listed in the dev-frontend context. And the command will use the credentials of the user listed in the dev", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4565"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This bash code Configure Access to Multiple Clusters | The output shows configuration information associated with the dev-frontend context:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | The output shows configuration information associated with the dev-frontend context:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4566"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\ncontexts:\n- context:\n cluster: development\n namespace: frontend", "completion": " user: developer\n name: dev-frontend\ncurrent-context: dev-frontend\nkind: 
Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4567"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\ncurrent-context: dev-frontend\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file", "completion": "This yaml code Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4568"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo use-context exp-test", "completion": "This bash code Configure Access to Multiple Clusters | Now any kubectl command you give will apply to the default namespace of the test cluster. And the command will use the credentials of the user listed in the exp-test context.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Now any kubectl command you give will apply to the default namespace of the test cluster. 
And the command will use the credentials of the user listed in the exp-test context.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4569"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This bash code Configure Access to Multiple Clusters | Finally, suppose you want to work for a while in the storage namespace of the development cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Finally, suppose you want to work for a while in the storage namespace of the development cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4570"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo use-context dev-storage", "completion": "This bash code Configure Access to Multiple Clusters | View configuration associated with the new current context, dev-storage.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | View configuration associated with the new current context, dev-storage.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4571"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This bash code Configure Access to Multiple Clusters | In your config-exercise directory, create a file named config-demo-2 with this content:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | In your config-exercise directory, create a file named config-demo-2 with this content:", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4572"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\ncontexts:", "completion": "- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4573"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\ncontexts:\n- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up", "completion": "This yaml code Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4574"} {"prompt": "Explain this text code:\n\nexport KUBECONFIG=\"${KUBECONFIG}:config-demo:config-demo-2\"", "completion": "This text code Configure Access to Multiple Clusters | In your config-exercise directory, enter this command:", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | In your config-exercise directory, enter this command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4575"} {"prompt": "Complete this text code:\n\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n 
user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: ramp\n user: developer", "completion": " name: dev-ramp-up\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test", "language": "text", "type": "code-example", "context": "Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4576"} {"prompt": "Explain this text code:\n\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test", "completion": "This text code Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4577"} {"prompt": "Explain this text code:\n\nexport KUBECONFIG=\"${KUBECONFIG}:${HOME}/.kube/config\"", "completion": "This text code Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. 
In your config-exercise directory, enter:", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4578"} {"prompt": "Explain this text code:\n\n$Env:KUBECONFIG=\"$Env:KUBECONFIG;$HOME\\.kube\\config\"", "completion": "This text code Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4579"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\nclusters:\n- cluster:\n name: development\n- cluster:\n name: test\n", "completion": "users:\n- name: developer\n- name: experimenter\n\ncontexts:\n- context:\n name: dev-frontend\n- context:\n name: dev-storage\n- context:\n name: exp-test", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. 
Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4580"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\nclusters:\n- cluster:\n name: development\n- cluster:\n name: test\n\nusers:\n- name: developer\n- name: experimenter\n\ncontexts:\n- context:\n name: dev-frontend\n- context:\n name: dev-storage\n- context:\n name: exp-test", "completion": "This yaml code Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4581"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file\nkubectl config --kubeconfig=config-demo set-cluster test --server=https://5.6.7.8 --insecure-skip-tls-verify", "completion": "This shell code Configure Access to Multiple Clusters | Add user details to your configuration file:", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | Add user details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4582"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo 
set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file\nkubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password", "completion": "This shell code Configure Access to Multiple Clusters | Add context details to your configuration file:", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | Add context details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4583"} {"prompt": "Explain this bash code:\n\nkubectl --kubeconfig=config-demo config unset users.", "completion": "This bash code Configure Access to Multiple Clusters | Add context details to your configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Add context details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4584"} {"prompt": "Explain this bash code:\n\nkubectl --kubeconfig=config-demo config unset clusters.", "completion": "This bash code Configure Access to Multiple Clusters | Add context details to your configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Add context details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4585"} {"prompt": "Explain this bash code:\n\nkubectl --kubeconfig=config-demo config unset contexts.", "completion": "This bash code Configure Access to Multiple Clusters | Add context details to your configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple 
Clusters | Add context details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4586"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer\nkubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer\nkubectl config --kubeconfig=config-demo set-context exp-test --cluster=test --namespace=default --user=experimenter", "completion": "This shell code Configure Access to Multiple Clusters | Open your config-demo file to see the added details. As an alternative to opening the config-demo file, you can use the config view command.", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | Open your config-demo file to see the added details. As an alternative to opening the config-demo file, you can use the config view command.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4587"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\n- cluster:\n insecure-skip-tls-verify: true\n server: https://5.6.7.8\n name: test\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage", "completion": "- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file\n- name: experimenter\n user:\n # Documentation note (this comment 
is NOT part of the command output).\n # Storing passwords in Kubernetes client config is risky.\n # A better alternative would be to use a credential plugin\n # and store the credentials separately.\n # See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins\n password: some-password\n username: exp", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. You need to change these to the actual pathnames of certificate files in your", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4588"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\n- cluster:\n insecure-skip-tls-verify: true\n server: https://5.6.7.8\n name: test\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file\n- name: experimenter\n user:\n # Documentation note (this comment is NOT part of the command output).\n # Storing passwords in Kubernetes client config is risky.\n # A better alternative would be to use a credential plugin\n # and store the credentials separately.\n # See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins\n password: some-password\n username: exp", "completion": "This yaml code Configure Access to Multiple Clusters | The fake-ca-file, 
fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. You need to change these to the actual pathnames of certificate files in your", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. You need to change these to the actual pathnames of certificate files in your", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4589"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo use-context dev-frontend", "completion": "This shell code Configure Access to Multiple Clusters | Now whenever you enter a kubectl command, the action will apply to the cluster, and namespace listed in the dev-frontend context. And the command will use the credentials of the user listed in the dev", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | Now whenever you enter a kubectl command, the action will apply to the cluster, and namespace listed in the dev-frontend context. 
And the command will use the credentials of the user listed in the dev", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4590"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This shell code Configure Access to Multiple Clusters | The output shows configuration information associated with the dev-frontend context:", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | The output shows configuration information associated with the dev-frontend context:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4591"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\ncontexts:\n- context:\n cluster: development\n namespace: frontend", "completion": " user: developer\n name: dev-frontend\ncurrent-context: dev-frontend\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4592"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\ncurrent-context: dev-frontend\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: 
fake-key-file", "completion": "This yaml code Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4593"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo use-context exp-test", "completion": "This shell code Configure Access to Multiple Clusters | Now any kubectl command you give will apply to the default namespace of the test cluster. And the command will use the credentials of the user listed in the exp-test context.", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | Now any kubectl command you give will apply to the default namespace of the test cluster. 
And the command will use the credentials of the user listed in the exp-test context.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4594"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This shell code Configure Access to Multiple Clusters | Finally, suppose you want to work for a while in the storage namespace of the development cluster.", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | Finally, suppose you want to work for a while in the storage namespace of the development cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4595"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo use-context dev-storage", "completion": "This shell code Configure Access to Multiple Clusters | View configuration associated with the new current context, dev-storage.", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | View configuration associated with the new current context, dev-storage.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4596"} {"prompt": "Explain this shell code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This shell code Configure Access to Multiple Clusters | In your config-exercise directory, create a file named config-demo-2 with this content:", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | In your config-exercise directory, create a file named config-demo-2 with this content:", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4597"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\ncontexts:", "completion": "- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4598"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\ncontexts:\n- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up", "completion": "This yaml code Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4599"} {"prompt": "Explain this shell code:\n\nexport KUBECONFIG=\"${KUBECONFIG}:config-demo:config-demo-2\"", "completion": "This shell code Configure Access to Multiple Clusters | In your config-exercise directory, enter this command:", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | In your config-exercise directory, enter this command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4600"} {"prompt": "Complete this yaml code:\n\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n 
user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: ramp\n user: developer", "completion": " name: dev-ramp-up\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4601"} {"prompt": "Explain this yaml code:\n\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test", "completion": "This yaml code Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4602"} {"prompt": "Explain this shell code:\n\nexport KUBECONFIG=\"${KUBECONFIG}:${HOME}/.kube/config\"", "completion": "This shell code Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. 
In your config-exercise directory, enter:", "language": "shell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4603"} {"prompt": "Explain this powershell code:\n\n$Env:KUBECONFIG=\"$Env:KUBECONFIG;$HOME\\.kube\\config\"", "completion": "This powershell code Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "language": "powershell", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4604"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\nclusters:\n- cluster:\n name: development\n- cluster:\n name: test\n", "completion": "users:\n- name: developer\n- name: experimenter\n\ncontexts:\n- context:\n name: dev-frontend\n- context:\n name: dev-storage\n- context:\n name: exp-test", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. 
Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4605"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\nclusters:\n- cluster:\n name: development\n- cluster:\n name: test\n\nusers:\n- name: developer\n- name: experimenter\n\ncontexts:\n- context:\n name: dev-frontend\n- context:\n name: dev-storage\n- context:\n name: exp-test", "completion": "This yaml code Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | A configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4606"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file\nkubectl config --kubeconfig=config-demo set-cluster test --server=https://5.6.7.8 --insecure-skip-tls-verify", "completion": "This bash code Configure Access to Multiple Clusters | Add user details to your configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Add user details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4607"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo 
set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file\nkubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password", "completion": "This bash code Configure Access to Multiple Clusters | Add context details to your configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Add context details to your configuration file:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4608"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer\nkubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer\nkubectl config --kubeconfig=config-demo set-context exp-test --cluster=test --namespace=default --user=experimenter", "completion": "This bash code Configure Access to Multiple Clusters | Open your config-demo file to see the added details. As an alternative to opening the config-demo file, you can use the config view command.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Open your config-demo file to see the added details. 
As an alternative to opening the config-demo file, you can use the config view command.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4609"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\n- cluster:\n insecure-skip-tls-verify: true\n server: https://5.6.7.8\n name: test\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage", "completion": "- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file\n- name: experimenter\n user:\n # Documentation note (this comment is NOT part of the command output).\n # Storing passwords in Kubernetes client config is risky.\n # A better alternative would be to use a credential plugin\n # and store the credentials separately.\n # See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins\n password: some-password\n username: exp", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. 
You need to change these to the actual pathnames of certificate files in your", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4610"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\n- cluster:\n insecure-skip-tls-verify: true\n server: https://5.6.7.8\n name: test\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file\n- name: experimenter\n user:\n # Documentation note (this comment is NOT part of the command output).\n # Storing passwords in Kubernetes client config is risky.\n # A better alternative would be to use a credential plugin\n # and store the credentials separately.\n # See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins\n password: some-password\n username: exp", "completion": "This yaml code Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. You need to change these to the actual pathnames of certificate files in your", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | The fake-ca-file, fake-cert-file and fake-key-file above are the placeholders for the pathnames of the certificate files. 
You need to change these to the actual pathnames of certificate files in your", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4611"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo use-context dev-frontend", "completion": "This bash code Configure Access to Multiple Clusters | Now whenever you enter a kubectl command, the action will apply to the cluster, and namespace listed in the dev-frontend context. And the command will use the credentials of the user listed in the dev", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Now whenever you enter a kubectl command, the action will apply to the cluster, and namespace listed in the dev-frontend context. And the command will use the credentials of the user listed in the dev", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4612"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This bash code Configure Access to Multiple Clusters | The output shows configuration information associated with the dev-frontend context:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | The output shows configuration information associated with the dev-frontend context:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4613"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\ncontexts:\n- context:\n cluster: development\n namespace: frontend", "completion": " user: developer\n name: dev-frontend\ncurrent-context: dev-frontend\nkind: 
Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4614"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority: fake-ca-file\n server: https://1.2.3.4\n name: development\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\ncurrent-context: dev-frontend\nkind: Config\npreferences: {}\nusers:\n- name: developer\n user:\n client-certificate: fake-cert-file\n client-key: fake-key-file", "completion": "This yaml code Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | Now suppose you want to work for a while in the test cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4615"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo use-context exp-test", "completion": "This bash code Configure Access to Multiple Clusters | Now any kubectl command you give will apply to the default namespace of the test cluster. And the command will use the credentials of the user listed in the exp-test context.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Now any kubectl command you give will apply to the default namespace of the test cluster. 
And the command will use the credentials of the user listed in the exp-test context.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4616"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This bash code Configure Access to Multiple Clusters | Finally, suppose you want to work for a while in the storage namespace of the development cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | Finally, suppose you want to work for a while in the storage namespace of the development cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4617"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo use-context dev-storage", "completion": "This bash code Configure Access to Multiple Clusters | View configuration associated with the new current context, dev-storage.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | View configuration associated with the new current context, dev-storage.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4618"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=config-demo view --minify", "completion": "This bash code Configure Access to Multiple Clusters | In your config-exercise directory, create a file named config-demo-2 with this content:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Access to Multiple Clusters | In your config-exercise directory, create a file named config-demo-2 with this content:", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4619"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\ncontexts:", "completion": "- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4620"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\npreferences: {}\n\ncontexts:\n- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up", "completion": "This yaml code Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Access to Multiple Clusters | The preceding configuration file defines a new context named dev-ramp-up.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4621"} {"prompt": "Explain this text code:\n\nexport KUBECONFIG=\"${KUBECONFIG}:config-demo:config-demo-2\"", "completion": "This text code Configure Access to Multiple Clusters | In your config-exercise directory, enter this command:", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | In your config-exercise directory, enter this command:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4622"} {"prompt": "Complete this text code:\n\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n 
user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: ramp\n user: developer", "completion": " name: dev-ramp-up\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test", "language": "text", "type": "code-example", "context": "Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_completion_4623"} {"prompt": "Explain this text code:\n\ncontexts:\n- context:\n cluster: development\n namespace: frontend\n user: developer\n name: dev-frontend\n- context:\n cluster: development\n namespace: ramp\n user: developer\n name: dev-ramp-up\n- context:\n cluster: development\n namespace: storage\n user: developer\n name: dev-storage\n- context:\n cluster: test\n namespace: default\n user: experimenter\n name: exp-test", "completion": "This text code Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | For more information about how kubeconfig files are merged, see Organizing Cluster Access Using kubeconfig Files", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4624"} {"prompt": "Explain this text code:\n\nexport KUBECONFIG=\"${KUBECONFIG}:${HOME}/.kube/config\"", "completion": "This text code Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. 
In your config-exercise directory, enter:", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4625"} {"prompt": "Explain this text code:\n\n$Env:KUBECONFIG=\"$Env:KUBECONFIG;$HOME\\.kube\\config\"", "completion": "This text code Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. In your config-exercise directory, enter:", "language": "text", "type": "code-example_explanation", "context": "Configure Access to Multiple Clusters | View configuration information merged from all the files that are now listed in your KUBECONFIG environment variable. 
In your config-exercise directory, enter:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/", "id": "k8s_00162_code_explain_4626"} {"prompt": "Complete this json code:\n\n[Unit]\nDescription=kubelet: The Kubernetes Node Agent\nDocumentation=https://kubernetes.io/docs/home/\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/bin/kubelet", "completion": "# Configures the watchdog timeout\nWatchdogSec=30s\nRestart=on-failure\nStartLimitInterval=0\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target", "language": "json", "type": "api-response", "context": "Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "source": "https://kubernetes.io/docs/reference/node/systemd-watchdog/", "id": "k8s_00164_code_completion_4627"} {"prompt": "Explain this json code:\n\n[Unit]\nDescription=kubelet: The Kubernetes Node Agent\nDocumentation=https://kubernetes.io/docs/home/\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/bin/kubelet\n# Configures the watchdog timeout\nWatchdogSec=30s\nRestart=on-failure\nStartLimitInterval=0\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target", "completion": "This json code Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "language": "json", "type": "api-response_explanation", "context": "Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "source": "https://kubernetes.io/docs/reference/node/systemd-watchdog/", "id": "k8s_00164_code_explain_4628"} {"prompt": "Complete this systemd code:\n\n[Unit]\nDescription=kubelet: The Kubernetes Node Agent\nDocumentation=https://kubernetes.io/docs/home/\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/bin/kubelet", "completion": "# Configures the watchdog 
timeout\nWatchdogSec=30s\nRestart=on-failure\nStartLimitInterval=0\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target", "language": "systemd", "type": "code-example", "context": "Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "source": "https://kubernetes.io/docs/reference/node/systemd-watchdog/", "id": "k8s_00164_code_completion_4629"} {"prompt": "Explain this systemd code:\n\n[Unit]\nDescription=kubelet: The Kubernetes Node Agent\nDocumentation=https://kubernetes.io/docs/home/\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/bin/kubelet\n# Configures the watchdog timeout\nWatchdogSec=30s\nRestart=on-failure\nStartLimitInterval=0\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target", "completion": "This systemd code Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "language": "systemd", "type": "code-example_explanation", "context": "Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "source": "https://kubernetes.io/docs/reference/node/systemd-watchdog/", "id": "k8s_00164_code_explain_4630"} {"prompt": "Complete this json code:\n\n[Unit]\nDescription=kubelet: The Kubernetes Node Agent\nDocumentation=https://kubernetes.io/docs/home/\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/bin/kubelet", "completion": "# Configures the watchdog timeout\nWatchdogSec=30s\nRestart=on-failure\nStartLimitInterval=0\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target", "language": "json", "type": "api-response", "context": "Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "source": "https://kubernetes.io/docs/reference/node/systemd-watchdog/", "id": "k8s_00164_code_completion_4631"} {"prompt": "Explain this json code:\n\n[Unit]\nDescription=kubelet: The 
Kubernetes Node Agent\nDocumentation=https://kubernetes.io/docs/home/\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/bin/kubelet\n# Configures the watchdog timeout\nWatchdogSec=30s\nRestart=on-failure\nStartLimitInterval=0\nRestartSec=10\n\n[Install]\nWantedBy=multi-user.target", "completion": "This json code Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "language": "json", "type": "api-response_explanation", "context": "Kubelet Systemd Watchdog | For more details about systemd configuration, refer to the systemd documentation", "source": "https://kubernetes.io/docs/reference/node/systemd-watchdog/", "id": "k8s_00164_code_explain_4632"} {"prompt": "Explain this bash code:\n\nkubectl create serviceaccount NAME [--dry-run=server|client|none]", "completion": "This bash code kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/", "id": "k8s_00165_code_explain_4633"} {"prompt": "Explain this text code:\n\n# Create a new service account named my-service-account\n kubectl create serviceaccount my-service-account", "completion": "This text code kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/", "id": "k8s_00165_code_explain_4634"} {"prompt": "Explain this bash code:\n\nkubectl create serviceaccount NAME [--dry-run=server|client|none]", "completion": "This bash code kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/", "id": "k8s_00165_code_explain_4635"} {"prompt": "Explain this text code:\n\n# Create a new service account named my-service-account\n kubectl create serviceaccount my-service-account", "completion": "This text code kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create serviceaccount | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_serviceaccount/", "id": "k8s_00165_code_explain_4636"} {"prompt": "Complete this text code:\n\n---\ntitle: My PR\n---\nFigure 17 shows a simple A to B process.\nsome markdown text\n...", "completion": "{{< mermaid >}} \n graph TB\n A --> B\n{{< /mermaid >}}\n\nFigure 17. A to B\nmore text", "language": "text", "type": "code-example", "context": "Diagram Guide | For more details on diagram captions, see How to use captions.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4637"} {"prompt": "Explain this text code:\n\n---\ntitle: My PR\n---\nFigure 17 shows a simple A to B process.\nsome markdown text\n...\n{{< mermaid >}} \n graph TB\n A --> B\n{{< /mermaid >}}\n\nFigure 17. A to B\nmore text", "completion": "This text code Diagram Guide | For more details on diagram captions, see How to use captions.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | For more details on diagram captions, see How to use captions.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4638"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/static/images/boxnet.svg\" alt=\"Boxnet figure\" class=\"diagram-large\" caption=\"Figure 14. 
Boxnet caption\" >}}", "completion": "This json code Diagram Guide | For more details on diagram captions, see How to use captions.", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | For more details on diagram captions, see How to use captions.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4639"} {"prompt": "Explain this text code:\n\n\n", "completion": "This text code Diagram Guide | The following lists advantages of the Mermaid+SVG method:", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | The following lists advantages of the Mermaid+SVG method:", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4640"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/static/images/apple.svg\" alt=\"red-apple-figure\" class=\"diagram-large\" caption=\"Figure 9. A Big Red Apple\" >}}", "completion": "This json code Diagram Guide | If your external drawing tool permits:", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | If your external drawing tool permits:", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4641"} {"prompt": "Complete this text code:\n\ngraph TB\n subgraph \"zoneB\"\n n3(Node3)\n n4(Node4)\n end\n subgraph \"zoneA\"\n n1(Node1)", "completion": " n2(Node2)\n end\n \n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class n1,n2,n3,n4 k8s;\n class zoneA,zoneB cluster;", "language": "text", "type": "code-example", "context": "Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4642"} {"prompt": "Explain 
this text code:\n\ngraph TB\n subgraph \"zoneB\"\n n3(Node3)\n n4(Node4)\n end\n subgraph \"zoneA\"\n n1(Node1)\n n2(Node2)\n end\n \n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class n1,n2,n3,n4 k8s;\n class zoneA,zoneB cluster;", "completion": "This text code Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4643"} {"prompt": "Complete this text code:\n\ngraph LR;\n client([client])-. Ingress-managed\n    load balancer .->ingress[Ingress];\n ingress-->|routing rule|service[Service];\n subgraph cluster\n ingress;\n service-->pod1[Pod];\n service-->pod2[Pod];", "completion": " end\n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class ingress,service,pod1,pod2 k8s;\n class client plain;\n class cluster cluster;", "language": "text", "type": "code-example", "context": "Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4644"} {"prompt": "Explain this text code:\n\ngraph LR;\n client([client])-. Ingress-managed\n    load balancer .->ingress[Ingress];\n ingress-->|routing rule|service[Service];\n subgraph cluster\n ingress;\n service-->pod1[Pod];\n service-->pod2[Pod];\n end\n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class ingress,service,pod1,pod2 k8s;\n class client plain;\n class cluster cluster;", "completion": "This text code Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4645"} {"prompt": "Complete this text code:\n\n%%{init:{\"theme\":\"neutral\"}}%%\nsequenceDiagram\n actor me\n participant apiSrv as control plane\n\n    api-server\n participant etcd as control plane\n\n    etcd datastore\n participant cntrlMgr as control plane\n\n    controller\n    manager\n participant sched as control plane\n\n    scheduler\n participant kubelet as node\n\n    kubelet\n participant container as node\n\n    container\n    runtime\n me->>apiSrv: 1. kubectl create -f pod.yaml", "completion": " apiSrv-->>etcd: 2. save new state\n cntrlMgr->>apiSrv: 3. check for changes\n sched->>apiSrv: 4. watch for unassigned pods(s)\n apiSrv->>sched: 5. notify about pod w nodename=\" \"\n sched->>apiSrv: 6. assign pod to node\n apiSrv-->>etcd: 7. save new state\n kubelet->>apiSrv: 8. look for newly assigned pod(s)\n apiSrv->>kubelet: 9. bind pod to node\n kubelet->>container: 10. start container\n kubelet->>apiSrv: 11. update pod status\n apiSrv-->>etcd: 12. save new state", "language": "text", "type": "code-example", "context": "Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4646"} {"prompt": "Explain this text code:\n\n%%{init:{\"theme\":\"neutral\"}}%%\nsequenceDiagram\n actor me\n participant apiSrv as control plane\n\n    api-server\n participant etcd as control plane\n\n    etcd datastore\n participant cntrlMgr as control plane\n\n    controller\n    manager\n participant sched as control plane\n\n    scheduler\n participant kubelet as node\n\n    kubelet\n participant container as node\n\n    container\n    runtime\n me->>apiSrv: 1. kubectl create -f pod.yaml\n apiSrv-->>etcd: 2. save new state\n cntrlMgr->>apiSrv: 3. check for changes\n sched->>apiSrv: 4. watch for unassigned pods(s)\n apiSrv->>sched: 5. notify about pod w nodename=\" \"\n sched->>apiSrv: 6. assign pod to node\n apiSrv-->>etcd: 7. save new state\n kubelet->>apiSrv: 8. look for newly assigned pod(s)\n apiSrv->>kubelet: 9. bind pod to node\n kubelet->>container: 10. start container\n kubelet->>apiSrv: 11. update pod status\n apiSrv-->>etcd: 12. save new state", "completion": "This text code Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4647"} {"prompt": "Explain this text code:\n\nclassDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; // defines style for the k8s class\nclass ingress,service,pod1,pod2 k8s; // k8s class is applied to elements ingress, service, pod1 and pod2.", "completion": "This text code Diagram Guide | You can include one or multiple classDef and class statements in your diagram. You can also use the official K8s #326ce5 hex color code for K8s components in your diagram.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | You can include one or multiple classDef and class statements in your diagram. 
You can also use the official K8s #326ce5 hex color code for K8s components in your diagram.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4648"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Kubernetes pod running inside a cluster\" class=\"diagram-large\" caption=\"Figure 4. Kubernetes Architecture Components\" >}}", "completion": "This json code Diagram Guide | You should pass the src, alt, class and caption values into the {{< figure >}} shortcode. You can adjust the size of the diagram using diagram-large, diagram-medium and diagram-small classes.", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | You should pass the src, alt, class and caption values into the {{< figure >}} shortcode. You can adjust the size of the diagram using diagram-large, diagram-medium and diagram-small classes.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4649"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Kubernetes pod running inside a cluster\" class=\"diagram-large\" caption=\"Figure 4. Kubernetes Architecture Components\" >}}", "completion": "This json code Diagram Guide | If you define your diagram using inline Mermaid code, then you should use Markdown text.", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | If you define your diagram using inline Mermaid code, then you should use Markdown text.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4650"} {"prompt": "Explain this text code:\n\nFigure 10 depicts the components of the Kubernetes architecture.\nThe control plane ...", "completion": "This text code Diagram Guide | Diagram referrals are optional and there are cases where they might not be suitable. 
If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a d", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Diagram referrals are optional and there are cases where they might not be suitable. If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a d", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4651"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Kubernetes pod running inside a cluster\" class=\"diagram-large\" caption=\"Figure 10. Kubernetes Architecture.\" link=\"https://kubernetes.io/docs/concepts/overview/components/\" >}}", "completion": "This json code Diagram Guide | Figure 10. Kubernetes Architecture.", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | Figure 10. Kubernetes Architecture.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4652"} {"prompt": "Complete this text code:\n\n---\ntitle: My PR\n---\nFigure 17 shows a simple A to B process.\nsome markdown text\n...", "completion": "{{< mermaid >}} \n graph TB\n A --> B\n{{< /mermaid >}}\n\nFigure 17. A to B\nmore text", "language": "text", "type": "code-example", "context": "Diagram Guide | For more details on diagram captions, see How to use captions.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4653"} {"prompt": "Explain this text code:\n\n---\ntitle: My PR\n---\nFigure 17 shows a simple A to B process.\nsome markdown text\n...\n{{< mermaid >}} \n graph TB\n A --> B\n{{< /mermaid >}}\n\nFigure 17. 
A to B\nmore text", "completion": "This text code Diagram Guide | For more details on diagram captions, see How to use captions.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | For more details on diagram captions, see How to use captions.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4654"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/static/images/boxnet.svg\" alt=\"Boxnet figure\" class=\"diagram-large\" caption=\"Figure 14. Boxnet caption\" >}}", "completion": "This json code Diagram Guide | For more details on diagram captions, see How to use captions.", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | For more details on diagram captions, see How to use captions.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4655"} {"prompt": "Explain this text code:\n\n\n", "completion": "This text code Diagram Guide | The following lists advantages of the Mermaid+SVG method:", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | The following lists advantages of the Mermaid+SVG method:", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4656"} {"prompt": "Explain this text code:\n\n{{< figure src=\"/static/images/apple.svg\" alt=\"red-apple-figure\" class=\"diagram-large\" caption=\"Figure 9. 
A Big Red Apple\" >}}", "completion": "This text code Diagram Guide | If your external drawing tool permits:", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | If your external drawing tool permits:", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4657"} {"prompt": "Complete this text code:\n\ngraph TB\n subgraph \"zoneB\"\n n3(Node3)\n n4(Node4)\n end\n subgraph \"zoneA\"\n n1(Node1)", "completion": " n2(Node2)\n end\n \n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class n1,n2,n3,n4 k8s;\n class zoneA,zoneB cluster;", "language": "text", "type": "code-example", "context": "Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4658"} {"prompt": "Explain this text code:\n\ngraph TB\n subgraph \"zoneB\"\n n3(Node3)\n n4(Node4)\n end\n subgraph \"zoneA\"\n n1(Node1)\n n2(Node2)\n end\n \n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class n1,n2,n3,n4 k8s;\n class zoneA,zoneB cluster;", "completion": "This text code Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4659"} {"prompt": "Complete this text code:\n\ngraph LR;\n client([client])-. Ingress-managed\n    load balancer .->ingress[Ingress];\n ingress-->|routing rule|service[Service];\n subgraph cluster\n ingress;\n service-->pod1[Pod];\n service-->pod2[Pod];", "completion": " end\n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class ingress,service,pod1,pod2 k8s;\n class client plain;\n class cluster cluster;", "language": "text", "type": "code-example", "context": "Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4660"} {"prompt": "Explain this text code:\n\ngraph LR;\n client([client])-. Ingress-managed\n    load balancer .->ingress[Ingress];\n ingress-->|routing rule|service[Service];\n subgraph cluster\n ingress;\n service-->pod1[Pod];\n service-->pod2[Pod];\n end\n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class ingress,service,pod1,pod2 k8s;\n class client plain;\n class cluster cluster;", "completion": "This text code Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4661"} {"prompt": "Complete this text code:\n\n%%{init:{\"theme\":\"neutral\"}}%%\nsequenceDiagram\n actor me\n participant apiSrv as control plane\n\n    api-server\n participant etcd as control plane\n\n    etcd datastore\n participant cntrlMgr as control plane

    controller
    manager\n participant sched as control plane

    scheduler\n participant kubelet as node

    kubelet\n participant container as node

    container
    runtime\n me->>apiSrv: 1. kubectl create -f pod.yaml", "completion": " apiSrv-->>etcd: 2. save new state\n cntrlMgr->>apiSrv: 3. check for changes\n sched->>apiSrv: 4. watch for unassigned pods(s)\n apiSrv->>sched: 5. notify about pod w nodename=\" \"\n sched->>apiSrv: 6. assign pod to node\n apiSrv-->>etcd: 7. save new state\n kubelet->>apiSrv: 8. look for newly assigned pod(s)\n apiSrv->>kubelet: 9. bind pod to node\n kubelet->>container: 10. start container\n kubelet->>apiSrv: 11. update pod status\n apiSrv-->>etcd: 12. save new state", "language": "text", "type": "code-example", "context": "Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4662"} {"prompt": "Explain this text code:\n\n%%{init:{\"theme\":\"neutral\"}}%%\nsequenceDiagram\n actor me\n participant apiSrv as control plane

    api-server\n participant etcd as control plane

    etcd datastore\n participant cntrlMgr as control plane

    controller
    manager\n participant sched as control plane

    scheduler\n participant kubelet as node

    kubelet\n participant container as node

    container
    runtime\n me->>apiSrv: 1. kubectl create -f pod.yaml\n apiSrv-->>etcd: 2. save new state\n cntrlMgr->>apiSrv: 3. check for changes\n sched->>apiSrv: 4. watch for unassigned pods(s)\n apiSrv->>sched: 5. notify about pod w nodename=\" \"\n sched->>apiSrv: 6. assign pod to node\n apiSrv-->>etcd: 7. save new state\n kubelet->>apiSrv: 8. look for newly assigned pod(s)\n apiSrv->>kubelet: 9. bind pod to node\n kubelet->>container: 10. start container\n kubelet->>apiSrv: 11. update pod status\n apiSrv-->>etcd: 12. save new state", "completion": "This text code Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4663"} {"prompt": "Explain this text code:\n\nclassDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; // defines style for the k8s class\nclass ingress,service,pod1,pod2 k8s; // k8s class is applied to elements ingress, service, pod1 and pod2.", "completion": "This text code Diagram Guide | You can include one or multiple classDef and class statements in your diagram. You can also use the official K8s #326ce5 hex color code for K8s components in your diagram.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | You can include one or multiple classDef and class statements in your diagram. 
You can also use the official K8s #326ce5 hex color code for K8s components in your diagram.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4664"} {"prompt": "Explain this none code:\n\n{{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Kubernetes pod running inside a cluster\" class=\"diagram-large\" caption=\"Figure 4. Kubernetes Architecture Components\" >}}", "completion": "This none code Diagram Guide | You should pass the src, alt, class and caption values into the {{< figure >}} shortcode. You can adjust the size of the diagram using diagram-large, diagram-medium and diagram-small classes.", "language": "none", "type": "code-example_explanation", "context": "Diagram Guide | You should pass the src, alt, class and caption values into the {{< figure >}} shortcode. You can adjust the size of the diagram using diagram-large, diagram-medium and diagram-small classes.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4665"} {"prompt": "Explain this none code:\n\n{{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Kubernetes pod running inside a cluster\" class=\"diagram-large\" caption=\"Figure 4. Kubernetes Architecture Components\" >}}", "completion": "This none code Diagram Guide | If you define your diagram using inline Mermaid code, then you should use Markdown text.", "language": "none", "type": "code-example_explanation", "context": "Diagram Guide | If you define your diagram using inline Mermaid code, then you should use Markdown text.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4666"} {"prompt": "Explain this text code:\n\nFigure 10 depicts the components of the Kubernetes architecture.\nThe control plane ...", "completion": "This text code Diagram Guide | Diagram referrals are optional and there are cases where they might not be suitable. 
If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a d", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Diagram referrals are optional and there are cases where they might not be suitable. If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a d", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4667"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Kubernetes pod running inside a cluster\" class=\"diagram-large\" caption=\"Figure 10. Kubernetes Architecture.\" link=\"https://kubernetes.io/docs/concepts/overview/components/\" >}}", "completion": "This json code Diagram Guide | Figure 10. Kubernetes Architecture.", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | Figure 10. Kubernetes Architecture.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4668"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/static/images/boxnet.svg\" alt=\"Boxnet figure\" class=\"diagram-large\" caption=\"Figure 14. Boxnet caption\" >}}", "completion": "This json code Diagram Guide | For more details on diagram captions, see How to use captions.", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | For more details on diagram captions, see How to use captions.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4669"} {"prompt": "Explain this json code:\n\n{{< figure src=\"/static/images/apple.svg\" alt=\"red-apple-figure\" class=\"diagram-large\" caption=\"Figure 9. 
A Big Red Apple\" >}}", "completion": "This json code Diagram Guide | If your external drawing tool permits:", "language": "json", "type": "api-response_explanation", "context": "Diagram Guide | If your external drawing tool permits:", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4670"} {"prompt": "Complete this text code:\n\ngraph TB\n subgraph \"zoneB\"\n n3(Node3)\n n4(Node4)\n end\n subgraph \"zoneA\"\n n1(Node1)", "completion": " n2(Node2)\n end\n \n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class n1,n2,n3,n4 k8s;\n class zoneA,zoneB cluster;", "language": "text", "type": "code-example", "context": "Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4671"} {"prompt": "Explain this text code:\n\ngraph TB\n subgraph \"zoneB\"\n n3(Node3)\n n4(Node4)\n end\n subgraph \"zoneA\"\n n1(Node1)\n n2(Node2)\n end\n \n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class n1,n2,n3,n4 k8s;\n class zoneA,zoneB cluster;", "completion": "This text code Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Figure 7 shows the diagram appearing in the What is Ingress page.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4672"} {"prompt": "Complete this text code:\n\ngraph LR;\n client([client])-. Ingress-managed
    load balancer .->ingress[Ingress];\n ingress-->|routing rule|service[Service];\n subgraph cluster\n ingress;\n service-->pod1[Pod];\n service-->pod2[Pod];", "completion": " end\n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class ingress,service,pod1,pod2 k8s;\n class client plain;\n class cluster cluster;", "language": "text", "type": "code-example", "context": "Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4673"} {"prompt": "Explain this text code:\n\ngraph LR;\n client([client])-. Ingress-managed
    load balancer .->ingress[Ingress];\n ingress-->|routing rule|service[Service];\n subgraph cluster\n ingress;\n service-->pod1[Pod];\n service-->pod2[Pod];\n end\n classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;\n classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;\n classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;\n class ingress,service,pod1,pod2 k8s;\n class client plain;\n class cluster cluster;", "completion": "This text code Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4674"} {"prompt": "Complete this text code:\n\n%%{init:{\"theme\":\"neutral\"}}%%\nsequenceDiagram\n actor me\n participant apiSrv as control plane

    api-server\n participant etcd as control plane

    etcd datastore\n participant cntrlMgr as control plane

    controller
    manager\n participant sched as control plane

    scheduler\n participant kubelet as node

    kubelet\n participant container as node

    container
    runtime\n me->>apiSrv: 1. kubectl create -f pod.yaml", "completion": " apiSrv-->>etcd: 2. save new state\n cntrlMgr->>apiSrv: 3. check for changes\n sched->>apiSrv: 4. watch for unassigned pods(s)\n apiSrv->>sched: 5. notify about pod w nodename=\" \"\n sched->>apiSrv: 6. assign pod to node\n apiSrv-->>etcd: 7. save new state\n kubelet->>apiSrv: 8. look for newly assigned pod(s)\n apiSrv->>kubelet: 9. bind pod to node\n kubelet->>container: 10. start container\n kubelet->>apiSrv: 11. update pod status\n apiSrv-->>etcd: 12. save new state", "language": "text", "type": "code-example", "context": "Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_completion_4675"} {"prompt": "Explain this text code:\n\n%%{init:{\"theme\":\"neutral\"}}%%\nsequenceDiagram\n actor me\n participant apiSrv as control plane

    api-server\n participant etcd as control plane

    etcd datastore\n participant cntrlMgr as control plane

    controller
    manager\n participant sched as control plane

    scheduler\n participant kubelet as node

    kubelet\n participant container as node

    container
    runtime\n me->>apiSrv: 1. kubectl create -f pod.yaml\n apiSrv-->>etcd: 2. save new state\n cntrlMgr->>apiSrv: 3. check for changes\n sched->>apiSrv: 4. watch for unassigned pods(s)\n apiSrv->>sched: 5. notify about pod w nodename=\" \"\n sched->>apiSrv: 6. assign pod to node\n apiSrv-->>etcd: 7. save new state\n kubelet->>apiSrv: 8. look for newly assigned pod(s)\n apiSrv->>kubelet: 9. bind pod to node\n kubelet->>container: 10. start container\n kubelet->>apiSrv: 11. update pod status\n apiSrv-->>etcd: 12. save new state", "completion": "This text code Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4676"} {"prompt": "Explain this text code:\n\nclassDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; // defines style for the k8s class\nclass ingress,service,pod1,pod2 k8s; // k8s class is applied to elements ingress, service, pod1 and pod2.", "completion": "This text code Diagram Guide | You can include one or multiple classDef and class statements in your diagram. You can also use the official K8s #326ce5 hex color code for K8s components in your diagram.", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | You can include one or multiple classDef and class statements in your diagram. 
You can also use the official K8s #326ce5 hex color code for K8s components in your diagram.", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4677"} {"prompt": "Explain this text code:\n\nFigure 10 depicts the components of the Kubernetes architecture.\nThe control plane ...", "completion": "This text code Diagram Guide | Diagram referrals are optional and there are cases where they might not be suitable. If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a d", "language": "text", "type": "code-example_explanation", "context": "Diagram Guide | Diagram referrals are optional and there are cases where they might not be suitable. If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a d", "source": "https://kubernetes.io/docs/contribute/style/diagram-guide/", "id": "k8s_00167_code_explain_4678"} {"prompt": "Explain this bash code:\n\nkubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... 
KEY_N=VAL_N:TAINT_EFFECT_N", "completion": "This bash code kubectl taint | Select all nodes in the cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl taint | Select all nodes in the cluster", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_taint/", "id": "k8s_00169_code_explain_4679"} {"prompt": "Complete this text code:\n\n# Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'\n # If a taint with that key and effect already exists, its value is replaced as specified\n kubectl taint nodes foo dedicated=special-user:NoSchedule\n \n # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists\n kubectl taint nodes foo dedicated:NoSchedule-\n ", "completion": " # Remove from node 'foo' all the taints with key 'dedicated'\n kubectl taint nodes foo dedicated-\n \n # Add a taint with key 'dedicated' on nodes having label myLabel=X\n kubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule\n \n # Add to node 'foo' a taint with key 'bar' and no value\n kubectl taint nodes foo bar:NoSchedule", "language": "text", "type": "code-example", "context": "kubectl taint | Select all nodes in the cluster", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_taint/", "id": "k8s_00169_code_completion_4680"} {"prompt": "Explain this text code:\n\n# Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'\n # If a taint with that key and effect already exists, its value is replaced as specified\n kubectl taint nodes foo dedicated=special-user:NoSchedule\n \n # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists\n kubectl taint nodes foo dedicated:NoSchedule-\n \n # Remove from node 'foo' all the taints with key 'dedicated'\n kubectl taint nodes foo dedicated-\n \n # Add a taint with key 'dedicated' on nodes having label myLabel=X\n 
kubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule\n \n # Add to node 'foo' a taint with key 'bar' and no value\n kubectl taint nodes foo bar:NoSchedule", "completion": "This text code kubectl taint | Select all nodes in the cluster", "language": "text", "type": "code-example_explanation", "context": "kubectl taint | Select all nodes in the cluster", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_taint/", "id": "k8s_00169_code_explain_4681"} {"prompt": "Explain this bash code:\n\nkubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N", "completion": "This bash code kubectl taint | Select all nodes in the cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl taint | Select all nodes in the cluster", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_taint/", "id": "k8s_00169_code_explain_4682"} {"prompt": "Complete this text code:\n\n# Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'\n # If a taint with that key and effect already exists, its value is replaced as specified\n kubectl taint nodes foo dedicated=special-user:NoSchedule\n \n # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists\n kubectl taint nodes foo dedicated:NoSchedule-\n ", "completion": " # Remove from node 'foo' all the taints with key 'dedicated'\n kubectl taint nodes foo dedicated-\n \n # Add a taint with key 'dedicated' on nodes having label myLabel=X\n kubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule\n \n # Add to node 'foo' a taint with key 'bar' and no value\n kubectl taint nodes foo bar:NoSchedule", "language": "text", "type": "code-example", "context": "kubectl taint | Select all nodes in the cluster", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_taint/", "id": "k8s_00169_code_completion_4683"} {"prompt": "Explain this text code:\n\n# 
Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'\n # If a taint with that key and effect already exists, its value is replaced as specified\n kubectl taint nodes foo dedicated=special-user:NoSchedule\n \n # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists\n kubectl taint nodes foo dedicated:NoSchedule-\n \n # Remove from node 'foo' all the taints with key 'dedicated'\n kubectl taint nodes foo dedicated-\n \n # Add a taint with key 'dedicated' on nodes having label myLabel=X\n kubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule\n \n # Add to node 'foo' a taint with key 'bar' and no value\n kubectl taint nodes foo bar:NoSchedule", "completion": "This text code kubectl taint | Select all nodes in the cluster", "language": "text", "type": "code-example_explanation", "context": "kubectl taint | Select all nodes in the cluster", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_taint/", "id": "k8s_00169_code_explain_4684"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4685"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging 
Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4686"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This bash code Logging Architecture | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4687"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This text code Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. 
If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4688"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4689"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4690"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This bash code Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4691"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", 
"completion": "This bash code Logging Architecture | See the kubectl logs documentation for more details.", "language": "bash", "type": "kubectl-command_explanation", "context": "Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4692"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4693"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4694"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4695"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n 
volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4696"} {"prompt": "Complete this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "text", "type": "code-example", "context": "Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4697"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This text code Logging Architecture | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4698"} {"prompt": "Complete this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "text", "type": "code-example", "context": "Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. 
If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4699"} {"prompt": "Explain this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This text shows the second log stream (/var/log/2.log) of the counter pod: one INFO line per second. If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration, and you can configure the agent to parse log lines.", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4700"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "language": "yaml", "type": "kubernetes-manifest", "context": "Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4701"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "completion": "This YAML defines a ConfigMap named fluentd-config containing a fluentd configuration that tails /var/log/1.log and /var/log/2.log and forwards both streams to Google Cloud. A companion manifest describes a pod that has a sidecar container running fluentd; the pod mounts this ConfigMap as a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4702"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture |", 
"source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4703"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4704"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4705"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", 
"language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4706"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This shell code Logging Architecture | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4707"} {"prompt": "Explain this console code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This console code Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "console", "type": "code-example_explanation", "context": "Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. 
If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4708"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4709"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4710"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This shell code Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "shell", "type": "code-example_explanation", "context": "Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4711"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", 
"completion": "This shell code Logging Architecture | See the kubectl logs documentation for more details.", "language": "shell", "type": "code-example_explanation", "context": "Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4712"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4713"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4714"} {"prompt": "Explain this text code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml", "completion": "This text code Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4715"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": 
"https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4716"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4717"} {"prompt": "Complete this console code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "console", "type": "code-example", "context": "Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4718"} {"prompt": "Explain this console code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This console code Logging Architecture | The output is similar to:", "language": "console", "type": 
"code-example_explanation", "context": "Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4719"} {"prompt": "Complete this console code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "console", "type": "code-example", "context": "Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4720"} {"prompt": "Explain this console code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This console code Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "console", "type": "code-example_explanation", "context": "Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. 
If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4721"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "language": "yaml", "type": "kubernetes-manifest", "context": "Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4722"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "completion": "This YAML defines a ConfigMap named fluentd-config containing a fluentd configuration that tails /var/log/1.log and /var/log/2.log and forwards both streams to Google Cloud; a companion manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Logging Architecture | The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4723"} {"prompt": "Explain this text code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml", "completion": "This text is the path of the Kubernetes documentation example manifest admin/logging/two-files-counter-pod-agent-sidecar.yaml, which defines the counter pod with a fluentd logging-agent sidecar container.", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4724"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4725"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: 
FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4726"} {"prompt": "Complete this yaml code:\n\ndebug/counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter", "completion": "spec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4727"} {"prompt": "Explain this yaml code:\n\ndebug/counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4728"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", 
"type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4729"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4730"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This bash code Logging Architecture | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4731"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This text code Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. 
If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4732"} {"prompt": "Complete this yaml code:\n\ndebug/counter-pod-err.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err", "completion": "spec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4733"} {"prompt": "Explain this yaml code:\n\ndebug/counter-pod-err.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4734"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4735"} {"prompt": 
"Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4736"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This bash code Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4737"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", "completion": "This bash code Logging Architecture | See the kubectl logs documentation for more details.", "language": "bash", "type": "kubectl-command_explanation", "context": "Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4738"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >", "completion": " i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n 
volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4739"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4740"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4741"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4742"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));", "completion": " sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4743"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n 
mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4744"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4745"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: 
count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4746"} {"prompt": "Complete this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "text", "type": "code-example", "context": "Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4747"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This text code Logging Architecture | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4748"} {"prompt": "Complete this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "text", "type": "code-example", "context": "Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams 
automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4749"} {"prompt": "Explain this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This text code Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "text", "type": "code-example_explanation", "context": "Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4750"} {"prompt": "Complete this yaml code:\n\nadmin/logging/fluentd-sidecar-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "language": "yaml", "type": "kubernetes-manifest", "context": "Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd.
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4751"} {"prompt": "Explain this yaml code:\n\nadmin/logging/fluentd-sidecar-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "completion": "This yaml code Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4752"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "language": "yaml", "type": "kubernetes-manifest", "context": "Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4753"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "completion": "This yaml code Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4754"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", "type":
"pod-definition", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4755"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4756"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: 
fluentd-config", "language": "yaml", "type": "pod-definition", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_completion_4757"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/logging/", "id": "k8s_00170_code_explain_4758"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and 
metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4759"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4760"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4761"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "text", "type": "code-example_explanation", "context": "Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4762"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4763"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "text", "type": 
"code-example_explanation", "context": "Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4764"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4765"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4766"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4767"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4768"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4769"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4770"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4771"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4772"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4773"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4774"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within 
the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4775"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4776"} {"prompt": "Explain this bash code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This bash code Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. 
For example:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4777"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4778"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4779"} {"prompt": "Complete this text code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | a hostNetwork pod is selected by a 
podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4780"} {"prompt": "Explain this text code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "text", "type": "code-example_explanation", "context": "Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4781"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4782"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "text", "type": "code-example_explanation", "context": "Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4783"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4784"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4785"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4786"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4787"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4788"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4789"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-ingress.yaml", "completion": "This text code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "language": "text", "type": "code-example_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4790"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4791"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4792"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-allow-all-ingress.yaml", "completion": "This text code Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "text", "type": "code-example_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4793"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4794"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4795"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-egress.yaml", "completion": "This text code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "text", "type": "code-example_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4796"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4797"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4798"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-allow-all-egress.yaml", "completion": "This text code Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "text", "type": "code-example_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4799"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4800"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4801"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-all.yaml", "completion": "This text code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "text", "type": "code-example_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4802"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4803"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": 
"https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4804"} {"prompt": "Explain this text code:\n\nservice/networking/networkpolicy-multiport-egress.yaml", "completion": "This text code Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "text", "type": "code-example_explanation", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4805"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4806"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This 
yaml code Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4807"} {"prompt": "Explain this shell code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This shell code Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "shell", "type": "code-example_explanation", "context": "Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. 
For example:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4808"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4809"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4810"} {"prompt": "Complete this yaml code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | a hostNetwork pod is selected by a 
podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4811"} {"prompt": "Explain this yaml code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4812"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4813"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4814"} {"prompt": "Complete this yaml code:\n\nservice/networking/networkpolicy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16", "completion": " except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4815"} {"prompt": "Explain this yaml code:\n\nservice/networking/networkpolicy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4816"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4817"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4818"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4819"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "text", "type": "code-example_explanation", "context": "Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4820"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4821"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "text", "type": 
"code-example_explanation", "context": "Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4822"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4823"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4824"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4825"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4826"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-allow-all-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4827"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-allow-all-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4828"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4829"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4830"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4831"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4832"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4833"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4834"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-allow-all-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4835"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-allow-all-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4836"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4837"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4838"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-all.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4839"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-all.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4840"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", 
"source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4841"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4842"} {"prompt": "Complete this yaml code:\n\nservice/networking/networkpolicy-multiport-egress.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4843"} {"prompt": "Explain this yaml code:\n\nservice/networking/networkpolicy-multiport-egress.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n 
ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4844"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4845"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 
10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4846"} {"prompt": "Explain this bash code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This bash code Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4847"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4848"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - 
Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4849"} {"prompt": "Complete this text code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4850"} {"prompt": "Explain this text code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "text", "type": "code-example_explanation", "context": "Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4851"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, 
their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_completion_4852"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "text", "type": "code-example_explanation", "context": "Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/services-networking/network-policies/", "id": "k8s_00172_code_explain_4853"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:", "completion": " ...\n - type: Node\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_completion_4854"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:\n ...\n - type: Node\n ...", "completion": "This yaml code Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; for example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using Node Authorization | Or, start the API server with the 
--authorization-mode flag set to a comma-separated list that includes Node; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_explain_4855"} {"prompt": "Explain this text code:\n\nkube-apiserver --authorization-mode=...,Node --other-options --more-options", "completion": "This text code Using Node Authorization | To limit the API objects kubelets are able to write, enable the NodeRestriction admission plugin by starting the apiserver with --enable-admission-plugins=...,NodeRestriction,...", "language": "text", "type": "code-example_explanation", "context": "Using Node Authorization | To limit the API objects kubelets are able to write, enable the NodeRestriction admission plugin by starting the apiserver with --enable-admission-plugins=...,NodeRestriction,...", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_explain_4856"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:", "completion": " ...\n - type: Node\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_completion_4857"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:\n ...\n - type: Node\n ...", "completion": "This yaml code Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; for example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; 
for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_explain_4858"} {"prompt": "Explain this shell code:\n\nkube-apiserver --authorization-mode=...,Node --other-options --more-options", "completion": "This shell code Using Node Authorization | To limit the API objects kubelets are able to write, enable the NodeRestriction admission plugin by starting the apiserver with --enable-admission-plugins=...,NodeRestriction,...", "language": "shell", "type": "code-example_explanation", "context": "Using Node Authorization | To limit the API objects kubelets are able to write, enable the NodeRestriction admission plugin by starting the apiserver with --enable-admission-plugins=...,NodeRestriction,...", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_explain_4859"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:", "completion": " ...\n - type: Node\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_completion_4860"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:\n ...\n - type: Node\n ...", "completion": "This yaml code Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; for example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using Node Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes Node; for example:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_explain_4861"} {"prompt": "Explain this text code:\n\nkube-apiserver --authorization-mode=...,Node --other-options --more-options", "completion": "This text code Using Node Authorization | To limit the API objects kubelets are able to write, enable the NodeRestriction admission plugin by starting the apiserver with --enable-admission-plugins=...,NodeRestriction,...", "language": "text", "type": "code-example_explanation", "context": "Using Node Authorization | To limit the API objects kubelets are able to write, enable the NodeRestriction admission plugin by starting the apiserver with --enable-admission-plugins=...,NodeRestriction,...", "source": "https://kubernetes.io/docs/reference/access-authn-authz/node/", "id": "k8s_00173_code_explain_4862"} {"prompt": "Explain this text code:\n\nmkdir -p $HOME/\n\nexport GOPATH=$HOME/", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Get a local clone of the following repositories:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Get a local clone of the following repositories:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4863"} {"prompt": "Complete this text code:\n\ngo get -u github.com/kubernetes-sigs/reference-docs\n", "completion": "go get -u github.com/go-openapi/loads\ngo get -u github.com/go-openapi/spec", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_completion_4864"} {"prompt": "Explain this text code:\n\ngo get -u github.com/kubernetes-sigs/reference-docs\n\ngo get -u 
github.com/go-openapi/loads\ngo get -u github.com/go-openapi/spec", "completion": "This text code Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4865"} {"prompt": "Explain this text code:\n\ngit clone https://github.com//website $GOPATH/src/github.com//website", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4866"} {"prompt": "Explain this text code:\n\ngit clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes", "completion": "This text code Generating Reference Documentation for the Kubernetes API | The base directory of your clone of the kubernetes/kubernetes repository is $GOPATH/src/k8s.io/kubernetes. The remaining steps refer to your base directory as .", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | The base directory of your clone of the kubernetes/kubernetes repository is $GOPATH/src/k8s.io/kubernetes. 
The remaining steps refer to your base directory as .", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4867"} {"prompt": "Explain this text code:\n\nexport K8S_WEBROOT=${GOPATH}/src/github.com//website\nexport K8S_ROOT=${GOPATH}/src/k8s.io/kubernetes\nexport K8S_RELEASE=1.17.0", "completion": "This text code Generating Reference Documentation for the Kubernetes API | The updateapispec build target creates the versioned build directory. After the directory is created, the Open API spec is fetched from the repository. These steps ensure that the version o", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | The updateapispec build target creates the versioned build directory. After the directory is created, the Open API spec is fetched from the repository. These steps ensure that the version o", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4868"} {"prompt": "Explain this json code:\n\n[ -e \"/gen-apidocs/build/index.html\" ] && echo \"index.html built\" || echo \"no index.html\"\n[ -e \"/gen-apidocs/build/navData.js\" ] && echo \"navData.js built\" || echo \"no navData.js\"", "completion": "This json code Generating Reference Documentation for the Kubernetes API | Go to the base of your local , and view which files have been modified:", "language": "json", "type": "api-response_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Go to the base of your local , and view which files have been modified:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4869"} {"prompt": "Complete this text 
code:\n\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/bootstrap.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/font-awesome.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/stylesheet.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/FontAwesome.otf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.eot\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.svg", "completion": "static/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.ttf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff2\nstatic/docs/reference/generated/kubernetes-api/v1.34/index.html\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/navData.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/scroll.js", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for the Kubernetes API | When generating reference documentation for a new release, update the file, /content/en/docs/reference/kubernetes-api/api-index.md with the new version number.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_completion_4870"} {"prompt": "Explain this text 
code:\n\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/bootstrap.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/font-awesome.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/stylesheet.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/FontAwesome.otf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.eot\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.svg\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.ttf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff2\nstatic/docs/reference/generated/kubernetes-api/v1.34/index.html\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/navData.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/scroll.js", "completion": "This text code Generating Reference Documentation for the Kubernetes API | When generating reference documentation for a new release, update the file, /content/en/docs/reference/kubernetes-api/api-index.md with the new version number.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | When generating reference documentation for a new release, update the file, /content/en/docs/reference/kubernetes-api/api-index.md with the new version number.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4871"} {"prompt": "Complete this text code:\n\n---\ntitle: v1.17", "completion": "---\n\n[Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/)", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for 
editing, and add a new link for the latest API reference. Remove the oldest API reference version. There should be five links to the most recent", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_completion_4872"} {"prompt": "Explain this text code:\n\n---\ntitle: v1.17\n---\n\n[Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/)", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for editing, and add a new link for the latest API reference. Remove the oldest API reference version. There should be five links to the most recent", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for editing, and add a new link for the latest API reference. Remove the oldest API reference version. There should be five links to the most recent", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4873"} {"prompt": "Explain this text code:\n\ncd \ngit submodule update --init --recursive --depth 1 # if not already done\nmake container-serve", "completion": "This text code Generating Reference Documentation for the Kubernetes API | In , run git add and git commit to commit the change.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | In , run git add and git commit to commit the change.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4874"} {"prompt": "Explain this shell code:\n\nmkdir -p $HOME/\n\nexport GOPATH=$HOME/", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | Get a local clone of the following repositories:", "language": "shell", "type": 
"code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Get a local clone of the following repositories:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4875"} {"prompt": "Complete this shell code:\n\ngo get -u github.com/kubernetes-sigs/reference-docs\n", "completion": "go get -u github.com/go-openapi/loads\ngo get -u github.com/go-openapi/spec", "language": "shell", "type": "code-example", "context": "Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_completion_4876"} {"prompt": "Explain this shell code:\n\ngo get -u github.com/kubernetes-sigs/reference-docs\n\ngo get -u github.com/go-openapi/loads\ngo get -u github.com/go-openapi/spec", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4877"} {"prompt": "Explain this shell code:\n\ngit clone https://github.com//website $GOPATH/src/github.com//website", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "source": 
"https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4878"} {"prompt": "Explain this shell code:\n\ngit clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | The base directory of your clone of the kubernetes/kubernetes repository is $GOPATH/src/k8s.io/kubernetes. The remaining steps refer to your base directory as .", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | The base directory of your clone of the kubernetes/kubernetes repository is $GOPATH/src/k8s.io/kubernetes. The remaining steps refer to your base directory as .", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4879"} {"prompt": "Explain this text code:\n\n$GOPATH/src/github.com/kubernetes-sigs/reference-docs", "completion": "This text code Generating Reference Documentation for the Kubernetes API | This section shows how to generate the published Kubernetes API reference documentation.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | This section shows how to generate the published Kubernetes API reference documentation.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4880"} {"prompt": "Explain this shell code:\n\nexport K8S_WEBROOT=${GOPATH}/src/github.com//website\nexport K8S_ROOT=${GOPATH}/src/k8s.io/kubernetes\nexport K8S_RELEASE=1.17.0", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | The updateapispec build target creates the versioned build directory. After the directory is created, the Open API spec is fetched from the repository. 
These steps ensure that the version o", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | The updateapispec build target creates the versioned build directory. After the directory is created, the Open API spec is fetched from the repository. These steps ensure that the version o", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4881"} {"prompt": "Explain this shell code:\n\n[ -e \"/gen-apidocs/build/index.html\" ] && echo \"index.html built\" || echo \"no index.html\"\n[ -e \"/gen-apidocs/build/navData.js\" ] && echo \"navData.js built\" || echo \"no navData.js\"", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | Go to the base of your local , and view which files have been modified:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Go to the base of your local , and view which files have been modified:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4882"} {"prompt": "Complete this text code:\n\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/bootstrap.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/font-awesome.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/stylesheet.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/FontAwesome.otf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.eot\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.svg", "completion": 
"static/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.ttf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff2\nstatic/docs/reference/generated/kubernetes-api/v1.34/index.html\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/navData.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/scroll.js", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for the Kubernetes API | When generating reference documentation for a new release, update the file, /content/en/docs/reference/kubernetes-api/api-index.md with the new version number.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_completion_4883"} {"prompt": "Explain this text code:\n\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/bootstrap.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/font-awesome.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/stylesheet.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/FontAwesome.otf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.eot\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.svg\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.ttf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff2\nstatic/docs/reference/generated/kubernetes-api/v1.34/index.html\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/navData.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/scroll.js", 
"completion": "This text code Generating Reference Documentation for the Kubernetes API | When generating reference documentation for a new release, update the file, /content/en/docs/reference/kubernetes-api/api-index.md with the new version number.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | When generating reference documentation for a new release, update the file, /content/en/docs/reference/kubernetes-api/api-index.md with the new version number.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4884"} {"prompt": "Explain this text code:\n\n/content/en/docs/reference/kubernetes-api/api-index.md", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/kubernetes-api/api-index.md for editing, and update the API reference version number. For example:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/kubernetes-api/api-index.md for editing, and update the API reference version number. For example:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4885"} {"prompt": "Explain this text code:\n\n/content/en/docs/reference/kubernetes-api/api-index.md", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for editing, and add a new link for the latest API reference. Remove the oldest API reference version. There should be five links to the most recent", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for editing, and add a new link for the latest API reference. 
Remove the oldest API reference version. There should be five links to the most recent", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4886"} {"prompt": "Complete this text code:\n\n---\ntitle: v1.17", "completion": "---\n\n[Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/)", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for editing, and add a new link for the latest API reference. Remove the oldest API reference version. There should be five links to the most recent", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_completion_4887"} {"prompt": "Explain this text code:\n\n---\ntitle: v1.17\n---\n\n[Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/)", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for editing, and add a new link for the latest API reference. Remove the oldest API reference version. There should be five links to the most recent", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Open /content/en/docs/reference/_index.md for editing, and add a new link for the latest API reference. Remove the oldest API reference version. 
There should be five links to the most recent", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4888"} {"prompt": "Explain this shell code:\n\ncd \ngit submodule update --init --recursive --depth 1 # if not already done\nmake container-serve", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | In , run git add and git commit to commit the change.", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | In , run git add and git commit to commit the change.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4889"} {"prompt": "Explain this text code:\n\nmkdir -p $HOME/\n\nexport GOPATH=$HOME/", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Get a local clone of the following repositories:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Get a local clone of the following repositories:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4890"} {"prompt": "Complete this text code:\n\ngo get -u github.com/kubernetes-sigs/reference-docs\n", "completion": "go get -u github.com/go-openapi/loads\ngo get -u github.com/go-openapi/spec", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_completion_4891"} {"prompt": "Explain this text code:\n\ngo get -u github.com/kubernetes-sigs/reference-docs\n\ngo get -u github.com/go-openapi/loads\ngo get -u github.com/go-openapi/spec", "completion": "This text code 
Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4892"} {"prompt": "Explain this text code:\n\ngit clone https://github.com//website $GOPATH/src/github.com//website", "completion": "This text code Generating Reference Documentation for the Kubernetes API | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4893"} {"prompt": "Explain this text code:\n\ngit clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes", "completion": "This text code Generating Reference Documentation for the Kubernetes API | The base directory of your clone of the kubernetes/kubernetes repository is $GOPATH/src/k8s.io/kubernetes. The remaining steps refer to your base directory as .", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | The base directory of your clone of the kubernetes/kubernetes repository is $GOPATH/src/k8s.io/kubernetes. 
The remaining steps refer to your base directory as .", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4894"} {"prompt": "Explain this text code:\n\nexport K8S_WEBROOT=${GOPATH}/src/github.com//website\nexport K8S_ROOT=${GOPATH}/src/k8s.io/kubernetes\nexport K8S_RELEASE=1.17.0", "completion": "This text code Generating Reference Documentation for the Kubernetes API | The updateapispec build target creates the versioned build directory. After the directory is created, the Open API spec is fetched from the repository. These steps ensure that the version o", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | The updateapispec build target creates the versioned build directory. After the directory is created, the Open API spec is fetched from the repository. These steps ensure that the version o", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4895"} {"prompt": "Explain this shell code:\n\n[ -e \"/gen-apidocs/build/index.html\" ] && echo \"index.html built\" || echo \"no index.html\"\n[ -e \"/gen-apidocs/build/navData.js\" ] && echo \"navData.js built\" || echo \"no navData.js\"", "completion": "This shell code Generating Reference Documentation for the Kubernetes API | Go to the base of your local , and view which files have been modified:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | Go to the base of your local , and view which files have been modified:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4896"} {"prompt": "Explain this text code:\n\ncd \ngit submodule update --init --recursive --depth 1 # if not already done\nmake container-serve", "completion": "This text code Generating Reference Documentation 
for the Kubernetes API | In , run git add and git commit to commit the change.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for the Kubernetes API | In , run git add and git commit to commit the change.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-api/", "id": "k8s_00174_code_explain_4897"} {"prompt": "Explain this text code:\n\n--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1,...", "completion": "This text code Deprecated API Migration Guide | Use client warnings, metrics, and audit information available in 1.19+ to locate use of deprecated APIs.", "language": "text", "type": "code-example_explanation", "context": "Deprecated API Migration Guide | Use client warnings, metrics, and audit information available in 1.19+ to locate use of deprecated APIs.", "source": "https://kubernetes.io/docs/reference/using-api/deprecation-guide/", "id": "k8s_00177_code_explain_4898"} {"prompt": "Explain this bash code:\n\nkubectl convert -f --output-version /", "completion": "This bash code Deprecated API Migration Guide | For example, to convert an older Deployment to apps/v1, you can run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deprecated API Migration Guide | For example, to convert an older Deployment to apps/v1, you can run:", "source": "https://kubernetes.io/docs/reference/using-api/deprecation-guide/", "id": "k8s_00177_code_explain_4899"} {"prompt": "Explain this bash code:\n\nkubectl convert -f ./my-deployment.yaml --output-version apps/v1", "completion": "This bash code Deprecated API Migration Guide | This conversion may use non-ideal default values. To learn more about a specific resource, check the Kubernetes API reference.", "language": "bash", "type": "kubectl-command_explanation", "context": "Deprecated API Migration Guide | This conversion may use non-ideal default values. 
To learn more about a specific resource, check the Kubernetes API reference.", "source": "https://kubernetes.io/docs/reference/using-api/deprecation-guide/", "id": "k8s_00177_code_explain_4900"} {"prompt": "Explain this text code:\n\n# Reconcile RBAC resources from a file\n kubectl auth reconcile -f my-rbac-rules.yaml", "completion": "This text code kubectl auth reconcile | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth reconcile | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_reconcile/", "id": "k8s_00179_code_explain_4901"} {"prompt": "Explain this text code:\n\n# Reconcile RBAC resources from a file\n kubectl auth reconcile -f my-rbac-rules.yaml", "completion": "This text code kubectl auth reconcile | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth reconcile | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_reconcile/", "id": "k8s_00179_code_explain_4902"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:\n windowsOptions:", "completion": " runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4903"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure RunAsUserName for Windows pods and containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4904"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/windows/run-as-username-pod.yaml", "completion": "This bash code Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers 
| Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4905"} {"prompt": "Explain this bash code:\n\nkubectl exec -it run-as-username-pod-demo -- powershell", "completion": "This bash code Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4906"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"", "completion": " containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4907"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code 
Configure RunAsUserName for Windows pods and containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4908"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/windows/run-as-username-container.yaml", "completion": "This bash code Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4909"} {"prompt": "Explain this bash code:\n\nkubectl exec -it run-as-username-container-demo -- powershell", "completion": "This bash code Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username (the one set at the Container level):", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username (the one set at the Container level):", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4910"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:\n windowsOptions:", "completion": " runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": 
"pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4911"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure RunAsUserName for Windows pods and containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4912"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/windows/run-as-username-pod.yaml", "completion": "This shell code Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4913"} {"prompt": "Explain this shell code:\n\nkubectl exec -it run-as-username-pod-demo -- powershell", "completion": "This shell code Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username:", "language": "shell", "type": "code-example_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username:", 
"source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4914"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"", "completion": " containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4915"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure RunAsUserName for Windows pods and containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4916"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/windows/run-as-username-container.yaml", "completion": "This shell code Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", 
"language": "shell", "type": "code-example_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4917"} {"prompt": "Explain this shell code:\n\nkubectl exec -it run-as-username-container-demo -- powershell", "completion": "This shell code Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username (the one set at the Container level):", "language": "shell", "type": "code-example_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username (the one set at the Container level):", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4918"} {"prompt": "Complete this yaml code:\n\nwindows/run-as-username-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:", "completion": " windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4919"} {"prompt": "Explain this yaml code:\n\nwindows/run-as-username-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping", 
\"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure RunAsUserName for Windows pods and containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4920"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:\n windowsOptions:", "completion": " runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4921"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-pod-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure RunAsUserName for Windows pods and containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4922"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/windows/run-as-username-pod.yaml", 
"completion": "This bash code Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4923"} {"prompt": "Explain this bash code:\n\nkubectl exec -it run-as-username-pod-demo -- powershell", "completion": "This bash code Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4924"} {"prompt": "Complete this yaml code:\n\nwindows/run-as-username-container.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"", "completion": " containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4925"} {"prompt": "Explain this yaml code:\n\nwindows/run-as-username-container.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n 
windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure RunAsUserName for Windows pods and containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4926"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"", "completion": " containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "pod-definition", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_completion_4927"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: run-as-username-container-demo\nspec:\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerUser\"\n containers:\n - name: run-as-username-demo\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command: [\"ping\", \"-t\", \"localhost\"]\n securityContext:\n windowsOptions:\n runAsUserName: \"ContainerAdministrator\"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure RunAsUserName for Windows pods and 
containers | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4928"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/windows/run-as-username-container.yaml", "completion": "This bash code Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4929"} {"prompt": "Explain this bash code:\n\nkubectl exec -it run-as-username-container-demo -- powershell", "completion": "This bash code Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username (the one set at the Container level):", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure RunAsUserName for Windows pods and containers | Check that the shell is running using the correct username (the one set at the Container level):", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-runasusername/", "id": "k8s_00181_code_explain_4930"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - 
key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_completion_4931"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: 
/var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4932"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4933"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_completion_4934"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": 
"code-example_explanation", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4935"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_completion_4936"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the 
daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4937"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4938"} {"prompt": "Complete this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_completion_4939"} {"prompt": "Explain this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This yaml code DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4940"} {"prompt": "Complete this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: 
node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_completion_4941"} {"prompt": "Explain this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the 
YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4942"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_completion_4943"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have 
the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4944"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_completion_4945"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": "code-example_explanation", "context": 
"DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/", "id": "k8s_00182_code_explain_4946"} {"prompt": "Explain this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo mkdir /mnt/data", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | In the /mnt/data directory, create an index.html file:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | In the /mnt/data directory, create an index.html file:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4947"} {"prompt": "Explain this text code:\n\n# This again assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo sh -c \"echo 'Hello from Kubernetes storage' > /mnt/data/index.html\"", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Test that the index.html file exists:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Test that the index.html file exists:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4948"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local\nspec:", "completion": " storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. 
The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4949"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. 
The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4950"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | View information about the PersistentVolume:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | View information about the PersistentVolume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4951"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\ntask-pv-volume 10Gi RWO Retain Available manual 4s", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a PersistentVolumeClaim. Pods use PersistentVolumeClaims to request physical storage. In this exercise, you create a PersistentVolumeClaim that requests a volume of at least", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a PersistentVolumeClaim. Pods use PersistentVolumeClaims to request physical storage. 
In this exercise, you create a PersistentVolumeClaim that requests a volume of at least", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4952"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4953"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4954"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. 
If the control plane finds a suitable PersistentVolume wi", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume wi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4955"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\ntask-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 2m", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Look at the PersistentVolumeClaim:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Look at the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4956"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE\ntask-pv-claim Bound task-pv-volume 10Gi RWO manual 30s", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4957"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n 
claimName: task-pv-claim", "completion": " containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4958"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. 
From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4959"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4960"} {"prompt": "Complete this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step", "completion": "apt update\napt install curl\ncurl http://localhost/", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4961"} {"prompt": "Explain this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\napt update\napt install curl\ncurl http://localhost/", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4962"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:\n # a mount for site-data", "completion": " - name: config\n mountPath: /usr/share/nginx/html\n subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4963"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:\n # a mount for site-data\n - name: config\n mountPath: /usr/share/nginx/html\n subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4964"} {"prompt": "Explain this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo mkdir /mnt/data/html", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Move index.html into the directory:", "language": "text", "type": 
"code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Move index.html into the directory:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4965"} {"prompt": "Explain this text code:\n\n# Move index.html from its current location to the html sub-directory\nsudo mv /mnt/data/index.html html", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4966"} {"prompt": "Complete this text code:\n\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\nevents {\nworker_connections 1024;\n}\nhttp {\ninclude /etc/nginx/mime.types;\ndefault_type application/octet-stream;\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '", "completion": " '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;\n}", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4967"} {"prompt": "Explain this text code:\n\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid 
/var/run/nginx.pid;\nevents {\nworker_connections 1024;\n}\nhttp {\ninclude /etc/nginx/mime.types;\ndefault_type application/octet-stream;\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;\n}", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4968"} {"prompt": "Complete this text code:\n\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;", "completion": "#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | }", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4969"} {"prompt": "Explain this text code:\n\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;", "completion": "This text code Configure 
a Pod to Use a PersistentVolume for Storage | }", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | }", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4970"} {"prompt": "Complete this text code:\n\ncat << EOF > /mnt/data/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n\n log_format main '\\$remote_addr - \\$remote_user [\\$time_local] \"\\$request\" '", "completion": " '\\$status \\$body_bytes_sent \"\\$http_referer\" '\n '\"\\$http_user_agent\" \"\\$http_x_forwarded_for\"';\n\n access_log /var/log/nginx/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n keepalive_timeout 60;\n\n #gzip on;\n\n include /etc/nginx/conf.d/*.conf;\n}\nEOF", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. 
However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4971"} {"prompt": "Explain this text code:\n\ncat << EOF > /mnt/data/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n\n log_format main '\\$remote_addr - \\$remote_user [\\$time_local] \"\\$request\" '\n '\\$status \\$body_bytes_sent \"\\$http_referer\" '\n '\"\\$http_user_agent\" \"\\$http_x_forwarded_for\"';\n\n access_log /var/log/nginx/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n keepalive_timeout 60;\n\n #gzip on;\n\n include /etc/nginx/conf.d/*.conf;\n}\nEOF", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. 
However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4972"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-duplicate.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4973"} {"prompt": "Complete this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step", "completion": "apt update\napt install curl\ncurl http://localhost/", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4974"} {"prompt": "Explain this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\napt update\napt install curl\ncurl http://localhost/", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath 
volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4975"} {"prompt": "Explain this text code:\n\n# Be sure to run these commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\ncat /etc/nginx/nginx.conf | grep keepalive_timeout", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The output shows the modified text that you wrote to the nginx.conf file on the hostPath volume:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the modified text that you wrote to the nginx.conf file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4976"} {"prompt": "Explain this bash code:\n\nkubectl delete pod test\nkubectl delete pvc task-pv-claim\nkubectl delete pv task-pv-volume", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | If you don't already have a shell open to the Node in your cluster, open a new shell the same way that you did earlier.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | If you don't already have a shell open to the Node in your cluster, open a new shell the same way that you did earlier.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4977"} {"prompt": "Complete this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser", "completion": "sudo rm /mnt/data/html/index.html\nsudo rm /mnt/data/nginx.conf\nsudo rmdir /mnt/data", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | You can 
now close the shell to your Node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4978"} {"prompt": "Explain this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo rm /mnt/data/html/index.html\nsudo rm /mnt/data/nginx.conf\nsudo rmdir /mnt/data", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | You can now close the shell to your Node.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | You can now close the shell to your Node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4979"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:", "completion": " name: pv1\n annotations:\n pv.beta.kubernetes.io/gid: \"1234\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. Every", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4980"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv1\n annotations:\n pv.beta.kubernetes.io/gid: \"1234\"", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. 
Every", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. Every", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4981"} {"prompt": "Explain this shell code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo mkdir /mnt/data", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | In the /mnt/data directory, create an index.html file:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | In the /mnt/data directory, create an index.html file:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4982"} {"prompt": "Explain this shell code:\n\n# This again assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo sh -c \"echo 'Hello from Kubernetes storage' > /mnt/data/index.html\"", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | Test that the index.html file exists:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Test that the index.html file exists:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4983"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local\nspec:", "completion": " storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n 
hostPath:\n path: \"/mnt/data\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4984"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. 
The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4985"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | View information about the PersistentVolume:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | View information about the PersistentVolume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4986"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\ntask-pv-volume 10Gi RWO Retain Available manual 4s", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a PersistentVolumeClaim. Pods use PersistentVolumeClaims to request physical storage. In this exercise, you create a PersistentVolumeClaim that requests a volume of at least", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a PersistentVolumeClaim. Pods use PersistentVolumeClaims to request physical storage. 
In this exercise, you create a PersistentVolumeClaim that requests a volume of at least", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4987"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4988"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4989"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. 
If the control plane finds a suitable PersistentVolume wi", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume wi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4990"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE\ntask-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 2m", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Look at the PersistentVolumeClaim:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Look at the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4991"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE\ntask-pv-claim Bound task-pv-volume 10Gi RWO manual 30s", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4992"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n 
claimName: task-pv-claim", "completion": " containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4993"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. 
From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4994"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4995"} {"prompt": "Complete this shell code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step", "completion": "apt update\napt install curl\ncurl http://localhost/", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4996"} {"prompt": "Explain this shell code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\napt update\napt install curl\ncurl http://localhost/", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4997"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:\n # a mount for site-data", "completion": " - name: config\n mountPath: /usr/share/nginx/html\n subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_4998"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:\n # a mount for site-data\n - name: config\n mountPath: /usr/share/nginx/html\n subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_4999"} {"prompt": "Explain this shell code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo mkdir /mnt/data/html", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | Move index.html into the directory:", "language": "shell", "type": 
"code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Move index.html into the directory:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5000"} {"prompt": "Explain this shell code:\n\n# Move index.html from its current location to the html sub-directory\nsudo mv /mnt/data/index.html html", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5001"} {"prompt": "Complete this conf code:\n\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\nevents {\nworker_connections 1024;\n}\nhttp {\ninclude /etc/nginx/mime.types;\ndefault_type application/octet-stream;\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '", "completion": " '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;\n}", "language": "conf", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5002"} {"prompt": "Explain this conf code:\n\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid 
/var/run/nginx.pid;\nevents {\nworker_connections 1024;\n}\nhttp {\ninclude /etc/nginx/mime.types;\ndefault_type application/octet-stream;\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;\n}", "completion": "This conf code Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "language": "conf", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5003"} {"prompt": "Complete this text code:\n\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;", "completion": "#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | }", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5004"} {"prompt": "Explain this text code:\n\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;", "completion": "This text code Configure 
a Pod to Use a PersistentVolume for Storage | }", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | }", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5005"} {"prompt": "Complete this shell code:\n\ncat <<EOF > /mnt/data/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n\n log_format main '\\$remote_addr - \\$remote_user [\\$time_local] \"\\$request\" '", "completion": " '\\$status \\$body_bytes_sent \"\\$http_referer\" '\n '\"\\$http_user_agent\" \"\\$http_x_forwarded_for\"';\n\n access_log /var/log/nginx/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n keepalive_timeout 60;\n\n #gzip on;\n\n include /etc/nginx/conf.d/*.conf;\n}\nEOF", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. 
However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5006"} {"prompt": "Explain this shell code:\n\ncat <<EOF > /mnt/data/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n\n log_format main '\\$remote_addr - \\$remote_user [\\$time_local] \"\\$request\" '\n '\\$status \\$body_bytes_sent \"\\$http_referer\" '\n '\"\\$http_user_agent\" \"\\$http_x_forwarded_for\"';\n\n access_log /var/log/nginx/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n keepalive_timeout 60;\n\n #gzip on;\n\n include /etc/nginx/conf.d/*.conf;\n}\nEOF", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. 
However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5007"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-duplicate.yaml", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5008"} {"prompt": "Complete this shell code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step", "completion": "apt update\napt install curl\ncurl http://localhost/", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5009"} {"prompt": "Explain this shell code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\napt update\napt install curl\ncurl http://localhost/", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath 
volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5010"} {"prompt": "Explain this shell code:\n\n# Be sure to run these commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\ncat /etc/nginx/nginx.conf | grep keepalive_timeout", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | The output shows the modified text that you wrote to the nginx.conf file on the hostPath volume:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the modified text that you wrote to the nginx.conf file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5011"} {"prompt": "Explain this shell code:\n\nkubectl delete pod test\nkubectl delete pvc task-pv-claim\nkubectl delete pv task-pv-volume", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | If you don't already have a shell open to the Node in your cluster, open a new shell the same way that you did earlier.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | If you don't already have a shell open to the Node in your cluster, open a new shell the same way that you did earlier.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5012"} {"prompt": "Complete this shell code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser", "completion": "sudo rm /mnt/data/html/index.html\nsudo rm /mnt/data/nginx.conf\nsudo rmdir /mnt/data", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | You 
can now close the shell to your Node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5013"} {"prompt": "Explain this shell code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo rm /mnt/data/html/index.html\nsudo rm /mnt/data/nginx.conf\nsudo rmdir /mnt/data", "completion": "This shell code Configure a Pod to Use a PersistentVolume for Storage | You can now close the shell to your Node.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | You can now close the shell to your Node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5014"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:", "completion": " name: pv1\n annotations:\n pv.beta.kubernetes.io/gid: \"1234\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. Every", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5015"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv1\n annotations:\n pv.beta.kubernetes.io/gid: \"1234\"", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. 
Every", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. Every", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5016"} {"prompt": "Explain this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo mkdir /mnt/data", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | In the /mnt/data directory, create an index.html file:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | In the /mnt/data directory, create an index.html file:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5017"} {"prompt": "Explain this text code:\n\n# This again assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo sh -c \"echo 'Hello from Kubernetes storage' > /mnt/data/index.html\"", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Test that the index.html file exists:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Test that the index.html file exists:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5018"} {"prompt": "Complete this yaml code:\n\npods/storage/pv-volume.yaml\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local", "completion": "spec:\n storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - 
ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5019"} {"prompt": "Explain this yaml code:\n\npods/storage/pv-volume.yaml\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. 
The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5020"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local\nspec:", "completion": " storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5021"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: task-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The configuration file specifies that the volume is at /mnt/data on the cluster's Node. 
The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the vol", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5022"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | View information about the PersistentVolume:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | View information about the PersistentVolume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5023"} {"prompt": "Complete this yaml code:\n\npods/storage/pv-claim.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5024"} {"prompt": "Explain this yaml code:\n\npods/storage/pv-claim.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5025"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5026"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: task-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5027"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. 
If the control plane finds a suitable PersistentVolume wi", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume wi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5028"} {"prompt": "Complete this yaml code:\n\npods/storage/pv-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. 
From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5029"} {"prompt": "Explain this yaml code:\n\npods/storage/pv-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5030"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim", "completion": " containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. 
From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5031"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. 
From the Pod's point of view, the claim is a volume.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5032"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5033"} {"prompt": "Complete this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step", "completion": "apt update\napt install curl\ncurl http://localhost/", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5034"} {"prompt": "Explain this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\napt update\napt install curl\ncurl http://localhost/", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5035"} {"prompt": "Complete this yaml code:\n\npods/storage/pv-duplicate.yaml\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:", "completion": " # a mount for site-data\n - name: config\n mountPath: /usr/share/nginx/html\n subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5036"} {"prompt": "Explain this yaml code:\n\npods/storage/pv-duplicate.yaml\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:\n # a mount for site-data\n - name: config\n mountPath: /usr/share/nginx/html\n subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5037"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:\n # a mount for site-data", "completion": " - name: config\n mountPath: /usr/share/nginx/html\n 
subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5038"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test\nspec:\n containers:\n - name: test\n image: nginx\n volumeMounts:\n # a mount for site-data\n - name: config\n mountPath: /usr/share/nginx/html\n subPath: html\n # another mount for nginx config\n - name: config\n mountPath: /etc/nginx/nginx.conf\n subPath: nginx.conf\n volumes:\n - name: config\n persistentVolumeClaim:\n claimName: task-pv-storage", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | Here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5039"} {"prompt": "Explain this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo mkdir /mnt/data/html", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Move index.html into the directory:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Move index.html into the directory:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5040"} {"prompt": "Explain this text code:\n\n# Move index.html from its current location to the html sub-directory\nsudo mv 
/mnt/data/index.html html", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5041"} {"prompt": "Complete this text code:\n\npods/storage/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\nevents {\nworker_connections 1024;\n}\nhttp {\ninclude /etc/nginx/mime.types;\ndefault_type application/octet-stream;\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '", "completion": " '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;\n}", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5042"} {"prompt": "Explain this text code:\n\npods/storage/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\nevents {\nworker_connections 1024;\n}\nhttp {\ninclude /etc/nginx/mime.types;\ndefault_type application/octet-stream;\nlog_format main '$remote_addr - $remote_user [$time_local] "$request" '\n '$status $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\naccess_log /var/log/nginx/access.log main;\n\nsendfile 
on;\n#tcp_nopush on;\n\nkeepalive_timeout 60;\n\n#gzip on;\n\ninclude /etc/nginx/conf.d/*.conf;\n}", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid;", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5043"} {"prompt": "Complete this text code:\n\ncat <<EOF >/mnt/data/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n\n log_format main '\\$remote_addr - \\$remote_user [\\$time_local] \"\\$request\" '", "completion": " '\\$status \\$body_bytes_sent \"\\$http_referer\" '\n '\"\\$http_user_agent\" \"\\$http_x_forwarded_for\"';\n\n access_log /var/log/nginx/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n keepalive_timeout 60;\n\n #gzip on;\n\n include /etc/nginx/conf.d/*.conf;\n}\nEOF", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. 
However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5044"} {"prompt": "Explain this text code:\n\ncat <<EOF >/mnt/data/nginx.conf\nuser nginx;\nworker_processes auto;\nerror_log /var/log/nginx/error.log notice;\npid /var/run/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n\n log_format main '\\$remote_addr - \\$remote_user [\\$time_local] \"\\$request\" '\n '\\$status \\$body_bytes_sent \"\\$http_referer\" '\n '\"\\$http_user_agent\" \"\\$http_x_forwarded_for\"';\n\n access_log /var/log/nginx/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n keepalive_timeout 60;\n\n #gzip on;\n\n include /etc/nginx/conf.d/*.conf;\n}\nEOF", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Here we will create a pod that uses the existing persistentVolume and persistentVolumeClaim. 
However, the pod mounts only a specific file, nginx.conf, and directory, html, to the container.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5045"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/pv-duplicate.yaml", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5046"} {"prompt": "Complete this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step", "completion": "apt update\napt install curl\ncurl http://localhost/", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5047"} {"prompt": "Explain this text code:\n\n# Be sure to run these 3 commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\napt update\napt install curl\ncurl http://localhost/", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath volume:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the text that you wrote to the index.html file on the hostPath 
volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5048"} {"prompt": "Explain this text code:\n\n# Be sure to run these commands inside the root shell that comes from\n# running \"kubectl exec\" in the previous step\ncat /etc/nginx/nginx.conf | grep keepalive_timeout", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | The output shows the modified text that you wrote to the nginx.conf file on the hostPath volume:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | The output shows the modified text that you wrote to the nginx.conf file on the hostPath volume:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5049"} {"prompt": "Explain this bash code:\n\nkubectl delete pod test\nkubectl delete pvc task-pv-claim\nkubectl delete pv task-pv-volume", "completion": "This bash code Configure a Pod to Use a PersistentVolume for Storage | If you don't already have a shell open to the Node in your cluster, open a new shell the same way that you did earlier.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | If you don't already have a shell open to the Node in your cluster, open a new shell the same way that you did earlier.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5050"} {"prompt": "Complete this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser", "completion": "sudo rm /mnt/data/html/index.html\nsudo rm /mnt/data/nginx.conf\nsudo rmdir /mnt/data", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a PersistentVolume for Storage | You can 
now close the shell to your Node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5051"} {"prompt": "Explain this text code:\n\n# This assumes that your Node uses \"sudo\" to run commands\n# as the superuser\nsudo rm /mnt/data/html/index.html\nsudo rm /mnt/data/nginx.conf\nsudo rmdir /mnt/data", "completion": "This text code Configure a Pod to Use a PersistentVolume for Storage | You can now close the shell to your Node.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | You can now close the shell to your Node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5052"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:", "completion": " name: pv1\n annotations:\n pv.beta.kubernetes.io/gid: \"1234\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. Every", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_completion_5053"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv1\n annotations:\n pv.beta.kubernetes.io/gid: \"1234\"", "completion": "This yaml code Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. 
Every", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a PersistentVolume for Storage | When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. Every", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/", "id": "k8s_00183_code_explain_5054"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: placeholder # these Pods represent placeholder capacity\nvalue: -1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5055"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: placeholder # these Pods represent placeholder capacity\nvalue: -1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5056"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/priorityclass/low-priority-class.yaml", "completion": "This bash code Overprovision Node Capacity For A Cluster | You will next define a Deployment that uses the negative-priority 
PriorityClass and runs a minimal container. When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capac", "language": "bash", "type": "kubectl-command_explanation", "context": "Overprovision Node Capacity For A Cluster | You will next define a Deployment that uses the negative-priority PriorityClass and runs a minimal container. When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capac", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5057"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder", "completion": " affinity: # Try to place these overhead Pods on different nodes\n # if possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "language": "yaml", "type": "deployment-definition", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5058"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what 
namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder\n affinity: # Try to place these overhead Pods on different nodes\n # if possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5059"} {"prompt": "Explain this text code:\n\n# Change the namespace name \"example\"\nkubectl --namespace example apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml", "completion": "This text code Overprovision Node Capacity For A Cluster | Configure the resource requests and limits for the placeholder pods to define the amount of overprovisioned resources you want to maintain. This reservation ensures that a specific amount of CPU and m", "language": "text", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | Configure the resource requests and limits for the placeholder pods to define the amount of overprovisioned resources you want to maintain. 
This reservation ensures that a specific amount of CPU and m", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5060"} {"prompt": "Complete this text code:\n\nresources:\n requests:\n cpu: \"100m\"", "completion": " memory: \"200Mi\"\n limits:\n cpu: \"100m\"", "language": "text", "type": "code-example", "context": "Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5061"} {"prompt": "Explain this text code:\n\nresources:\n requests:\n cpu: \"100m\"\n memory: \"200Mi\"\n limits:\n cpu: \"100m\"", "completion": "This text code Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "language": "text", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5062"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment capacity-reservation --replicas=5", "completion": "This bash code Overprovision Node Capacity For A Cluster | Verify the scaling:", "language": "bash", "type": "kubectl-command_explanation", "context": "Overprovision Node Capacity For A Cluster | Verify the scaling:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5063"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ncapacity-reservation 5/5 5 5 2m", "completion": "This text code Overprovision Node Capacity For A Cluster |", "language": "text", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5064"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: placeholder # these Pods represent placeholder capacity\nvalue: -1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5065"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: placeholder # these Pods represent placeholder capacity\nvalue: -1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5066"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/priorityclass/low-priority-class.yaml", "completion": "This shell code Overprovision Node Capacity For A Cluster | You will 
next define a Deployment that uses the negative-priority PriorityClass and runs a minimal container. When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capac", "language": "shell", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | You will next define a Deployment that uses the negative-priority PriorityClass and runs a minimal container. When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capac", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5067"} {"prompt": "Explain this text code:\n\ndeployments/deployment-with-capacity-reservation.yaml", "completion": "This text code Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "language": "text", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5068"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder", "completion": " affinity: # Try to place these overhead Pods on different nodes\n # if possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: 
topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "language": "yaml", "type": "deployment-definition", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5069"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder\n affinity: # Try to place these overhead Pods on different nodes\n # if possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5070"} {"prompt": "Explain this shell code:\n\n# Change the namespace name 
\"example\"\nkubectl --namespace example apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml", "completion": "This shell code Overprovision Node Capacity For A Cluster | Configure the resource requests and limits for the placeholder pods to define the amount of overprovisioned resources you want to maintain. This reservation ensures that a specific amount of CPU and m", "language": "shell", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | Configure the resource requests and limits for the placeholder pods to define the amount of overprovisioned resources you want to maintain. This reservation ensures that a specific amount of CPU and m", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5071"} {"prompt": "Complete this yaml code:\n\nresources:\n requests:\n cpu: \"100m\"", "completion": " memory: \"200Mi\"\n limits:\n cpu: \"100m\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5072"} {"prompt": "Explain this yaml code:\n\nresources:\n requests:\n cpu: \"100m\"\n memory: \"200Mi\"\n limits:\n cpu: \"100m\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 
CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5073"} {"prompt": "Explain this shell code:\n\nkubectl scale deployment capacity-reservation --replicas=5", "completion": "This shell code Overprovision Node Capacity For A Cluster | Verify the scaling:", "language": "shell", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | Verify the scaling:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5074"} {"prompt": "Explain this none code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ncapacity-reservation 5/5 5 5 2m", "completion": "This none code Overprovision Node Capacity For A Cluster |", "language": "none", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5075"} {"prompt": "Complete this yaml code:\n\npriorityclass/low-priority-class.yaml\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: placeholder # these Pods represent placeholder capacity\nvalue: -1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5076"} {"prompt": "Explain this yaml code:\n\npriorityclass/low-priority-class.yaml\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: placeholder # these Pods represent placeholder capacity\nvalue: 
-1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5077"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: placeholder # these Pods represent placeholder capacity\nvalue: -1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5078"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: placeholder # these Pods represent placeholder capacity\nvalue: -1000\nglobalDefault: false\ndescription: \"Negative priority for placeholder pods to enable overprovisioning.\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Overprovision Node Capacity For A Cluster | Then create the PriorityClass:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5079"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/priorityclass/low-priority-class.yaml", "completion": "This bash code Overprovision Node Capacity For A Cluster | You will next define a Deployment that uses the 
negative-priority PriorityClass and runs a minimal container. When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capac", "language": "bash", "type": "kubectl-command_explanation", "context": "Overprovision Node Capacity For A Cluster | You will next define a Deployment that uses the negative-priority PriorityClass and runs a minimal container. When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capac", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5080"} {"prompt": "Complete this yaml code:\n\ndeployments/deployment-with-capacity-reservation.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder", "completion": " affinity: # Try to place these overhead Pods on different nodes\n # if possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "language": "yaml", "type": "deployment-definition", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5081"} {"prompt": "Explain this yaml 
code:\n\ndeployments/deployment-with-capacity-reservation.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder\n affinity: # Try to place these overhead Pods on different nodes\n # if possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5082"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder", "completion": " affinity: # Try to place these overhead Pods on different nodes\n # if 
possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "language": "yaml", "type": "deployment-definition", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5083"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: capacity-reservation\n # You should decide what namespace to deploy this into\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n template:\n metadata:\n labels:\n app.kubernetes.io/name: capacity-placeholder\n annotations:\n kubernetes.io/description: \"Capacity reservation\"\n spec:\n priorityClassName: placeholder\n affinity: # Try to place these overhead Pods on different nodes\n # if possible\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n app.kubernetes.io/name: capacity-placeholder\n topologyKey: topology.kubernetes.io/hostname\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6\n resources:\n requests:\n cpu: \"50m\"\n memory: \"512Mi\"\n limits:\n memory: \"512Mi\"", "completion": "This yaml code Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will go into.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Overprovision Node Capacity For A Cluster | You should select, or create, a namespace that the placeholder Pods will 
go into.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5084"} {"prompt": "Explain this text code:\n\n# Change the namespace name \"example\"\nkubectl --namespace example apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml", "completion": "This text code Overprovision Node Capacity For A Cluster | Configure the resource requests and limits for the placeholder pods to define the amount of overprovisioned resources you want to maintain. This reservation ensures that a specific amount of CPU and m", "language": "text", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | Configure the resource requests and limits for the placeholder pods to define the amount of overprovisioned resources you want to maintain. This reservation ensures that a specific amount of CPU and m", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5085"} {"prompt": "Complete this text code:\n\nresources:\n requests:\n cpu: \"100m\"", "completion": " memory: \"200Mi\"\n limits:\n cpu: \"100m\"", "language": "text", "type": "code-example", "context": "Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_completion_5086"} {"prompt": "Explain this text code:\n\nresources:\n requests:\n cpu: \"100m\"\n memory: \"200Mi\"\n limits:\n cpu: \"100m\"", "completion": "This text code Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total 
memory reserved: 5 × 200MiB =", "language": "text", "type": "code-example_explanation", "context": "Overprovision Node Capacity For A Cluster | For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity 500m)Total memory reserved: 5 × 200MiB =", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5087"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment capacity-reservation --replicas=5", "completion": "This bash code Overprovision Node Capacity For A Cluster | Verify the scaling:", "language": "bash", "type": "kubectl-command_explanation", "context": "Overprovision Node Capacity For A Cluster | Verify the scaling:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/", "id": "k8s_00187_code_explain_5088"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Debug Running Pods | Create deployment by running following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5089"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "completion": "This yaml code Debug Running Pods | 
Create deployment by running following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Debug Running Pods | Create deployment by running following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5090"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-with-request.yaml", "completion": "This bash code Debug Running Pods | Check pod status by following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Check pod status by following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5091"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-67d4bdd6f5-cx2nz 1/1 Running 0 13s\nnginx-deployment-67d4bdd6f5-w6kd7 1/1 Running 0 13s", "completion": "This text code Debug Running Pods | We can retrieve a lot more information about each of these pods using kubectl describe pod. For example:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | We can retrieve a lot more information about each of these pods using kubectl describe pod. 
For example:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5092"} {"prompt": "Explain this bash code:\n\nkubectl describe pod nginx-deployment-67d4bdd6f5-w6kd7", "completion": "This bash code Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5093"} {"prompt": "Complete this text code:\n\nName: nginx-deployment-67d4bdd6f5-w6kd7\nNamespace: default\nPriority: 0\nNode: kube-worker-1/192.168.0.113\nStart Time: Thu, 17 Feb 2022 16:51:01 -0500\nLabels: app=nginx\n pod-template-hash=67d4bdd6f5\nAnnotations: \nStatus: Running\nIP: 10.88.0.3\nIPs:\n IP: 10.88.0.3\n IP: 2001:db8::1\nControlled By: ReplicaSet/nginx-deployment-67d4bdd6f5\nContainers:\n nginx:\n Container ID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n Image: nginx\n Image ID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n Port: 80/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 17 Feb 2022 16:51:05 -0500\n Ready: True\n Restart Count: 0\n Limits:\n cpu: 500m\n memory: 128Mi\n Requests:", "completion": " cpu: 500m\n memory: 128Mi\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgsgp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
kube-api-access-bgsgp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: Guaranteed\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 34s default-scheduler Successfully assigned default/nginx-deployment-67d4bdd6f5-w6kd7 to kube-worker-1\n Normal Pulling 31s kubelet Pulling image \"nginx\"\n Normal Pulled 30s kubelet Successfully pulled image \"nginx\" in 1.146417389s\n Normal Created 30s kubelet Created container nginx\n Normal Started 30s kubelet Started container nginx", "language": "text", "type": "code-example", "context": "Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5094"} {"prompt": "Explain this text code:\n\nName: nginx-deployment-67d4bdd6f5-w6kd7\nNamespace: default\nPriority: 0\nNode: kube-worker-1/192.168.0.113\nStart Time: Thu, 17 Feb 2022 16:51:01 -0500\nLabels: app=nginx\n pod-template-hash=67d4bdd6f5\nAnnotations: \nStatus: Running\nIP: 10.88.0.3\nIPs:\n IP: 10.88.0.3\n IP: 2001:db8::1\nControlled By: ReplicaSet/nginx-deployment-67d4bdd6f5\nContainers:\n nginx:\n Container ID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n Image: nginx\n Image ID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n Port: 80/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 17 Feb 2022 16:51:05 -0500\n Ready: True\n Restart Count: 0\n Limits:\n cpu: 
500m\n memory: 128Mi\n Requests:\n cpu: 500m\n memory: 128Mi\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgsgp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bgsgp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: Guaranteed\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 34s default-scheduler Successfully assigned default/nginx-deployment-67d4bdd6f5-w6kd7 to kube-worker-1\n Normal Pulling 31s kubelet Pulling image \"nginx\"\n Normal Pulled 30s kubelet Successfully pulled image \"nginx\" in 1.146417389s\n Normal Created 30s kubelet Created container nginx\n Normal Started 30s kubelet Started container nginx", "completion": "This text code Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5095"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1006230814-6winp 1/1 Running 0 7m\nnginx-deployment-1006230814-fmgu3 1/1 Running 0 7m", "completion": 
"nginx-deployment-1370807587-6ekbw 1/1 Running 0 1m\nnginx-deployment-1370807587-fg172 0/1 Pending 0 1m\nnginx-deployment-1370807587-fz9sd 0/1 Pending 0 1m", "language": "text", "type": "code-example", "context": "Debug Running Pods | To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can use kubectl describe pod on the pending Pod and look at its events:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5096"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1006230814-6winp 1/1 Running 0 7m\nnginx-deployment-1006230814-fmgu3 1/1 Running 0 7m\nnginx-deployment-1370807587-6ekbw 1/1 Running 0 1m\nnginx-deployment-1370807587-fg172 0/1 Pending 0 1m\nnginx-deployment-1370807587-fz9sd 0/1 Pending 0 1m", "completion": "This text code Debug Running Pods | To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can use kubectl describe pod on the pending Pod and look at its events:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can use kubectl describe pod on the pending Pod and look at its events:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5097"} {"prompt": "Explain this bash code:\n\nkubectl describe pod nginx-deployment-1370807587-fz9sd", "completion": "This bash code Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). 
The message tells us that there were not enough resource", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). The message tells us that there were not enough resource", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5098"} {"prompt": "Complete this text code:\n\nName:\t\tnginx-deployment-1370807587-fz9sd\n Namespace:\tdefault\n Node:\t\t/\n Labels:\t\tapp=nginx,pod-template-hash=1370807587\n Status:\t\tPending\n IP:\n Controllers:\tReplicaSet/nginx-deployment-1370807587\n Containers:\n nginx:\n Image:\tnginx\n Port:\t80/TCP\n QoS Tier:\n memory:\tGuaranteed\n cpu:\tGuaranteed\n Limits:", "completion": " cpu:\t1\n memory:\t128Mi\n Requests:\n cpu:\t1\n memory:\t128Mi\n Environment Variables:\n Volumes:\n default-token-4bcbi:\n Type:\tSecret (a volume populated by a Secret)\n SecretName:\tdefault-token-4bcbi\n Events:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t SubobjectPath\tType\t\tReason\t\t\t Message\n ---------\t--------\t-----\t----\t\t\t -------------\t--------\t------\t\t\t -------\n 1m\t\t 48s\t\t 7\t {default-scheduler }\t\t\t Warning\t\tFailedScheduling\tpod (nginx-deployment-1370807587-fz9sd) failed to fit in any node\n fit failure on node (kubernetes-node-6ta5): Node didn't have enough resource: CPU, requested: 1000, used: 1420, capacity: 2000\n fit failure on node (kubernetes-node-wul5): Node didn't have enough resource: CPU, requested: 1000, used: 1100, capacity: 2000", "language": "text", "type": "code-example", "context": "Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). 
The message tells us that there were not enough resource", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5099"} {"prompt": "Explain this text code:\n\nName:\t\tnginx-deployment-1370807587-fz9sd\n Namespace:\tdefault\n Node:\t\t/\n Labels:\t\tapp=nginx,pod-template-hash=1370807587\n Status:\t\tPending\n IP:\n Controllers:\tReplicaSet/nginx-deployment-1370807587\n Containers:\n nginx:\n Image:\tnginx\n Port:\t80/TCP\n QoS Tier:\n memory:\tGuaranteed\n cpu:\tGuaranteed\n Limits:\n cpu:\t1\n memory:\t128Mi\n Requests:\n cpu:\t1\n memory:\t128Mi\n Environment Variables:\n Volumes:\n default-token-4bcbi:\n Type:\tSecret (a volume populated by a Secret)\n SecretName:\tdefault-token-4bcbi\n Events:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t SubobjectPath\tType\t\tReason\t\t\t Message\n ---------\t--------\t-----\t----\t\t\t -------------\t--------\t------\t\t\t -------\n 1m\t\t 48s\t\t 7\t {default-scheduler }\t\t\t Warning\t\tFailedScheduling\tpod (nginx-deployment-1370807587-fz9sd) failed to fit in any node\n fit failure on node (kubernetes-node-6ta5): Node didn't have enough resource: CPU, requested: 1000, used: 1420, capacity: 2000\n fit failure on node (kubernetes-node-wul5): Node didn't have enough resource: CPU, requested: 1000, used: 1100, capacity: 2000", "completion": "This text code Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). The message tells us that there were not enough resource", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). 
The message tells us that there were not enough resource", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5100"} {"prompt": "Explain this bash code:\n\nkubectl get pod nginx-deployment-1006230814-6winp -o yaml", "completion": "This bash code Debug Running Pods | First, look at the logs of the affected container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5101"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2022-02-17T21:51:01Z\"\n generateName: nginx-deployment-67d4bdd6f5-\n labels:\n app: nginx\n pod-template-hash: 67d4bdd6f5\n name: nginx-deployment-67d4bdd6f5-w6kd7\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-67d4bdd6f5\n uid: 7d41dfd4-84c0-4be4-88ab-cedbe626ad82\n resourceVersion: \"1364\"\n uid: a6501da1-0447-4262-98eb-c03d4002222e\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP\n resources:\n limits:\n cpu: 500m\n memory: 128Mi\n requests:\n cpu: 500m\n memory: 128Mi\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\n name: kube-api-access-bgsgp\n readOnly: true\n dnsPolicy: ClusterFirst\n enableServiceLinks: true\n nodeName: kube-worker-1\n preemptionPolicy: PreemptLowerPriority\n priority: 0\n restartPolicy: Always\n schedulerName: default-scheduler\n securityContext: {}\n serviceAccount: default\n serviceAccountName: default\n terminationGracePeriodSeconds: 30\n tolerations:\n - effect: NoExecute\n key: 
node.kubernetes.io/not-ready\n operator: Exists\n tolerationSeconds: 300\n - effect: NoExecute\n key: node.kubernetes.io/unreachable", "completion": " operator: Exists\n tolerationSeconds: 300\n volumes:\n - name: kube-api-access-bgsgp\n projected:\n defaultMode: 420\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace\nstatus:\n conditions:\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: Initialized\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: Ready\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: ContainersReady\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: PodScheduled\n containerStatuses:\n - containerID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n image: docker.io/library/nginx:latest\n imageID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n lastState: {}\n name: nginx\n ready: true\n restartCount: 0\n started: true\n state:\n running:\n startedAt: \"2022-02-17T21:51:05Z\"\n hostIP: 192.168.0.113\n phase: Running\n podIP: 10.88.0.3\n podIPs:\n - ip: 10.88.0.3\n - ip: 2001:db8::1\n qosClass: Guaranteed\n startTime: \"2022-02-17T21:51:01Z\"", "language": "yaml", "type": "pod-definition", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5102"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2022-02-17T21:51:01Z\"\n generateName: nginx-deployment-67d4bdd6f5-\n 
labels:\n app: nginx\n pod-template-hash: 67d4bdd6f5\n name: nginx-deployment-67d4bdd6f5-w6kd7\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-67d4bdd6f5\n uid: 7d41dfd4-84c0-4be4-88ab-cedbe626ad82\n resourceVersion: \"1364\"\n uid: a6501da1-0447-4262-98eb-c03d4002222e\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP\n resources:\n limits:\n cpu: 500m\n memory: 128Mi\n requests:\n cpu: 500m\n memory: 128Mi\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\n name: kube-api-access-bgsgp\n readOnly: true\n dnsPolicy: ClusterFirst\n enableServiceLinks: true\n nodeName: kube-worker-1\n preemptionPolicy: PreemptLowerPriority\n priority: 0\n restartPolicy: Always\n schedulerName: default-scheduler\n securityContext: {}\n serviceAccount: default\n serviceAccountName: default\n terminationGracePeriodSeconds: 30\n tolerations:\n - effect: NoExecute\n key: node.kubernetes.io/not-ready\n operator: Exists\n tolerationSeconds: 300\n - effect: NoExecute\n key: node.kubernetes.io/unreachable\n operator: Exists\n tolerationSeconds: 300\n volumes:\n - name: kube-api-access-bgsgp\n projected:\n defaultMode: 420\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace\nstatus:\n conditions:\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: Initialized\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: Ready\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: 
ContainersReady\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: PodScheduled\n containerStatuses:\n - containerID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n image: docker.io/library/nginx:latest\n imageID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n lastState: {}\n name: nginx\n ready: true\n restartCount: 0\n started: true\n state:\n running:\n startedAt: \"2022-02-17T21:51:05Z\"\n hostIP: 192.168.0.113\n phase: Running\n podIP: 10.88.0.3\n podIPs:\n - ip: 10.88.0.3\n - ip: 2001:db8::1\n qosClass: Guaranteed\n startTime: \"2022-02-17T21:51:01Z\"", "completion": "This yaml code Debug Running Pods | First, look at the logs of the affected container:", "language": "yaml", "type": "pod-definition_explanation", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5103"} {"prompt": "Explain this bash code:\n\nkubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}", "completion": "This bash code Debug Running Pods | If the container image includes debugging utilities, as is the case with images built from Linux and Windows OS base images, you can run commands inside a specific container with kubectl exec:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | If the container image includes debugging utilities, as is the case with images built from Linux and Windows OS base images, you can run commands inside a specific container with kubectl exec:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5104"} {"prompt": "Explain this bash code:\n\nkubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... 
${ARGN}", "completion": "This bash code Debug Running Pods | As an example, to look at the logs from a running Cassandra pod, you might run", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | As an example, to look at the logs from a running Cassandra pod, you might run", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5105"} {"prompt": "Explain this bash code:\n\nkubectl exec cassandra -- cat /var/log/cassandra/system.log", "completion": "This bash code Debug Running Pods | You can run a shell that's connected to your terminal using the -i and -t arguments to kubectl exec, for example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | You can run a shell that's connected to your terminal using the -i and -t arguments to kubectl exec, for example:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5106"} {"prompt": "Explain this bash code:\n\nkubectl run ephemeral-demo --image=registry.k8s.io/pause:3.1 --restart=Never", "completion": "This bash code Debug Running Pods | The examples in this section use the pause container image because it does not contain debugging utilities, but this method works with all container images.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | The examples in this section use the pause container image because it does not contain debugging utilities, but this method works with all container images.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5107"} {"prompt": "Explain this text code:\n\nOCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused \"exec: \\\"sh\\\": executable file not found in $PATH\": unknown", "completion": "This text code Debug Running Pods 
| You can instead add a debugging container using kubectl debug. If you specify the -i/--interactive argument, kubectl will automatically attach to the console of the Ephemeral Container.", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | You can instead add a debugging container using kubectl debug. If you specify the -i/--interactive argument, kubectl will automatically attach to the console of the Ephemeral Container.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5108"} {"prompt": "Explain this bash code:\n\nkubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo", "completion": "This bash code Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5109"} {"prompt": "Explain this text code:\n\nDefaulting debug container name to debugger-8xzrl.\nIf you don't see a command prompt, try pressing enter.\n/ #", "completion": "This text code Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | This command adds a new busybox container and attaches to it. 
The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5110"} {"prompt": "Complete this text code:\n\n...\nEphemeral Containers:\n debugger-8xzrl:\n Container ID: docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb\n Image: busybox\n Image ID: docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084\n Port: ", "completion": " Host Port: \n State: Running\n Started: Wed, 12 Feb 2020 14:25:42 +0100\n Ready: False\n Restart Count: 0\n Environment: \n Mounts: \n...", "language": "text", "type": "code-example", "context": "Debug Running Pods | Use kubectl delete to remove the Pod when you're finished:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5111"} {"prompt": "Explain this text code:\n\n...\nEphemeral Containers:\n debugger-8xzrl:\n Container ID: docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb\n Image: busybox\n Image ID: docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084\n Port: \n Host Port: \n State: Running\n Started: Wed, 12 Feb 2020 14:25:42 +0100\n Ready: False\n Restart Count: 0\n Environment: \n Mounts: \n...", "completion": "This text code Debug Running Pods | Use kubectl delete to remove the Pod when you're finished:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Use kubectl delete to remove the Pod when you're finished:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5112"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods 
| Run this command to create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Run this command to create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5113"} {"prompt": "Explain this bash code:\n\nkubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug", "completion": "This bash code Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5114"} {"prompt": "Explain this text code:\n\nDefaulting debug container name to debugger-w7xmf.\nIf you don't see a command prompt, try pressing enter.\nroot@myapp-debug:/#", "completion": "This text code Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5115"} {"prompt": "Complete this text code:\n\nContainers:\n myapp:\n Image: busybox\n ...\n Args:", "completion": " false\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1", "language": "text", "type": "code-example", "context": "Debug Running Pods | You can use kubectl debug to create a copy of this Pod with the command changed to an interactive shell:", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5116"} {"prompt": "Explain this text code:\n\nContainers:\n myapp:\n Image: busybox\n ...\n Args:\n false\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1", "completion": "This text code Debug Running Pods | You can use kubectl debug to create a copy of this Pod with the command changed to an interactive shell:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | You can use kubectl debug to create a copy of this Pod with the command changed to an interactive shell:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5117"} {"prompt": "Explain this bash code:\n\nkubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh", "completion": "This bash code Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5118"} {"prompt": "Explain this text code:\n\nIf you don't see a command prompt, try pressing enter.\n/ #", "completion": "This text code Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", 
"source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5119"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods | Now use kubectl debug to make a copy and change its container image to ubuntu:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Now use kubectl debug to make a copy and change its container image to ubuntu:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5120"} {"prompt": "Explain this bash code:\n\nkubectl debug myapp --copy-to=myapp-debug --set-image=*=ubuntu", "completion": "This bash code Debug Running Pods | The syntax of --set-image uses the same container_name=image syntax as kubectl set image. *=ubuntu means change the image of all containers to ubuntu.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | The syntax of --set-image uses the same container_name=image syntax as kubectl set image. 
*=ubuntu means change the image of all containers to ubuntu.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5121"} {"prompt": "Explain this text code:\n\nCreating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.\nIf you don't see a command prompt, try pressing enter.\nroot@ek8s:/#", "completion": "This text code Debug Running Pods | When creating a debugging session on a node, keep in mind that:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | When creating a debugging session on a node, keep in mind that:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5122"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods | Then, debug the Pod using an ephemeral container. If the ephemeral container needs to have privilege, you can use the sysadmin profile:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Then, debug the Pod using an ephemeral container. 
If the ephemeral container needs to have privilege, you can use the sysadmin profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5123"} {"prompt": "Explain this bash code:\n\nkubectl debug -it myapp --image=busybox:1.28 --target=myapp --profile=sysadmin", "completion": "This bash code Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5124"} {"prompt": "Complete this text code:\n\nTargeting container \"myapp\". If you don't see processes from this container it may be because the container runtime doesn't support this feature.\nDefaulting debug container name to debugger-6kg4x.", "completion": "If you don't see a command prompt, try pressing enter.\n/ #", "language": "text", "type": "code-example", "context": "Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5125"} {"prompt": "Explain this text code:\n\nTargeting container \"myapp\". 
If you don't see processes from this container it may be because the container runtime doesn't support this feature.\nDefaulting debug container name to debugger-6kg4x.\nIf you don't see a command prompt, try pressing enter.\n/ #", "completion": "This text code Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5126"} {"prompt": "Complete this text code:\n\n...\nCapPrm:\t000001ffffffffff", "completion": "CapEff:\t000001ffffffffff\n...", "language": "text", "type": "code-example", "context": "Debug Running Pods | This means the container process is granted full capabilities as a privileged container by applying sysadmin profile. See more details about capabilities.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5127"} {"prompt": "Explain this text code:\n\n...\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\n...", "completion": "This text code Debug Running Pods | This means the container process is granted full capabilities as a privileged container by applying sysadmin profile. See more details about capabilities.", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | This means the container process is granted full capabilities as a privileged container by applying sysadmin profile. 
See more details about capabilities.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5128"} {"prompt": "Explain this bash code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].securityContext}'", "completion": "This bash code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5129"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods | Create a custom profile in YAML or JSON format. Here, create a YAML format file named custom-profile.yaml:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Create a custom profile in YAML or JSON format. 
Here, create a YAML format file named custom-profile.yaml:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5130"} {"prompt": "Complete this text code:\n\nenv:\n- name: ENV_VAR_1\n value: value_1\n- name: ENV_VAR_2\n value: value_2", "completion": "securityContext:\n capabilities:\n add:\n - NET_ADMIN\n - SYS_TIME", "language": "text", "type": "code-example", "context": "Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5131"} {"prompt": "Explain this text code:\n\nenv:\n- name: ENV_VAR_1\n value: value_1\n- name: ENV_VAR_2\n value: value_2\nsecurityContext:\n capabilities:\n add:\n - NET_ADMIN\n - SYS_TIME", "completion": "This text code Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5132"} {"prompt": "Explain this bash code:\n\nkubectl debug -it myapp --image=busybox:1.28 --target=myapp --profile=general --custom=custom-profile.yaml", "completion": "This bash code Debug Running Pods | You can check that the ephemeral container has been added to the target Pod with the custom profile applied:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | You can check that the ephemeral container has been added to the target Pod with the custom profile applied:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5133"} {"prompt": "Explain this bash 
code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].env}'", "completion": "This bash code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5134"} {"prompt": "Explain this json code:\n\n[{\"name\":\"ENV_VAR_1\",\"value\":\"value_1\"},{\"name\":\"ENV_VAR_2\",\"value\":\"value_2\"}]", "completion": "This json code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "json", "type": "api-response_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5135"} {"prompt": "Explain this bash code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].securityContext}'", "completion": "This bash code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5136"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Debug Running Pods | Create deployment by running following command:", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5137"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "completion": "This yaml code Debug Running Pods | Create deployment by running following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Debug Running Pods | Create deployment by running following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5138"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-with-request.yaml", "completion": "This shell code Debug Running Pods | Check pod status by following command:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Check pod status by following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5139"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-67d4bdd6f5-cx2nz 1/1 Running 0 13s\nnginx-deployment-67d4bdd6f5-w6kd7 1/1 Running 0 13s", "completion": "This none code Debug Running Pods | We can retrieve a lot more information about each of these pods using kubectl describe pod. For example:", "language": "none", "type": "code-example_explanation", "context": "Debug Running Pods | We can retrieve a lot more information about each of these pods using kubectl describe pod. 
For example:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5140"} {"prompt": "Explain this shell code:\n\nkubectl describe pod nginx-deployment-67d4bdd6f5-w6kd7", "completion": "This shell code Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5141"} {"prompt": "Complete this none code:\n\nName: nginx-deployment-67d4bdd6f5-w6kd7\nNamespace: default\nPriority: 0\nNode: kube-worker-1/192.168.0.113\nStart Time: Thu, 17 Feb 2022 16:51:01 -0500\nLabels: app=nginx\n pod-template-hash=67d4bdd6f5\nAnnotations: \nStatus: Running\nIP: 10.88.0.3\nIPs:\n IP: 10.88.0.3\n IP: 2001:db8::1\nControlled By: ReplicaSet/nginx-deployment-67d4bdd6f5\nContainers:\n nginx:\n Container ID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n Image: nginx\n Image ID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n Port: 80/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 17 Feb 2022 16:51:05 -0500\n Ready: True\n Restart Count: 0\n Limits:\n cpu: 500m\n memory: 128Mi\n Requests:", "completion": " cpu: 500m\n memory: 128Mi\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgsgp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
kube-api-access-bgsgp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: Guaranteed\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 34s default-scheduler Successfully assigned default/nginx-deployment-67d4bdd6f5-w6kd7 to kube-worker-1\n Normal Pulling 31s kubelet Pulling image \"nginx\"\n Normal Pulled 30s kubelet Successfully pulled image \"nginx\" in 1.146417389s\n Normal Created 30s kubelet Created container nginx\n Normal Started 30s kubelet Started container nginx", "language": "none", "type": "code-example", "context": "Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5142"} {"prompt": "Explain this none code:\n\nName: nginx-deployment-67d4bdd6f5-w6kd7\nNamespace: default\nPriority: 0\nNode: kube-worker-1/192.168.0.113\nStart Time: Thu, 17 Feb 2022 16:51:01 -0500\nLabels: app=nginx\n pod-template-hash=67d4bdd6f5\nAnnotations: \nStatus: Running\nIP: 10.88.0.3\nIPs:\n IP: 10.88.0.3\n IP: 2001:db8::1\nControlled By: ReplicaSet/nginx-deployment-67d4bdd6f5\nContainers:\n nginx:\n Container ID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n Image: nginx\n Image ID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n Port: 80/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 17 Feb 2022 16:51:05 -0500\n Ready: True\n Restart Count: 0\n Limits:\n cpu: 
500m\n memory: 128Mi\n Requests:\n cpu: 500m\n memory: 128Mi\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgsgp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bgsgp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: Guaranteed\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 34s default-scheduler Successfully assigned default/nginx-deployment-67d4bdd6f5-w6kd7 to kube-worker-1\n Normal Pulling 31s kubelet Pulling image \"nginx\"\n Normal Pulled 30s kubelet Successfully pulled image \"nginx\" in 1.146417389s\n Normal Created 30s kubelet Created container nginx\n Normal Started 30s kubelet Started container nginx", "completion": "This none code Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "language": "none", "type": "code-example_explanation", "context": "Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5143"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1006230814-6winp 1/1 Running 0 7m\nnginx-deployment-1006230814-fmgu3 1/1 Running 0 7m", "completion": 
"nginx-deployment-1370807587-6ekbw 1/1 Running 0 1m\nnginx-deployment-1370807587-fg172 0/1 Pending 0 1m\nnginx-deployment-1370807587-fz9sd 0/1 Pending 0 1m", "language": "none", "type": "code-example", "context": "Debug Running Pods | To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can use kubectl describe pod on the pending Pod and look at its events:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5144"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1006230814-6winp 1/1 Running 0 7m\nnginx-deployment-1006230814-fmgu3 1/1 Running 0 7m\nnginx-deployment-1370807587-6ekbw 1/1 Running 0 1m\nnginx-deployment-1370807587-fg172 0/1 Pending 0 1m\nnginx-deployment-1370807587-fz9sd 0/1 Pending 0 1m", "completion": "This none code Debug Running Pods | To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can use kubectl describe pod on the pending Pod and look at its events:", "language": "none", "type": "code-example_explanation", "context": "Debug Running Pods | To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can use kubectl describe pod on the pending Pod and look at its events:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5145"} {"prompt": "Explain this shell code:\n\nkubectl describe pod nginx-deployment-1370807587-fz9sd", "completion": "This shell code Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). 
The message tells us that there were not enough resource", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). The message tells us that there were not enough resource", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5146"} {"prompt": "Complete this none code:\n\nName:\t\tnginx-deployment-1370807587-fz9sd\n Namespace:\tdefault\n Node:\t\t/\n Labels:\t\tapp=nginx,pod-template-hash=1370807587\n Status:\t\tPending\n IP:\n Controllers:\tReplicaSet/nginx-deployment-1370807587\n Containers:\n nginx:\n Image:\tnginx\n Port:\t80/TCP\n QoS Tier:\n memory:\tGuaranteed\n cpu:\tGuaranteed\n Limits:", "completion": " cpu:\t1\n memory:\t128Mi\n Requests:\n cpu:\t1\n memory:\t128Mi\n Environment Variables:\n Volumes:\n default-token-4bcbi:\n Type:\tSecret (a volume populated by a Secret)\n SecretName:\tdefault-token-4bcbi\n Events:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t SubobjectPath\tType\t\tReason\t\t\t Message\n ---------\t--------\t-----\t----\t\t\t -------------\t--------\t------\t\t\t -------\n 1m\t\t 48s\t\t 7\t {default-scheduler }\t\t\t Warning\t\tFailedScheduling\tpod (nginx-deployment-1370807587-fz9sd) failed to fit in any node\n fit failure on node (kubernetes-node-6ta5): Node didn't have enough resource: CPU, requested: 1000, used: 1420, capacity: 2000\n fit failure on node (kubernetes-node-wul5): Node didn't have enough resource: CPU, requested: 1000, used: 1100, capacity: 2000", "language": "none", "type": "code-example", "context": "Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). 
The message tells us that there were not enough resource", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5147"} {"prompt": "Explain this none code:\n\nName:\t\tnginx-deployment-1370807587-fz9sd\n Namespace:\tdefault\n Node:\t\t/\n Labels:\t\tapp=nginx,pod-template-hash=1370807587\n Status:\t\tPending\n IP:\n Controllers:\tReplicaSet/nginx-deployment-1370807587\n Containers:\n nginx:\n Image:\tnginx\n Port:\t80/TCP\n QoS Tier:\n memory:\tGuaranteed\n cpu:\tGuaranteed\n Limits:\n cpu:\t1\n memory:\t128Mi\n Requests:\n cpu:\t1\n memory:\t128Mi\n Environment Variables:\n Volumes:\n default-token-4bcbi:\n Type:\tSecret (a volume populated by a Secret)\n SecretName:\tdefault-token-4bcbi\n Events:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\t SubobjectPath\tType\t\tReason\t\t\t Message\n ---------\t--------\t-----\t----\t\t\t -------------\t--------\t------\t\t\t -------\n 1m\t\t 48s\t\t 7\t {default-scheduler }\t\t\t Warning\t\tFailedScheduling\tpod (nginx-deployment-1370807587-fz9sd) failed to fit in any node\n fit failure on node (kubernetes-node-6ta5): Node didn't have enough resource: CPU, requested: 1000, used: 1420, capacity: 2000\n fit failure on node (kubernetes-node-wul5): Node didn't have enough resource: CPU, requested: 1000, used: 1100, capacity: 2000", "completion": "This none code Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). The message tells us that there were not enough resource", "language": "none", "type": "code-example_explanation", "context": "Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). 
The message tells us that there were not enough resource", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5148"} {"prompt": "Explain this shell code:\n\nkubectl get pod nginx-deployment-1006230814-6winp -o yaml", "completion": "This shell code Debug Running Pods | First, look at the logs of the affected container:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5149"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2022-02-17T21:51:01Z\"\n generateName: nginx-deployment-67d4bdd6f5-\n labels:\n app: nginx\n pod-template-hash: 67d4bdd6f5\n name: nginx-deployment-67d4bdd6f5-w6kd7\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-67d4bdd6f5\n uid: 7d41dfd4-84c0-4be4-88ab-cedbe626ad82\n resourceVersion: \"1364\"\n uid: a6501da1-0447-4262-98eb-c03d4002222e\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP\n resources:\n limits:\n cpu: 500m\n memory: 128Mi\n requests:\n cpu: 500m\n memory: 128Mi\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\n name: kube-api-access-bgsgp\n readOnly: true\n dnsPolicy: ClusterFirst\n enableServiceLinks: true\n nodeName: kube-worker-1\n preemptionPolicy: PreemptLowerPriority\n priority: 0\n restartPolicy: Always\n schedulerName: default-scheduler\n securityContext: {}\n serviceAccount: default\n serviceAccountName: default\n terminationGracePeriodSeconds: 30\n tolerations:\n - effect: NoExecute\n key: 
node.kubernetes.io/not-ready\n operator: Exists\n tolerationSeconds: 300\n - effect: NoExecute\n key: node.kubernetes.io/unreachable", "completion": " operator: Exists\n tolerationSeconds: 300\n volumes:\n - name: kube-api-access-bgsgp\n projected:\n defaultMode: 420\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace\nstatus:\n conditions:\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: Initialized\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: Ready\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: ContainersReady\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: PodScheduled\n containerStatuses:\n - containerID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n image: docker.io/library/nginx:latest\n imageID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n lastState: {}\n name: nginx\n ready: true\n restartCount: 0\n started: true\n state:\n running:\n startedAt: \"2022-02-17T21:51:05Z\"\n hostIP: 192.168.0.113\n phase: Running\n podIP: 10.88.0.3\n podIPs:\n - ip: 10.88.0.3\n - ip: 2001:db8::1\n qosClass: Guaranteed\n startTime: \"2022-02-17T21:51:01Z\"", "language": "yaml", "type": "pod-definition", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5150"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2022-02-17T21:51:01Z\"\n generateName: nginx-deployment-67d4bdd6f5-\n 
labels:\n app: nginx\n pod-template-hash: 67d4bdd6f5\n name: nginx-deployment-67d4bdd6f5-w6kd7\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-67d4bdd6f5\n uid: 7d41dfd4-84c0-4be4-88ab-cedbe626ad82\n resourceVersion: \"1364\"\n uid: a6501da1-0447-4262-98eb-c03d4002222e\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP\n resources:\n limits:\n cpu: 500m\n memory: 128Mi\n requests:\n cpu: 500m\n memory: 128Mi\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\n name: kube-api-access-bgsgp\n readOnly: true\n dnsPolicy: ClusterFirst\n enableServiceLinks: true\n nodeName: kube-worker-1\n preemptionPolicy: PreemptLowerPriority\n priority: 0\n restartPolicy: Always\n schedulerName: default-scheduler\n securityContext: {}\n serviceAccount: default\n serviceAccountName: default\n terminationGracePeriodSeconds: 30\n tolerations:\n - effect: NoExecute\n key: node.kubernetes.io/not-ready\n operator: Exists\n tolerationSeconds: 300\n - effect: NoExecute\n key: node.kubernetes.io/unreachable\n operator: Exists\n tolerationSeconds: 300\n volumes:\n - name: kube-api-access-bgsgp\n projected:\n defaultMode: 420\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace\nstatus:\n conditions:\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: Initialized\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: Ready\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: 
ContainersReady\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: PodScheduled\n containerStatuses:\n - containerID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n image: docker.io/library/nginx:latest\n imageID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n lastState: {}\n name: nginx\n ready: true\n restartCount: 0\n started: true\n state:\n running:\n startedAt: \"2022-02-17T21:51:05Z\"\n hostIP: 192.168.0.113\n phase: Running\n podIP: 10.88.0.3\n podIPs:\n - ip: 10.88.0.3\n - ip: 2001:db8::1\n qosClass: Guaranteed\n startTime: \"2022-02-17T21:51:01Z\"", "completion": "This yaml code Debug Running Pods | First, look at the logs of the affected container:", "language": "yaml", "type": "pod-definition_explanation", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5151"} {"prompt": "Explain this shell code:\n\nkubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}", "completion": "This shell code Debug Running Pods | If the container image includes debugging utilities, as is the case with images built from Linux and Windows OS base images, you can run commands inside a specific container with kubectl exec:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | If the container image includes debugging utilities, as is the case with images built from Linux and Windows OS base images, you can run commands inside a specific container with kubectl exec:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5152"} {"prompt": "Explain this shell code:\n\nkubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... 
${ARGN}", "completion": "This shell code Debug Running Pods | As an example, to look at the logs from a running Cassandra pod, you might run", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | As an example, to look at the logs from a running Cassandra pod, you might run", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5153"} {"prompt": "Explain this shell code:\n\nkubectl exec cassandra -- cat /var/log/cassandra/system.log", "completion": "This shell code Debug Running Pods | You can run a shell that's connected to your terminal using the -i and -t arguments to kubectl exec, for example:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | You can run a shell that's connected to your terminal using the -i and -t arguments to kubectl exec, for example:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5154"} {"prompt": "Explain this shell code:\n\nkubectl run ephemeral-demo --image=registry.k8s.io/pause:3.1 --restart=Never", "completion": "This shell code Debug Running Pods | The examples in this section use the pause container image because it does not contain debugging utilities, but this method works with all container images.", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | The examples in this section use the pause container image because it does not contain debugging utilities, but this method works with all container images.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5155"} {"prompt": "Explain this text code:\n\nOCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused \"exec: \\\"sh\\\": executable file not found in $PATH\": unknown", "completion": "This text code Debug Running Pods 
| You can instead add a debugging container using kubectl debug. If you specify the -i/--interactive argument, kubectl will automatically attach to the console of the Ephemeral Container.", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | You can instead add a debugging container using kubectl debug. If you specify the -i/--interactive argument, kubectl will automatically attach to the console of the Ephemeral Container.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5156"} {"prompt": "Explain this shell code:\n\nkubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo", "completion": "This shell code Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5157"} {"prompt": "Explain this text code:\n\nDefaulting debug container name to debugger-8xzrl.\nIf you don't see a command prompt, try pressing enter.\n/ #", "completion": "This text code Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | This command adds a new busybox container and attaches to it. 
The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5158"} {"prompt": "Complete this text code:\n\n...\nEphemeral Containers:\n debugger-8xzrl:\n Container ID: docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb\n Image: busybox\n Image ID: docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084\n Port: ", "completion": " Host Port: \n State: Running\n Started: Wed, 12 Feb 2020 14:25:42 +0100\n Ready: False\n Restart Count: 0\n Environment: \n Mounts: \n...", "language": "text", "type": "code-example", "context": "Debug Running Pods | Use kubectl delete to remove the Pod when you're finished:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5159"} {"prompt": "Explain this text code:\n\n...\nEphemeral Containers:\n debugger-8xzrl:\n Container ID: docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb\n Image: busybox\n Image ID: docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084\n Port: \n Host Port: \n State: Running\n Started: Wed, 12 Feb 2020 14:25:42 +0100\n Ready: False\n Restart Count: 0\n Environment: \n Mounts: \n...", "completion": "This text code Debug Running Pods | Use kubectl delete to remove the Pod when you're finished:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Use kubectl delete to remove the Pod when you're finished:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5160"} {"prompt": "Explain this shell code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This shell code Debug Running 
Pods | Run this command to create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Run this command to create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5161"} {"prompt": "Explain this shell code:\n\nkubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug", "completion": "This shell code Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5162"} {"prompt": "Explain this text code:\n\nDefaulting debug container name to debugger-w7xmf.\nIf you don't see a command prompt, try pressing enter.\nroot@myapp-debug:/#", "completion": "This text code Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5163"} {"prompt": "Complete this text code:\n\nContainers:\n myapp:\n Image: busybox\n ...\n Args:", "completion": " false\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1", "language": "text", "type": "code-example", "context": "Debug Running Pods | You can use kubectl debug to create a copy of this Pod with the command changed to an interactive shell:", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5164"} {"prompt": "Explain this text code:\n\nContainers:\n myapp:\n Image: busybox\n ...\n Args:\n false\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1", "completion": "This text code Debug Running Pods | You can use kubectl debug to create a copy of this Pod with the command changed to an interactive shell:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | You can use kubectl debug to create a copy of this Pod with the command changed to an interactive shell:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5165"} {"prompt": "Explain this bash code:\n\nkubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh", "completion": "This bash code Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5166"} {"prompt": "Explain this text code:\n\nIf you don't see a command prompt, try pressing enter.\n/ #", "completion": "This text code Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Now you have an interactive shell that you can use to perform tasks like checking filesystem paths or running the container command manually.", 
"source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5167"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods | Now use kubectl debug to make a copy and change its container image to ubuntu:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Now use kubectl debug to make a copy and change its container image to ubuntu:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5168"} {"prompt": "Explain this bash code:\n\nkubectl debug myapp --copy-to=myapp-debug --set-image=*=ubuntu", "completion": "This bash code Debug Running Pods | The syntax of --set-image uses the same container_name=image syntax as kubectl set image. *=ubuntu means change the image of all containers to ubuntu.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | The syntax of --set-image uses the same container_name=image syntax as kubectl set image. 
*=ubuntu means change the image of all containers to ubuntu.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5169"} {"prompt": "Explain this text code:\n\nCreating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.\nIf you don't see a command prompt, try pressing enter.\nroot@ek8s:/#", "completion": "This text code Debug Running Pods | When creating a debugging session on a node, keep in mind that:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | When creating a debugging session on a node, keep in mind that:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5170"} {"prompt": "Explain this shell code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This shell code Debug Running Pods | Then, debug the Pod using an ephemeral container. If the ephemeral container needs to have privilege, you can use the sysadmin profile:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Then, debug the Pod using an ephemeral container. 
If the ephemeral container needs to have privilege, you can use the sysadmin profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5171"} {"prompt": "Explain this shell code:\n\nkubectl debug -it myapp --image=busybox:1.28 --target=myapp --profile=sysadmin", "completion": "This shell code Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5172"} {"prompt": "Complete this text code:\n\nTargeting container \"myapp\". If you don't see processes from this container it may be because the container runtime doesn't support this feature.\nDefaulting debug container name to debugger-6kg4x.", "completion": "If you don't see a command prompt, try pressing enter.\n/ #", "language": "text", "type": "code-example", "context": "Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5173"} {"prompt": "Explain this text code:\n\nTargeting container \"myapp\". 
If you don't see processes from this container it may be because the container runtime doesn't support this feature.\nDefaulting debug container name to debugger-6kg4x.\nIf you don't see a command prompt, try pressing enter.\n/ #", "completion": "This text code Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5174"} {"prompt": "Complete this text code:\n\n...\nCapPrm:\t000001ffffffffff", "completion": "CapEff:\t000001ffffffffff\n...", "language": "text", "type": "code-example", "context": "Debug Running Pods | This means the container process is granted full capabilities as a privileged container by applying sysadmin profile. See more details about capabilities.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5175"} {"prompt": "Explain this text code:\n\n...\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\n...", "completion": "This text code Debug Running Pods | This means the container process is granted full capabilities as a privileged container by applying sysadmin profile. See more details about capabilities.", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | This means the container process is granted full capabilities as a privileged container by applying sysadmin profile. 
See more details about capabilities.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5176"} {"prompt": "Explain this shell code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].securityContext}'", "completion": "This shell code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5177"} {"prompt": "Explain this shell code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This shell code Debug Running Pods | Create a custom profile in YAML or JSON format. Here, create a YAML format file named custom-profile.yaml:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Create a custom profile in YAML or JSON format. 
Here, create a YAML format file named custom-profile.yaml:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5178"} {"prompt": "Complete this yaml code:\n\nenv:\n- name: ENV_VAR_1\n value: value_1\n- name: ENV_VAR_2\n value: value_2", "completion": "securityContext:\n capabilities:\n add:\n - NET_ADMIN\n - SYS_TIME", "language": "yaml", "type": "kubernetes-manifest", "context": "Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5179"} {"prompt": "Explain this yaml code:\n\nenv:\n- name: ENV_VAR_1\n value: value_1\n- name: ENV_VAR_2\n value: value_2\nsecurityContext:\n capabilities:\n add:\n - NET_ADMIN\n - SYS_TIME", "completion": "This yaml code Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5180"} {"prompt": "Explain this shell code:\n\nkubectl debug -it myapp --image=busybox:1.28 --target=myapp --profile=general --custom=custom-profile.yaml", "completion": "This shell code Debug Running Pods | You can check that the ephemeral container has been added to the target Pod with the custom profile applied:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | You can check that the ephemeral container has been added to the target Pod with the custom profile applied:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5181"} {"prompt": "Explain this 
shell code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].env}'", "completion": "This shell code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5182"} {"prompt": "Explain this json code:\n\n[{\"name\":\"ENV_VAR_1\",\"value\":\"value_1\"},{\"name\":\"ENV_VAR_2\",\"value\":\"value_2\"}]", "completion": "This json code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "json", "type": "api-response_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5183"} {"prompt": "Explain this shell code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].securityContext}'", "completion": "This shell code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "shell", "type": "code-example_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5184"} {"prompt": "Complete this yaml code:\n\napplication/nginx-with-request.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Debug Running Pods | Create deployment by running following 
command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5185"} {"prompt": "Explain this yaml code:\n\napplication/nginx-with-request.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "completion": "This yaml code Debug Running Pods | Create deployment by running following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Debug Running Pods | Create deployment by running following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5186"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Debug Running Pods | Create deployment by running following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5187"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n ports:\n - containerPort: 80", "completion": "This yaml code Debug Running Pods | Create 
deployment by running following command:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Debug Running Pods | Create deployment by running following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5188"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-with-request.yaml", "completion": "This bash code Debug Running Pods | Check pod status by following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Check pod status by following command:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5189"} {"prompt": "Explain this bash code:\n\nkubectl describe pod nginx-deployment-67d4bdd6f5-w6kd7", "completion": "This bash code Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5190"} {"prompt": "Explain this bash code:\n\nkubectl describe pod nginx-deployment-1370807587-fz9sd", "completion": "This bash code Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). 
The message tells us that there were not enough resource", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason FailedScheduling (and possibly others). The message tells us that there were not enough resource", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5191"} {"prompt": "Explain this bash code:\n\nkubectl get pod nginx-deployment-1006230814-6winp -o yaml", "completion": "This bash code Debug Running Pods | First, look at the logs of the affected container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5192"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2022-02-17T21:51:01Z\"\n generateName: nginx-deployment-67d4bdd6f5-\n labels:\n app: nginx\n pod-template-hash: 67d4bdd6f5\n name: nginx-deployment-67d4bdd6f5-w6kd7\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-67d4bdd6f5\n uid: 7d41dfd4-84c0-4be4-88ab-cedbe626ad82\n resourceVersion: \"1364\"\n uid: a6501da1-0447-4262-98eb-c03d4002222e\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP\n resources:\n limits:\n cpu: 500m\n memory: 128Mi\n requests:\n cpu: 500m\n memory: 128Mi\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\n name: kube-api-access-bgsgp\n readOnly: true\n dnsPolicy: ClusterFirst\n enableServiceLinks: true\n nodeName: 
kube-worker-1\n preemptionPolicy: PreemptLowerPriority\n priority: 0\n restartPolicy: Always\n schedulerName: default-scheduler\n securityContext: {}\n serviceAccount: default\n serviceAccountName: default\n terminationGracePeriodSeconds: 30\n tolerations:\n - effect: NoExecute\n key: node.kubernetes.io/not-ready\n operator: Exists\n tolerationSeconds: 300\n - effect: NoExecute\n key: node.kubernetes.io/unreachable", "completion": " operator: Exists\n tolerationSeconds: 300\n volumes:\n - name: kube-api-access-bgsgp\n projected:\n defaultMode: 420\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace\nstatus:\n conditions:\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: Initialized\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: Ready\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: ContainersReady\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: PodScheduled\n containerStatuses:\n - containerID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n image: docker.io/library/nginx:latest\n imageID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n lastState: {}\n name: nginx\n ready: true\n restartCount: 0\n started: true\n state:\n running:\n startedAt: \"2022-02-17T21:51:05Z\"\n hostIP: 192.168.0.113\n phase: Running\n podIP: 10.88.0.3\n podIPs:\n - ip: 10.88.0.3\n - ip: 2001:db8::1\n qosClass: Guaranteed\n startTime: \"2022-02-17T21:51:01Z\"", "language": "yaml", "type": "pod-definition", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5193"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2022-02-17T21:51:01Z\"\n generateName: nginx-deployment-67d4bdd6f5-\n labels:\n app: nginx\n pod-template-hash: 67d4bdd6f5\n name: nginx-deployment-67d4bdd6f5-w6kd7\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-67d4bdd6f5\n uid: 7d41dfd4-84c0-4be4-88ab-cedbe626ad82\n resourceVersion: \"1364\"\n uid: a6501da1-0447-4262-98eb-c03d4002222e\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP\n resources:\n limits:\n cpu: 500m\n memory: 128Mi\n requests:\n cpu: 500m\n memory: 128Mi\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\n name: kube-api-access-bgsgp\n readOnly: true\n dnsPolicy: ClusterFirst\n enableServiceLinks: true\n nodeName: kube-worker-1\n preemptionPolicy: PreemptLowerPriority\n priority: 0\n restartPolicy: Always\n schedulerName: default-scheduler\n securityContext: {}\n serviceAccount: default\n serviceAccountName: default\n terminationGracePeriodSeconds: 30\n tolerations:\n - effect: NoExecute\n key: node.kubernetes.io/not-ready\n operator: Exists\n tolerationSeconds: 300\n - effect: NoExecute\n key: node.kubernetes.io/unreachable\n operator: Exists\n tolerationSeconds: 300\n volumes:\n - name: kube-api-access-bgsgp\n projected:\n defaultMode: 420\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace\nstatus:\n conditions:\n - lastProbeTime: null\n 
lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: Initialized\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: Ready\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:06Z\"\n status: \"True\"\n type: ContainersReady\n - lastProbeTime: null\n lastTransitionTime: \"2022-02-17T21:51:01Z\"\n status: \"True\"\n type: PodScheduled\n containerStatuses:\n - containerID: containerd://5403af59a2b46ee5a23fb0ae4b1e077f7ca5c5fb7af16e1ab21c00e0e616462a\n image: docker.io/library/nginx:latest\n imageID: docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n lastState: {}\n name: nginx\n ready: true\n restartCount: 0\n started: true\n state:\n running:\n startedAt: \"2022-02-17T21:51:05Z\"\n hostIP: 192.168.0.113\n phase: Running\n podIP: 10.88.0.3\n podIPs:\n - ip: 10.88.0.3\n - ip: 2001:db8::1\n qosClass: Guaranteed\n startTime: \"2022-02-17T21:51:01Z\"", "completion": "This yaml code Debug Running Pods | First, look at the logs of the affected container:", "language": "yaml", "type": "pod-definition_explanation", "context": "Debug Running Pods | First, look at the logs of the affected container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5194"} {"prompt": "Explain this bash code:\n\nkubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}", "completion": "This bash code Debug Running Pods | If the container image includes debugging utilities, as is the case with images built from Linux and Windows OS base images, you can run commands inside a specific container with kubectl exec:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | If the container image includes debugging utilities, as is the case with images built from Linux and Windows OS base images, you can run commands inside a specific container with kubectl exec:", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5195"} {"prompt": "Explain this bash code:\n\nkubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}", "completion": "This bash code Debug Running Pods | As an example, to look at the logs from a running Cassandra pod, you might run", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | As an example, to look at the logs from a running Cassandra pod, you might run", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5196"} {"prompt": "Explain this bash code:\n\nkubectl exec cassandra -- cat /var/log/cassandra/system.log", "completion": "This bash code Debug Running Pods | You can run a shell that's connected to your terminal using the -i and -t arguments to kubectl exec, for example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | You can run a shell that's connected to your terminal using the -i and -t arguments to kubectl exec, for example:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5197"} {"prompt": "Explain this bash code:\n\nkubectl run ephemeral-demo --image=registry.k8s.io/pause:3.1 --restart=Never", "completion": "This bash code Debug Running Pods | The examples in this section use the pause container image because it does not contain debugging utilities, but this method works with all container images.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | The examples in this section use the pause container image because it does not contain debugging utilities, but this method works with all container images.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5198"} {"prompt": "Explain 
this bash code:\n\nkubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo", "completion": "This bash code Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process namespace sharing in the pod it creates", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | This command adds a new busybox container and attaches to it. The --target parameter targets the process namespace of another container. It's necessary here because kubectl run does not enable process namespace sharing in the pod it creates", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5199"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods | Run this command to create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Run this command to create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5200"} {"prompt": "Explain this bash code:\n\nkubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug", "completion": "This bash code Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Don't forget to clean up the debugging Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5201"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 
--restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods | Then, debug the Pod using an ephemeral container. If the ephemeral container needs to have privilege, you can use the sysadmin profile:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Then, debug the Pod using an ephemeral container. If the ephemeral container needs to have privilege, you can use the sysadmin profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5202"} {"prompt": "Explain this bash code:\n\nkubectl debug -it myapp --image=busybox:1.28 --target=myapp --profile=sysadmin", "completion": "This bash code Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Check the capabilities of the ephemeral container process by running the following command inside the container:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5203"} {"prompt": "Explain this bash code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].securityContext}'", "completion": "This bash code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5204"} {"prompt": "Explain this bash code:\n\nkubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d", "completion": "This bash code Debug Running Pods | Create a custom profile in YAML or JSON format. 
Here, create a YAML format file named custom-profile.yaml:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Create a custom profile in YAML or JSON format. Here, create a YAML format file named custom-profile.yaml:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5205"} {"prompt": "Complete this text code:\n\nenv:\n- name: ENV_VAR_1\n value: value_1\n- name: ENV_VAR_2\n value: value_2", "completion": "securityContext:\n capabilities:\n add:\n - NET_ADMIN\n - SYS_TIME", "language": "text", "type": "code-example", "context": "Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_completion_5206"} {"prompt": "Explain this text code:\n\nenv:\n- name: ENV_VAR_1\n value: value_1\n- name: ENV_VAR_2\n value: value_2\nsecurityContext:\n capabilities:\n add:\n - NET_ADMIN\n - SYS_TIME", "completion": "This text code Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "language": "text", "type": "code-example_explanation", "context": "Debug Running Pods | Run this command to debug the Pod using an ephemeral container with the custom profile:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5207"} {"prompt": "Explain this bash code:\n\nkubectl debug -it myapp --image=busybox:1.28 --target=myapp --profile=general --custom=custom-profile.yaml", "completion": "This bash code Debug Running Pods | You can check that the ephemeral container has been added to the target Pod with the custom profile applied:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | You can check that the ephemeral container has been added to the target Pod 
with the custom profile applied:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5208"} {"prompt": "Explain this bash code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].env}'", "completion": "This bash code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5209"} {"prompt": "Explain this bash code:\n\nkubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[0].securityContext}'", "completion": "This bash code Debug Running Pods | Clean up the Pod when you're finished with it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Running Pods | Clean up the Pod when you're finished with it:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/", "id": "k8s_00189_code_explain_5210"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/", "id": "k8s_00192_code_completion_5211"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/", "id": "k8s_00192_code_explain_5212"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code Objects In Kubernetes | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/", "id": "k8s_00192_code_explain_5213"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This shell code Objects In Kubernetes | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/", "id": "k8s_00192_code_explain_5216"} {"prompt": "Complete this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/", "id": "k8s_00192_code_completion_5217"} {"prompt": "Explain this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/", "id": "k8s_00192_code_explain_5218"} {"prompt": "Complete this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - events\n providers:\n - identity: {} # do not encrypt events even though *.* is specified below\n- resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - aescbc:\n keys:\n - name: key1", "completion": " secret: c2VjcmV0IGlzIHNlY3VyZQ==\n- resources:\n - '*.apps'\n providers:\n - aescbc:\n keys:\n - name: key2\n secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ/Cg==\n- resources:\n - '*.*'\n providers:\n - aescbc:\n keys:\n - name: key3\n secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==", "language": "yaml", "type": "kubernetes-manifest", "context": "kube-apiserver Configuration (v1) | resources is a list containing resources, and their corresponding encryption providers.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-config.v1/", "id": "k8s_00193_code_completion_5222"} {"prompt": "Explain this yaml code:\n\nkind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io/v1\nresources:\n- resources:\n - events\n providers:\n - identity: {} # do not encrypt events even though *.* is specified below\n- resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==\n- resources:\n - '*.apps'\n providers:\n - aescbc:\n keys:\n - 
name: key2\n secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ/Cg==\n- resources:\n - '*.*'\n providers:\n - aescbc:\n keys:\n - name: key3\n secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==", "completion": "This yaml code kube-apiserver Configuration (v1) | resources is a list containing resources, and their corresponding encryption providers.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kube-apiserver Configuration (v1) | resources is a list containing resources, and their corresponding encryption providers.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-config.v1/", "id": "k8s_00193_code_explain_5223"} {"prompt": "Explain this bash code:\n\nkubectl label nodes example-node-1 example-node-2 ssd=true", "completion": "This bash code Running Pods on Only Some Nodes | Let's create a DaemonSet which will provision the daemon pods on the SSD labeled nodes only.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Pods on Only Some Nodes | Let's create a DaemonSet which will provision the daemon pods on the SSD labeled nodes only.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_explain_5226"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: ssd-driver\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: ssd-driver-pod", "completion": " template:\n metadata:\n labels:\n app: ssd-driver-pod\n spec:\n nodeSelector:\n ssd: \"true\"\n containers:\n - name: example-container\n image: example-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Pods on Only Some Nodes | Create the DaemonSet from the manifest by using kubectl create or kubectl apply", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_completion_5227"} {"prompt": 
"Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: ssd-driver\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: ssd-driver-pod\n template:\n metadata:\n labels:\n app: ssd-driver-pod\n spec:\n nodeSelector:\n ssd: \"true\"\n containers:\n - name: example-container\n image: example-image", "completion": "This yaml code Running Pods on Only Some Nodes | Create the DaemonSet from the manifest by using kubectl create or kubectl apply", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Pods on Only Some Nodes | Create the DaemonSet from the manifest by using kubectl create or kubectl apply", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_explain_5228"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\n 1/1 Running 0 13s ..... example-node-1", "completion": " 1/1 Running 0 13s ..... example-node-2\n 1/1 Running 0 5s ..... example-node-3", "language": "text", "type": "code-example", "context": "Running Pods on Only Some Nodes |", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_completion_5229"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\n 1/1 Running 0 13s ..... example-node-1\n 1/1 Running 0 13s ..... example-node-2\n 1/1 Running 0 5s ..... 
example-node-3", "completion": "This text code Running Pods on Only Some Nodes |", "language": "text", "type": "code-example_explanation", "context": "Running Pods on Only Some Nodes |", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_explain_5230"} {"prompt": "Explain this shell code:\n\nkubectl label nodes example-node-1 example-node-2 ssd=true", "completion": "This shell code Running Pods on Only Some Nodes | Let's create a DaemonSet which will provision the daemon pods on the SSD labeled nodes only.", "language": "shell", "type": "code-example_explanation", "context": "Running Pods on Only Some Nodes | Let's create a DaemonSet which will provision the daemon pods on the SSD labeled nodes only.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_explain_5231"} {"prompt": "Complete this yaml code:\n\ncontrollers/daemonset-label-selector.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: ssd-driver\n labels:\n app: 
nginx\nspec:\n selector:\n matchLabels:", "completion": " app: ssd-driver-pod\n template:\n metadata:\n labels:\n app: ssd-driver-pod\n spec:\n nodeSelector:\n ssd: \"true\"\n containers:\n - name: example-container\n image: example-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Pods on Only Some Nodes | Create the DaemonSet from the manifest by using kubectl create or kubectl apply", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_completion_5237"} {"prompt": "Explain this yaml code:\n\ncontrollers/daemonset-label-selector.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: ssd-driver\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: ssd-driver-pod\n template:\n metadata:\n labels:\n app: ssd-driver-pod\n spec:\n nodeSelector:\n ssd: \"true\"\n containers:\n - name: example-container\n image: example-image", "completion": "This yaml code Running Pods on Only Some Nodes | Create the DaemonSet from the manifest by using kubectl create or kubectl apply", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Pods on Only Some Nodes | Create the DaemonSet from the manifest by using kubectl create or kubectl apply", "source": "https://kubernetes.io/docs/tasks/manage-daemon/pods-some-nodes/", "id": "k8s_00194_code_explain_5238"} {"prompt": "Explain this text code:\n\nGRUB_CMDLINE_LINUX=\"hugepagesz=1G hugepages=2 hugepagesz=2M hugepages=512\"", "completion": "This text code Manage HugePages | The nodes will automatically discover and report all huge page resources as schedulable resources.", "language": "text", "type": "code-example_explanation", "context": "Manage HugePages | The nodes will automatically discover and report all huge page resources as schedulable resources.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5241"} {"prompt": "Complete this text code:\n\nCapacity:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...", "completion": "Allocatable:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...", "language": "text", "type": "code-example", "context": "Manage HugePages | Huge pages can be consumed via container level resource requirements using the resource name hugepages-<size>, where <size> is the most compact binary notation using integer values supported on a particular node", "source": 
"https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5242"} {"prompt": "Explain this text code:\n\nCapacity:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...\nAllocatable:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...", "completion": "This text code Manage HugePages | Huge pages can be consumed via container level resource requirements using the resource name hugepages-, where is the most compact binary notation using integer values supported on a part", "language": "text", "type": "code-example_explanation", "context": "Manage HugePages | Huge pages can be consumed via container level resource requirements using the resource name hugepages-, where is the most compact binary notation using integer values supported on a part", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5243"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages-2Mi\n name: hugepage-2mi\n - mountPath: /hugepages-1Gi", "completion": " name: hugepage-1gi\n resources:\n limits:\n hugepages-2Mi: 100Mi\n hugepages-1Gi: 2Gi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage-2mi\n emptyDir:\n medium: HugePages-2Mi\n - name: hugepage-1gi\n emptyDir:\n medium: HugePages-1Gi", "language": "yaml", "type": "pod-definition", "context": "Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5244"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n 
containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages-2Mi\n name: hugepage-2mi\n - mountPath: /hugepages-1Gi\n name: hugepage-1gi\n resources:\n limits:\n hugepages-2Mi: 100Mi\n hugepages-1Gi: 2Gi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage-2mi\n emptyDir:\n medium: HugePages-2Mi\n - name: hugepage-1gi\n emptyDir:\n medium: HugePages-1Gi", "completion": "This yaml code Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "language": "yaml", "type": "pod-definition_explanation", "context": "Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5245"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:", "completion": " - mountPath: /hugepages\n name: hugepage\n resources:\n limits:\n hugepages-2Mi: 100Mi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage\n emptyDir:\n medium: HugePages", "language": "yaml", "type": "pod-definition", "context": "Manage HugePages |", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5246"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages\n name: hugepage\n resources:\n limits:\n hugepages-2Mi: 100Mi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage\n emptyDir:\n medium: HugePages", "completion": "This yaml code Manage HugePages |", "language": "yaml", "type": 
"pod-definition_explanation", "context": "Manage HugePages |", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5247"} {"prompt": "Explain this text code:\n\nGRUB_CMDLINE_LINUX=\"hugepagesz=1G hugepages=2 hugepagesz=2M hugepages=512\"", "completion": "This text code Manage HugePages | The nodes will automatically discover and report all huge page resources as schedulable resources.", "language": "text", "type": "code-example_explanation", "context": "Manage HugePages | The nodes will automatically discover and report all huge page resources as schedulable resources.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5248"} {"prompt": "Complete this text code:\n\nCapacity:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...", "completion": "Allocatable:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...", "language": "text", "type": "code-example", "context": "Manage HugePages | Huge pages can be consumed via container level resource requirements using the resource name hugepages-, where is the most compact binary notation using integer values supported on a part", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5249"} {"prompt": "Explain this text code:\n\nCapacity:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...\nAllocatable:\n cpu: ...\n ephemeral-storage: ...\n hugepages-1Gi: 2Gi\n hugepages-2Mi: 1Gi\n memory: ...\n pods: ...", "completion": "This text code Manage HugePages | Huge pages can be consumed via container level resource requirements using the resource name hugepages-, where is the most compact binary notation using integer values supported on a part", "language": "text", "type": 
"code-example_explanation", "context": "Manage HugePages | Huge pages can be consumed via container level resource requirements using the resource name hugepages-, where is the most compact binary notation using integer values supported on a part", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5250"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages-2Mi\n name: hugepage-2mi\n - mountPath: /hugepages-1Gi", "completion": " name: hugepage-1gi\n resources:\n limits:\n hugepages-2Mi: 100Mi\n hugepages-1Gi: 2Gi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage-2mi\n emptyDir:\n medium: HugePages-2Mi\n - name: hugepage-1gi\n emptyDir:\n medium: HugePages-1Gi", "language": "yaml", "type": "pod-definition", "context": "Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5251"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages-2Mi\n name: hugepage-2mi\n - mountPath: /hugepages-1Gi\n name: hugepage-1gi\n resources:\n limits:\n hugepages-2Mi: 100Mi\n hugepages-1Gi: 2Gi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage-2mi\n emptyDir:\n medium: HugePages-2Mi\n - name: hugepage-1gi\n emptyDir:\n medium: HugePages-1Gi", "completion": "This yaml code Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "language": "yaml", "type": "pod-definition_explanation", "context": "Manage HugePages | A pod 
may use medium: HugePages only if it requests huge pages of one size.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5252"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:", "completion": " - mountPath: /hugepages\n name: hugepage\n resources:\n limits:\n hugepages-2Mi: 100Mi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage\n emptyDir:\n medium: HugePages", "language": "yaml", "type": "pod-definition", "context": "Manage HugePages |", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5253"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages\n name: hugepage\n resources:\n limits:\n hugepages-2Mi: 100Mi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage\n emptyDir:\n medium: HugePages", "completion": "This yaml code Manage HugePages |", "language": "yaml", "type": "pod-definition_explanation", "context": "Manage HugePages |", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5254"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages-2Mi\n name: hugepage-2mi\n - mountPath: /hugepages-1Gi", "completion": " name: hugepage-1gi\n resources:\n limits:\n hugepages-2Mi: 100Mi\n hugepages-1Gi: 2Gi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage-2mi\n emptyDir:\n medium: 
HugePages-2Mi\n - name: hugepage-1gi\n emptyDir:\n medium: HugePages-1Gi", "language": "yaml", "type": "pod-definition", "context": "Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5255"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages-2Mi\n name: hugepage-2mi\n - mountPath: /hugepages-1Gi\n name: hugepage-1gi\n resources:\n limits:\n hugepages-2Mi: 100Mi\n hugepages-1Gi: 2Gi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage-2mi\n emptyDir:\n medium: HugePages-2Mi\n - name: hugepage-1gi\n emptyDir:\n medium: HugePages-1Gi", "completion": "This yaml code Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "language": "yaml", "type": "pod-definition_explanation", "context": "Manage HugePages | A pod may use medium: HugePages only if it requests huge pages of one size.", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5256"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:", "completion": " - mountPath: /hugepages\n name: hugepage\n resources:\n limits:\n hugepages-2Mi: 100Mi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage\n emptyDir:\n medium: HugePages", "language": "yaml", "type": "pod-definition", "context": "Manage HugePages |", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_completion_5257"} {"prompt": "Explain this yaml 
code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: huge-pages-example\nspec:\n containers:\n - name: example\n image: fedora:latest\n command:\n - sleep\n - inf\n volumeMounts:\n - mountPath: /hugepages\n name: hugepage\n resources:\n limits:\n hugepages-2Mi: 100Mi\n memory: 100Mi\n requests:\n memory: 100Mi\n volumes:\n - name: hugepage\n emptyDir:\n medium: HugePages", "completion": "This yaml code Manage HugePages |", "language": "yaml", "type": "pod-definition_explanation", "context": "Manage HugePages |", "source": "https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/", "id": "k8s_00195_code_explain_5258"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: hello-world\nspec:\n selector:\n matchLabels:\n run: load-balancer-example\n replicas: 2\n template:", "completion": " metadata:\n labels:\n run: load-balancer-example\n spec:\n containers:\n - name: hello-world\n image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0\n ports:\n - containerPort: 8080\n protocol: TCP", "language": "yaml", "type": "deployment-definition", "context": "Use a Service to Access an Application in a Cluster | Run a Hello World application in your cluster: Create the application Deployment using the file above:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_completion_5259"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: hello-world\nspec:\n selector:\n matchLabels:\n run: load-balancer-example\n replicas: 2\n template:\n metadata:\n labels:\n run: load-balancer-example\n spec:\n containers:\n - name: hello-world\n image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0\n ports:\n - containerPort: 8080\n protocol: TCP", "completion": "This yaml code Use a Service to Access an Application in a Cluster | Run a Hello World application in your cluster: Create 
the application Deployment using the file above:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Use a Service to Access an Application in a Cluster | Run a Hello World application in your cluster: Create the application Deployment using the file above:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5260"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml", "completion": "This bash code Use a Service to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has two Pods each of which runs the Hello World application.", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a Service to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has two Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5261"} {"prompt": "Explain this bash code:\n\nkubectl get deployments hello-world\nkubectl describe deployments hello-world", "completion": "This bash code Use a Service to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a Service to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5262"} {"prompt": "Explain this bash code:\n\nkubectl get replicasets\nkubectl describe replicasets", "completion": "This bash code Use a Service to Access an Application in a Cluster | Create a Service object that 
exposes the deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a Service to Access an Application in a Cluster | Create a Service object that exposes the deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5263"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment hello-world --type=NodePort --name=example-service", "completion": "This bash code Use a Service to Access an Application in a Cluster | Display information about the Service:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a Service to Access an Application in a Cluster | Display information about the Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5264"} {"prompt": "Complete this text code:\n\nName: example-service\nNamespace: default\nLabels: run=load-balancer-example\nAnnotations: \nSelector: run=load-balancer-example\nType: NodePort", "completion": "IP: 10.32.0.16\nPort: 8080/TCP\nTargetPort: 8080/TCP\nNodePort: 31496/TCP\nEndpoints: 10.200.1.4:8080,10.200.2.5:8080\nSession Affinity: None\nEvents: ", "language": "text", "type": "code-example", "context": "Use a Service to Access an Application in a Cluster | Make a note of the NodePort value for the Service. 
For example, in the preceding output, the NodePort value is 31496.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_completion_5265"} {"prompt": "Explain this text code:\n\nName: example-service\nNamespace: default\nLabels: run=load-balancer-example\nAnnotations: \nSelector: run=load-balancer-example\nType: NodePort\nIP: 10.32.0.16\nPort: 8080/TCP\nTargetPort: 8080/TCP\nNodePort: 31496/TCP\nEndpoints: 10.200.1.4:8080,10.200.2.5:8080\nSession Affinity: None\nEvents: ", "completion": "This text code Use a Service to Access an Application in a Cluster | Make a note of the NodePort value for the Service. For example, in the preceding output, the NodePort value is 31496.", "language": "text", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | Make a note of the NodePort value for the Service. For example, in the preceding output, the NodePort value is 31496.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5266"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=\"run=load-balancer-example\" --output=wide", "completion": "This bash code Use a Service to Access an Application in a Cluster | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a Service to Access an Application in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5267"} {"prompt": "Explain this text code:\n\nNAME READY STATUS ... IP NODE\nhello-world-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1\nhello-world-2895499144-m1pwt 1/1 Running ... 
10.200.2.5 worker2", "completion": "This text code Use a Service to Access an Application in a Cluster | Get the public IP address of one of your nodes that is running a Hello World pod. How you get this address depends on how you set up your cluster. For example, if you are using Minikube, you can see t", "language": "text", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | Get the public IP address of one of your nodes that is running a Hello World pod. How you get this address depends on how you set up your cluster. For example, if you are using Minikube, you can see t", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5268"} {"prompt": "Explain this text code:\n\nHello, world!\nVersion: 2.0.0\nHostname: hello-world-cdd4458f4-m47c8", "completion": "This text code Use a Service to Access an Application in a Cluster | As an alternative to using kubectl expose, you can use a service configuration file to create a Service.", "language": "text", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | As an alternative to using kubectl expose, you can use a service configuration file to create a Service.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5269"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml", "completion": "This shell code Use a Service to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has two Pods each of which runs the Hello World application.", "language": "shell", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. 
The ReplicaSet has two Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5272"} {"prompt": "Explain this shell code:\n\nkubectl get deployments hello-world\nkubectl describe deployments hello-world", "completion": "This shell code Use a Service to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "language": "shell", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5273"} {"prompt": "Explain this shell code:\n\nkubectl get replicasets\nkubectl describe replicasets", "completion": "This shell code Use a Service to Access an Application in a Cluster | Create a Service object that exposes the deployment:", "language": "shell", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | Create a Service object that exposes the deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5274"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment hello-world --type=NodePort --name=example-service", "completion": "This shell code Use a Service to Access an Application in a Cluster | Display information about the Service:", "language": "shell", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | Display information about the Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5275"} {"prompt": "Complete this none code:\n\nName: example-service\nNamespace: 
default\nLabels: run=load-balancer-example\nAnnotations: \nSelector: run=load-balancer-example\nType: NodePort", "completion": "IP: 10.32.0.16\nPort: 8080/TCP\nTargetPort: 8080/TCP\nNodePort: 31496/TCP\nEndpoints: 10.200.1.4:8080,10.200.2.5:8080\nSession Affinity: None\nEvents: ", "language": "none", "type": "code-example", "context": "Use a Service to Access an Application in a Cluster | Make a note of the NodePort value for the Service. For example, in the preceding output, the NodePort value is 31496.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_completion_5276"} {"prompt": "Explain this none code:\n\nName: example-service\nNamespace: default\nLabels: run=load-balancer-example\nAnnotations: \nSelector: run=load-balancer-example\nType: NodePort\nIP: 10.32.0.16\nPort: 8080/TCP\nTargetPort: 8080/TCP\nNodePort: 31496/TCP\nEndpoints: 10.200.1.4:8080,10.200.2.5:8080\nSession Affinity: None\nEvents: ", "completion": "This none code Use a Service to Access an Application in a Cluster | Make a note of the NodePort value for the Service. For example, in the preceding output, the NodePort value is 31496.", "language": "none", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | Make a note of the NodePort value for the Service. 
For example, in the preceding output, the NodePort value is 31496.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5277"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=\"run=load-balancer-example\" --output=wide", "completion": "This shell code Use a Service to Access an Application in a Cluster | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5278"} {"prompt": "Explain this none code:\n\nNAME READY STATUS ... IP NODE\nhello-world-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1\nhello-world-2895499144-m1pwt 1/1 Running ... 10.200.2.5 worker2", "completion": "This none code Use a Service to Access an Application in a Cluster | Get the public IP address of one of your nodes that is running a Hello World pod. How you get this address depends on how you set up your cluster. For example, if you are using Minikube, you can see t", "language": "none", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | Get the public IP address of one of your nodes that is running a Hello World pod. How you get this address depends on how you set up your cluster. 
For example, if you are using Minikube, you can see t", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5279"} {"prompt": "Explain this none code:\n\nHello, world!\nVersion: 2.0.0\nHostname: hello-world-cdd4458f4-m47c8", "completion": "This none code Use a Service to Access an Application in a Cluster | As an alternative to using kubectl expose, you can use a service configuration file to create a Service.", "language": "none", "type": "code-example_explanation", "context": "Use a Service to Access an Application in a Cluster | As an alternative to using kubectl expose, you can use a service configuration file to create a Service.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5280"} {"prompt": "Complete this yaml code:\n\nservice/access/hello-application.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: hello-world\nspec:\n selector:\n matchLabels:\n run: load-balancer-example\n replicas: 2", "completion": " template:\n metadata:\n labels:\n run: load-balancer-example\n spec:\n containers:\n - name: hello-world\n image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0\n ports:\n - containerPort: 8080\n protocol: TCP", "language": "yaml", "type": "deployment-definition", "context": "Use a Service to Access an Application in a Cluster | Run a Hello World application in your cluster: Create the application Deployment using the file above:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_completion_5281"} {"prompt": "Explain this yaml code:\n\nservice/access/hello-application.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: hello-world\nspec:\n selector:\n matchLabels:\n run: load-balancer-example\n replicas: 2\n template:\n metadata:\n labels:\n run: 
load-balancer-example\n spec:\n containers:\n - name: hello-world\n image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0\n ports:\n - containerPort: 8080\n protocol: TCP", "completion": "This yaml code Use a Service to Access an Application in a Cluster | Run a Hello World application in your cluster: Create the application Deployment using the file above:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Use a Service to Access an Application in a Cluster | Run a Hello World application in your cluster: Create the application Deployment using the file above:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5282"} {"prompt": "Explain this bash code:\n\nkubectl get pods 
--selector=\"run=load-balancer-example\" --output=wide", "completion": "This bash code Use a Service to Access an Application in a Cluster | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use a Service to Access an Application in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/", "id": "k8s_00199_code_explain_5289"} {"prompt": "Explain this json code:\n\n{{< feature-state for_k8s_version=\"v1.10\" state=\"beta\" >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5290"} {"prompt": "Explain this json code:\n\n{{< glossary_tooltip text=\"cluster\" term_id=\"cluster\" >}}", "completion": "This json code Custom Hugo Shortcodes | Here's a short glossary definition:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Here's a short glossary definition:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5291"} {"prompt": "Explain this json code:\n\n{{< glossary_definition prepend=\"A cluster is\" term_id=\"cluster\" length=\"short\" >}}", "completion": "This json code Custom Hugo Shortcodes | which renders as:A cluster is a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | which renders as:A cluster is a set of worker machines, called nodes, that run containerized applications. 
Every cluster has at least one worker node.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5292"} {"prompt": "Explain this json code:\n\n{{< glossary_definition term_id=\"cluster\" length=\"all\" >}}", "completion": "This json code Custom Hugo Shortcodes | which renders as:A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.The worker node(s) host the Pods that are the components of the", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | which renders as:A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.The worker node(s) host the Pods that are the components of the", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5293"} {"prompt": "Explain this json code:\n\n{{< api-reference page=\"workload-resources/pod-v1\" >}}", "completion": "This json code Custom Hugo Shortcodes | The content of the page parameter is the suffix of the URL of the API reference page.", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | The content of the page parameter is the suffix of the URL of the API reference page.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5294"} {"prompt": "Explain this json code:\n\n{{< api-reference page=\"workload-resources/pod-v1\" anchor=\"PodSpec\" >}}\n{{< api-reference page=\"workload-resources/pod-v1\" anchor=\"environment-variables\" >}}", "completion": "This json code Custom Hugo Shortcodes | You can change the text of the link by specifying a text parameter, for example by linking to the Environment Variables section of the page:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | You can change the text of the link by 
specifying a text parameter, for example by linking to the Environment Variables section of the page:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5295"} {"prompt": "Explain this json code:\n\n{{< api-reference page=\"workload-resources/pod-v1\" anchor=\"environment-variables\" text=\"Environment Variable\" >}}", "completion": "This json code Custom Hugo Shortcodes | You can make tables more accessible to screen readers by adding a table caption. To add a caption to a table, enclose the table with a table shortcode and specify the caption with the caption paramete", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | You can make tables more accessible to screen readers by adding a table caption. To add a caption to a table, enclose the table with a table shortcode and specify the caption with the caption paramete", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5296"} {"prompt": "Complete this json code:\n\n{{< table caption=\"Configuration parameters\" >}}\nParameter | Description | Default\n:---------|:------------|:-------", "completion": "`timeout` | The timeout for requests | `30s`\n`logLevel` | The log level for log output | `INFO`\n{{< /table >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | The rendered table looks like this:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5297"} {"prompt": "Explain this json code:\n\n{{< table caption=\"Configuration parameters\" >}}\nParameter | Description | Default\n:---------|:------------|:-------\n`timeout` | The timeout for requests | `30s`\n`logLevel` | The log level for log output | `INFO`\n{{< /table >}}", "completion": "This json code Custom Hugo Shortcodes | The rendered table looks like this:", "language": "json", "type": "api-response_explanation", 
"context": "Custom Hugo Shortcodes | The rendered table looks like this:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5298"} {"prompt": "Explain this text code:\n\nConfiguration parameters", "completion": "This text code Custom Hugo Shortcodes | In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution.", "language": "text", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5299"} {"prompt": "Complete this json code:\n\n{{< tabs name=\"tab_with_code\" >}}\n{{< tab name=\"Tab 1\" codelang=\"bash\" >}}\necho \"This is tab 1.\"\n{{< /tab >}}", "completion": "{{< tab name=\"Tab 2\" codelang=\"go\" >}}\nprintln \"This is tab 2.\"\n{{< /tab >}}\n{{< /tabs >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5300"} {"prompt": "Explain this json code:\n\n{{< tabs name=\"tab_with_code\" >}}\n{{< tab name=\"Tab 1\" codelang=\"bash\" >}}\necho \"This is tab 1.\"\n{{< /tab >}}\n{{< tab name=\"Tab 2\" codelang=\"go\" >}}\nprintln \"This is tab 2.\"\n{{< /tab >}}\n{{< /tabs >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5301"} {"prompt": "Complete this json code:\n\n{{< tabs name=\"tab_with_md\" >}}\n{{% tab name=\"Markdown\" %}}\nThis is **some markdown.**\n{{< note >}}\nIt can even contain shortcodes.\n{{< /note >}}\n{{% /tab %}}", 
"completion": "{{< tab name=\"HTML\" >}}\n
    \n\t

    Plain HTML

    \n\t

    This is some plain HTML.

    \n
    \n{{< /tab >}}\n{{< /tabs >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5302"} {"prompt": "Explain this json code:\n\n{{< tabs name=\"tab_with_md\" >}}\n{{% tab name=\"Markdown\" %}}\nThis is **some markdown.**\n{{< note >}}\nIt can even contain shortcodes.\n{{< /note >}}\n{{% /tab %}}\n{{< tab name=\"HTML\" >}}\n
    \n\t

    Plain HTML

    \n\t

    This is some plain HTML.

    \n
    \n{{< /tab >}}\n{{< /tabs >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5303"} {"prompt": "Complete this json code:\n\n{{< tabs name=\"tab_with_file_include\" >}}\n{{< tab name=\"Content File #1\" include=\"example1\" />}}", "completion": "{{< tab name=\"Content File #2\" include=\"example2\" />}}\n{{< tab name=\"JSON File\" include=\"podtemplate\" />}}\n{{< /tabs >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5304"} {"prompt": "Explain this json code:\n\n{{< tabs name=\"tab_with_file_include\" >}}\n{{< tab name=\"Content File #1\" include=\"example1\" />}}\n{{< tab name=\"Content File #2\" include=\"example2\" />}}\n{{< tab name=\"JSON File\" include=\"podtemplate\" />}}\n{{< /tabs >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5305"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"v1\",\n \"kind\": \"PodTemplate\",\n \"metadata\": {\n \"name\": \"nginx\"\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"name\": \"nginx\"\n },", "completion": " \"generateName\": \"nginx-\"\n },\n \"spec\": {\n \"containers\": [{\n \"name\": \"nginx\",\n \"image\": \"dockerfile/nginx\",\n \"ports\": [{\"containerPort\": 80}]\n }]\n }\n }\n }", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow 
users to download or copy its content to their clipboard. This shortcode is used when the contents", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5306"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"v1\",\n \"kind\": \"PodTemplate\",\n \"metadata\": {\n \"name\": \"nginx\"\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"name\": \"nginx\"\n },\n \"generateName\": \"nginx-\"\n },\n \"spec\": {\n \"containers\": [{\n \"name\": \"nginx\",\n \"image\": \"dockerfile/nginx\",\n \"ports\": [{\"containerPort\": 80}]\n }]\n }\n }\n }", "completion": "This json code Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. 
This shortcode is used when the contents", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5307"} {"prompt": "Explain this json code:\n\n{{% code_sample language=\"yaml\" file=\"application/deployment-scale.yaml\" %}}", "completion": "This json code Custom Hugo Shortcodes | The output is:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | The output is:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5308"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5309"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. 
In the markdown of your page, use the code", "language": "yaml", "type": "deployment-definition_explanation", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5310"} {"prompt": "Explain this json code:\n\n{{% code_sample file=\"/example-yaml>\" %}}", "completion": "This json code Custom Hugo Shortcodes | where is the path to the sample file to include, relative to the examples directory. The following shortcode references a YAML file located at /content/en/examples/configmap/configmaps", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | where is the path to the sample file to include, relative to the examples directory. The following shortcode references a YAML file located at /content/en/examples/configmap/configmaps", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5311"} {"prompt": "Explain this json code:\n\n{{% code_sample file=\"configmap/configmaps.yaml\" %}}", "completion": "This json code Custom Hugo Shortcodes | The legacy {{% codenew %}} shortcode is being replaced by {{% code_sample %}}. Use {{% code_sample %}} (not {{% codenew %}} or {{% code %}}) in new documentation.", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | The legacy {{% codenew %}} shortcode is being replaced by {{% code_sample %}}. 
Use {{% code_sample %}} (not {{% codenew %}} or {{% code %}}) in new documentation.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5312"} {"prompt": "Complete this json code:\n\n{{< details summary=\"More about widgets\" >}}\nThe frobnicator extension API implements _widgets_ using example running text.\n", "completion": "Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur,\nadipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et\ndolore magnam aliquam quaerat voluptatem.\n{{< /details >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5313"} {"prompt": "Explain this json code:\n\n{{< details summary=\"More about widgets\" >}}\nThe frobnicator extension API implements _widgets_ using example running text.\n\nNeque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur,\nadipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et\ndolore magnam aliquam quaerat voluptatem.\n{{< /details >}}", "completion": "This json code Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "source": 
"https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5314"} {"prompt": "Explain this json code:\n\n{{< feature-state for_k8s_version=\"v1.10\" state=\"beta\" >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5315"} {"prompt": "Explain this text code:\n\ncontent/en/docs/reference/command-line-tools-reference/feature-gates/", "completion": "This text code Custom Hugo Shortcodes | Renders to:", "language": "text", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5316"} {"prompt": "Explain this json code:\n\n{{< glossary_tooltip text=\"cluster\" term_id=\"cluster\" >}}", "completion": "This json code Custom Hugo Shortcodes | Here's a short glossary definition:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Here's a short glossary definition:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5317"} {"prompt": "Explain this json code:\n\n{{< glossary_definition prepend=\"A cluster is\" term_id=\"cluster\" length=\"short\" >}}", "completion": "This json code Custom Hugo Shortcodes | which renders as:A cluster is a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | which renders as:A cluster is a set of worker machines, called nodes, that run containerized applications. 
Every cluster has at least one worker node.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5318"} {"prompt": "Explain this json code:\n\n{{< glossary_definition term_id=\"cluster\" length=\"all\" >}}", "completion": "This json code Custom Hugo Shortcodes | which renders as:A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.The worker node(s) host the Pods that are the components of the", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | which renders as:A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.The worker node(s) host the Pods that are the components of the", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5319"} {"prompt": "Explain this json code:\n\n{{< api-reference page=\"workload-resources/pod-v1\" >}}", "completion": "This json code Custom Hugo Shortcodes | The content of the page parameter is the suffix of the URL of the API reference page.", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | The content of the page parameter is the suffix of the URL of the API reference page.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5320"} {"prompt": "Explain this json code:\n\n{{< api-reference page=\"workload-resources/pod-v1\" anchor=\"PodSpec\" >}}\n{{< api-reference page=\"workload-resources/pod-v1\" anchor=\"environment-variables\" >}}", "completion": "This json code Custom Hugo Shortcodes | You can change the text of the link by specifying a text parameter, for example by linking to the Environment Variables section of the page:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | You can change the text of the link by 
specifying a text parameter, for example by linking to the Environment Variables section of the page:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5321"} {"prompt": "Explain this json code:\n\n{{< api-reference page=\"workload-resources/pod-v1\" anchor=\"environment-variables\" text=\"Environment Variable\" >}}", "completion": "This json code Custom Hugo Shortcodes | You can make tables more accessible to screen readers by adding a table caption. To add a caption to a table, enclose the table with a table shortcode and specify the caption with the caption paramete", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | You can make tables more accessible to screen readers by adding a table caption. To add a caption to a table, enclose the table with a table shortcode and specify the caption with the caption paramete", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5322"} {"prompt": "Complete this go-html-template code:\n\n{{< table caption=\"Configuration parameters\" >}}\nParameter | Description | Default\n:---------|:------------|:-------", "completion": "`timeout` | The timeout for requests | `30s`\n`logLevel` | The log level for log output | `INFO`\n{{< /table >}}", "language": "go-html-template", "type": "code-example", "context": "Custom Hugo Shortcodes | The rendered table looks like this:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5323"} {"prompt": "Explain this go-html-template code:\n\n{{< table caption=\"Configuration parameters\" >}}\nParameter | Description | Default\n:---------|:------------|:-------\n`timeout` | The timeout for requests | `30s`\n`logLevel` | The log level for log output | `INFO`\n{{< /table >}}", "completion": "This go-html-template code Custom Hugo Shortcodes | The rendered table looks like this:", "language": 
"go-html-template", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | The rendered table looks like this:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5324"} {"prompt": "Explain this html code:\n\nConfiguration parameters", "completion": "This html code Custom Hugo Shortcodes | In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution.", "language": "html", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5325"} {"prompt": "Explain this json code:\n\n{{< tab name=\"Content File #1\" include=\"example1\" />}}", "completion": "This json code Custom Hugo Shortcodes | Below is a demo of the tabs shortcode.", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Below is a demo of the tabs shortcode.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5326"} {"prompt": "Explain this json code:\n\n{{% tab name=\"Tab 1\" %}}This is **markdown**{{% /tab %}}", "completion": "This json code Custom Hugo Shortcodes | Below is a demo of the tabs shortcode.", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Below is a demo of the tabs shortcode.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5327"} {"prompt": "Complete this go-text-template code:\n\n{{< tabs name=\"tab_with_code\" >}}\n{{< tab name=\"Tab 1\" codelang=\"bash\" >}}\necho \"This is tab 1.\"\n{{< /tab >}}", "completion": "{{< tab name=\"Tab 2\" codelang=\"go\" >}}\nprintln \"This is tab 2.\"\n{{< /tab >}}\n{{< /tabs >}}", "language": 
"go-text-template", "type": "code-example", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5328"} {"prompt": "Explain this go-text-template code:\n\n{{< tabs name=\"tab_with_code\" >}}\n{{< tab name=\"Tab 1\" codelang=\"bash\" >}}\necho \"This is tab 1.\"\n{{< /tab >}}\n{{< tab name=\"Tab 2\" codelang=\"go\" >}}\nprintln \"This is tab 2.\"\n{{< /tab >}}\n{{< /tabs >}}", "completion": "This go-text-template code Custom Hugo Shortcodes | Renders to:", "language": "go-text-template", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5329"} {"prompt": "Complete this go-html-template code:\n\n{{< tabs name=\"tab_with_md\" >}}\n{{% tab name=\"Markdown\" %}}\nThis is **some markdown.**\n{{< note >}}\nIt can even contain shortcodes.\n{{< /note >}}\n{{% /tab %}}", "completion": "{{< tab name=\"HTML\" >}}\n
    \n\t

    Plain HTML

    \n\t

    This is some plain HTML.

    \n
    \n{{< /tab >}}\n{{< /tabs >}}", "language": "go-html-template", "type": "code-example", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5330"} {"prompt": "Explain this go-html-template code:\n\n{{< tabs name=\"tab_with_md\" >}}\n{{% tab name=\"Markdown\" %}}\nThis is **some markdown.**\n{{< note >}}\nIt can even contain shortcodes.\n{{< /note >}}\n{{% /tab %}}\n{{< tab name=\"HTML\" >}}\n
    \n\t

    Plain HTML

    \n\t

    This is some plain HTML.

    \n
    \n{{< /tab >}}\n{{< /tabs >}}", "completion": "This go-html-template code Custom Hugo Shortcodes | Renders to:", "language": "go-html-template", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5331"} {"prompt": "Complete this go-text-template code:\n\n{{< tabs name=\"tab_with_file_include\" >}}\n{{< tab name=\"Content File #1\" include=\"example1\" />}}", "completion": "{{< tab name=\"Content File #2\" include=\"example2\" />}}\n{{< tab name=\"JSON File\" include=\"podtemplate\" />}}\n{{< /tabs >}}", "language": "go-text-template", "type": "code-example", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5332"} {"prompt": "Explain this go-text-template code:\n\n{{< tabs name=\"tab_with_file_include\" >}}\n{{< tab name=\"Content File #1\" include=\"example1\" />}}\n{{< tab name=\"Content File #2\" include=\"example2\" />}}\n{{< tab name=\"JSON File\" include=\"podtemplate\" />}}\n{{< /tabs >}}", "completion": "This go-text-template code Custom Hugo Shortcodes | Renders to:", "language": "go-text-template", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5333"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"v1\",\n \"kind\": \"PodTemplate\",\n \"metadata\": {\n \"name\": \"nginx\"\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"name\": \"nginx\"\n },", "completion": " \"generateName\": \"nginx-\"\n },\n \"spec\": {\n \"containers\": [{\n \"name\": \"nginx\",\n \"image\": \"dockerfile/nginx\",\n \"ports\": [{\"containerPort\": 80}]\n }]\n }\n }\n }", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | You can use the {{% 
code_sample %}} shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5334"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"v1\",\n \"kind\": \"PodTemplate\",\n \"metadata\": {\n \"name\": \"nginx\"\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"name\": \"nginx\"\n },\n \"generateName\": \"nginx-\"\n },\n \"spec\": {\n \"containers\": [{\n \"name\": \"nginx\",\n \"image\": \"dockerfile/nginx\",\n \"ports\": [{\"containerPort\": 80}]\n }]\n }\n }\n }", "completion": "This json code Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. 
This shortcode is used when the contents", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5335"} {"prompt": "Explain this none code:\n\n{{% code_sample language=\"yaml\" file=\"application/deployment-scale.yaml\" %}}", "completion": "This none code Custom Hugo Shortcodes | The output is:", "language": "none", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | The output is:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5336"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5337"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. 
In the markdown of your page, use the code", "language": "yaml", "type": "deployment-definition_explanation", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5338"} {"prompt": "Explain this none code:\n\n{{% code_sample file=\"/example-yaml>\" %}}", "completion": "This none code Custom Hugo Shortcodes | where is the path to the sample file to include, relative to the examples directory. The following shortcode references a YAML file located at /content/en/examples/configmap/configmaps", "language": "none", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | where is the path to the sample file to include, relative to the examples directory. The following shortcode references a YAML file located at /content/en/examples/configmap/configmaps", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5339"} {"prompt": "Explain this none code:\n\n{{% code_sample file=\"configmap/configmaps.yaml\" %}}", "completion": "This none code Custom Hugo Shortcodes | The legacy {{% codenew %}} shortcode is being replaced by {{% code_sample %}}. Use {{% code_sample %}} (not {{% codenew %}} or {{% code %}}) in new documentation.", "language": "none", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | The legacy {{% codenew %}} shortcode is being replaced by {{% code_sample %}}. 
Use {{% code_sample %}} (not {{% codenew %}} or {{% code %}}) in new documentation.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5340"} {"prompt": "Complete this markdown code:\n\n{{< details summary=\"More about widgets\" >}}\nThe frobnicator extension API implements _widgets_ using example running text.\n", "completion": "Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur,\nadipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et\ndolore magnam aliquam quaerat voluptatem.\n{{< /details >}}", "language": "markdown", "type": "code-example", "context": "Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5341"} {"prompt": "Explain this markdown code:\n\n{{< details summary=\"More about widgets\" >}}\nThe frobnicator extension API implements _widgets_ using example running text.\n\nNeque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur,\nadipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et\ndolore magnam aliquam quaerat voluptatem.\n{{< /details >}}", "completion": "This markdown code Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "language": "markdown", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "source": 
"https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5342"} {"prompt": "Complete this json code:\n\n{{< table caption=\"Configuration parameters\" >}}\nParameter | Description | Default\n:---------|:------------|:-------", "completion": "`timeout` | The timeout for requests | `30s`\n`logLevel` | The log level for log output | `INFO`\n{{< /table >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | The rendered table looks like this:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5343"} {"prompt": "Explain this json code:\n\n{{< table caption=\"Configuration parameters\" >}}\nParameter | Description | Default\n:---------|:------------|:-------\n`timeout` | The timeout for requests | `30s`\n`logLevel` | The log level for log output | `INFO`\n{{< /table >}}", "completion": "This json code Custom Hugo Shortcodes | The rendered table looks like this:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | The rendered table looks like this:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5344"} {"prompt": "Explain this text code:\n\nConfiguration parameters", "completion": "This text code Custom Hugo Shortcodes | In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution.", "language": "text", "type": "code-example_explanation", "context": "Custom Hugo Shortcodes | In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution.", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5345"} {"prompt": "Complete this json code:\n\n{{< tabs name=\"tab_with_code\" >}}\n{{< tab name=\"Tab 1\" codelang=\"bash\" >}}\necho \"This is tab 1.\"\n{{< /tab >}}", "completion": "{{< tab name=\"Tab 2\" 
codelang=\"go\" >}}\nprintln \"This is tab 2.\"\n{{< /tab >}}\n{{< /tabs >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5346"} {"prompt": "Explain this json code:\n\n{{< tabs name=\"tab_with_code\" >}}\n{{< tab name=\"Tab 1\" codelang=\"bash\" >}}\necho \"This is tab 1.\"\n{{< /tab >}}\n{{< tab name=\"Tab 2\" codelang=\"go\" >}}\nprintln \"This is tab 2.\"\n{{< /tab >}}\n{{< /tabs >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5347"} {"prompt": "Complete this json code:\n\n{{< tabs name=\"tab_with_md\" >}}\n{{% tab name=\"Markdown\" %}}\nThis is **some markdown.**\n{{< note >}}\nIt can even contain shortcodes.\n{{< /note >}}\n{{% /tab %}}", "completion": "{{< tab name=\"HTML\" >}}\n
    \n\t

    Plain HTML

    \n\t

    This is some plain HTML.

    \n
    \n{{< /tab >}}\n{{< /tabs >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5348"} {"prompt": "Explain this json code:\n\n{{< tabs name=\"tab_with_md\" >}}\n{{% tab name=\"Markdown\" %}}\nThis is **some markdown.**\n{{< note >}}\nIt can even contain shortcodes.\n{{< /note >}}\n{{% /tab %}}\n{{< tab name=\"HTML\" >}}\n
    \n\t

    Plain HTML

    \n\t

    This is some plain HTML.

    \n
    \n{{< /tab >}}\n{{< /tabs >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5349"} {"prompt": "Complete this json code:\n\n{{< tabs name=\"tab_with_file_include\" >}}\n{{< tab name=\"Content File #1\" include=\"example1\" />}}", "completion": "{{< tab name=\"Content File #2\" include=\"example2\" />}}\n{{< tab name=\"JSON File\" include=\"podtemplate\" />}}\n{{< /tabs >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5350"} {"prompt": "Explain this json code:\n\n{{< tabs name=\"tab_with_file_include\" >}}\n{{< tab name=\"Content File #1\" include=\"example1\" />}}\n{{< tab name=\"Content File #2\" include=\"example2\" />}}\n{{< tab name=\"JSON File\" include=\"podtemplate\" />}}\n{{< /tabs >}}", "completion": "This json code Custom Hugo Shortcodes | Renders to:", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | Renders to:", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5351"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"v1\",\n \"kind\": \"PodTemplate\",\n \"metadata\": {\n \"name\": \"nginx\"\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"name\": \"nginx\"\n },", "completion": " \"generateName\": \"nginx-\"\n },\n \"spec\": {\n \"containers\": [{\n \"name\": \"nginx\",\n \"image\": \"dockerfile/nginx\",\n \"ports\": [{\"containerPort\": 80}]\n }]\n }\n }\n }", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow 
users to download or copy its content to their clipboard. This shortcode is used when the contents", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5352"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"v1\",\n \"kind\": \"PodTemplate\",\n \"metadata\": {\n \"name\": \"nginx\"\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"name\": \"nginx\"\n },\n \"generateName\": \"nginx-\"\n },\n \"spec\": {\n \"containers\": [{\n \"name\": \"nginx\",\n \"image\": \"dockerfile/nginx\",\n \"ports\": [{\"containerPort\": 80}]\n }]\n }\n }\n }", "completion": "This json code Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | You can use the {{% code_sample %}} shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5353"} {"prompt": "Complete this yaml code:\n\napplication/deployment-scale.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. 
In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5354"} {"prompt": "Explain this yaml code:\n\napplication/deployment-scale.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "language": "yaml", "type": "deployment-definition_explanation", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5355"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. 
In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5356"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "language": "yaml", "type": "deployment-definition_explanation", "context": "Custom Hugo Shortcodes | When adding a new sample file, such as a YAML file, create the file in one of the /examples/ subdirectories where is the language for the page. In the markdown of your page, use the code", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5357"} {"prompt": "Complete this json code:\n\n{{< details summary=\"More about widgets\" >}}\nThe frobnicator extension API implements _widgets_ using example running text.\n", "completion": "Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur,\nadipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et\ndolore magnam aliquam quaerat voluptatem.\n{{< /details >}}", "language": "json", "type": "api-response", "context": "Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_completion_5358"} {"prompt": "Explain this json code:\n\n{{< 
details summary=\"More about widgets\" >}}\nThe frobnicator extension API implements _widgets_ using example running text.\n\nNeque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur,\nadipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et\ndolore magnam aliquam quaerat voluptatem.\n{{< /details >}}", "completion": "This json code Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "language": "json", "type": "api-response_explanation", "context": "Custom Hugo Shortcodes | This renders as:More about widgetsThe frobnicator extension API implements widgets using example running text.Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci vel", "source": "https://kubernetes.io/docs/contribute/style/hugo-shortcodes/", "id": "k8s_00200_code_explain_5359"} {"prompt": "Explain this bash code:\n\nkubectl set serviceaccount (-f FILENAME | TYPE NAME) SERVICE_ACCOUNT", "completion": "This bash code kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/", "id": "k8s_00201_code_explain_5360"} {"prompt": "Complete this text code:\n\n# Set deployment nginx-deployment's service account to serviceaccount1\n kubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": " \n # Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server\n kubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o 
yaml", "language": "text", "type": "code-example", "context": "kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/", "id": "k8s_00201_code_completion_5361"} {"prompt": "Explain this text code:\n\n# Set deployment nginx-deployment's service account to serviceaccount1\n kubectl set serviceaccount deployment nginx-deployment serviceaccount1\n \n # Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server\n kubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This text code kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/", "id": "k8s_00201_code_explain_5362"} {"prompt": "Explain this bash code:\n\nkubectl set serviceaccount (-f FILENAME | TYPE NAME) SERVICE_ACCOUNT", "completion": "This bash code kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/", "id": "k8s_00201_code_explain_5363"} {"prompt": "Complete this text code:\n\n# Set deployment nginx-deployment's service account to serviceaccount1\n kubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": " \n # Print the result (in YAML format) of updated nginx 
deployment with the service account from local file, without hitting the API server\n kubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "language": "text", "type": "code-example", "context": "kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/", "id": "k8s_00201_code_completion_5364"} {"prompt": "Explain this text code:\n\n# Set deployment nginx-deployment's service account to serviceaccount1\n kubectl set serviceaccount deployment nginx-deployment serviceaccount1\n \n # Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server\n kubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This text code kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set serviceaccount | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_serviceaccount/", "id": "k8s_00201_code_explain_5365"} {"prompt": "Explain this text code:\n\ngit clone https://www.github.com/kubernetes/kubernetes", "completion": "This text code Generating Reference Documentation for Metrics | This creates a kubernetes folder in your current working directory.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for Metrics | This creates a kubernetes folder in your current working directory.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5366"} {"prompt": "Explain this text 
code:\n\n./test/instrumentation/documentation/documentation.md\r\n./test/instrumentation/documentation/documentation-list.yaml", "completion": "This text code Generating Reference Documentation for Metrics | Set the Kubernetes website root environment variable.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for Metrics | Set the Kubernetes website root environment variable.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5367"} {"prompt": "Explain this text code:\n\ncp ./test/instrumentation/documentation/documentation.md \"${WEBSITE_ROOT}/content/en/docs/reference/instrumentation/metrics.md\"", "completion": "This text code Generating Reference Documentation for Metrics | To create a pull request, follow the instructions in Opening a pull request.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for Metrics | To create a pull request, follow the instructions in Opening a pull request.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5368"} {"prompt": "Explain this shell code:\n\ngit clone https://www.github.com/kubernetes/kubernetes", "completion": "This shell code Generating Reference Documentation for Metrics | This creates a kubernetes folder in your current working directory.", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for Metrics | This creates a kubernetes folder in your current working directory.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5369"} {"prompt": "Explain this text code:\n\n./test/instrumentation/documentation/documentation.md\r\n./test/instrumentation/documentation/documentation-list.yaml", "completion": "This text code Generating Reference Documentation for 
Metrics | Set the Kubernetes website root environment variable.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for Metrics | Set the Kubernetes website root environment variable.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5370"} {"prompt": "Explain this shell code:\n\ncp ./test/instrumentation/documentation/documentation.md \"${WEBSITE_ROOT}/content/en/docs/reference/instrumentation/metrics.md\"", "completion": "This shell code Generating Reference Documentation for Metrics | To create a pull request, follow the instructions in Opening a pull request.", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for Metrics | To create a pull request, follow the instructions in Opening a pull request.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5371"} {"prompt": "Explain this text code:\n\ngit clone https://www.github.com/kubernetes/kubernetes", "completion": "This text code Generating Reference Documentation for Metrics | This creates a kubernetes folder in your current working directory.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for Metrics | This creates a kubernetes folder in your current working directory.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5372"} {"prompt": "Explain this text code:\n\ncp ./test/instrumentation/documentation/documentation.md \"${WEBSITE_ROOT}/content/en/docs/reference/instrumentation/metrics.md\"", "completion": "This text code Generating Reference Documentation for Metrics | To create a pull request, follow the instructions in Opening a pull request.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference 
Documentation for Metrics | To create a pull request, follow the instructions in Opening a pull request.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/metrics-reference/", "id": "k8s_00202_code_explain_5373"} {"prompt": "Explain this text code:\n\n# Display the current-context\n kubectl config current-context", "completion": "This text code kubectl config current-context | help for current-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config current-context | help for current-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_current-context/", "id": "k8s_00204_code_explain_5374"} {"prompt": "Explain this text code:\n\n# Display the current-context\n kubectl config current-context", "completion": "This text code kubectl config current-context | help for current-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config current-context | help for current-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_current-context/", "id": "k8s_00204_code_explain_5375"} {"prompt": "Complete this text code:\n\n# sysctl params required by setup, params persist across reboots\ncat <\n Mounts: ", "completion": " Volumes: \n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: \n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s 
deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "language": "text", "type": "code-example", "context": "Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5427"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: \n Mounts: \n Volumes: \n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: \n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s 
deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "completion": "This text code Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "language": "text", "type": "code-example_explanation", "context": "Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5428"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5429"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 1 out of 3 new replicas have been updated...", "completion": "This text code Deployments | Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Press Ctrl-C to stop the above rollout status watch. 
For more information on stuck rollouts, read more here.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5430"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s", "completion": "nginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "language": "text", "type": "code-example", "context": "Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5431"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s\nnginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "completion": "This text code Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5432"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s", "completion": "nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "language": "text", "type": "code-example", "context": "Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5433"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 
25s\nnginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "completion": "This text code Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5434"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: ", "completion": " Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set 
nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "language": "text", "type": "code-example", "context": "Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5435"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal 
ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "completion": "This text code Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "language": "text", "type": "code-example_explanation", "context": "Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5436"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5437"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE", "completion": "1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "language": "text", "type": "code-example", "context": "Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. 
You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5438"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE\n1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This text code Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "language": "text", "type": "code-example_explanation", "context": "Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5439"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5440"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:", "completion": " Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: \n No volumes.", "language": "text", "type": "code-example", "context": "Deployments | Follow the steps given below to rollback the
Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5441"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: \n No volumes.", "completion": "This text code Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5442"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5443"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 30m", "completion": "This text code Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5444"} {"prompt": "Complete this text 
code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: ", "completion": " Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: \nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set 
nginx-deployment-595696685f to 0", "language": "text", "type": "code-example", "context": "Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5445"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: \nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n 
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "completion": "This text code Deployments | You can scale a Deployment by using the following command:", "language": "text", "type": "code-example_explanation", "context": "Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5446"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5447"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5448"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 10 10 10 10 50s", "completion": "This text code Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "language": "text", "type": "code-example_explanation", "context": "Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5449"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5450"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 5 5 0 9s\nnginx-deployment-618515232 8 8 8 1m", "completion": "This text code Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you", "language": "text", "type": "code-example_explanation", "context": "Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. 
If you", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5451"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 15 18 7 8 7m", "completion": "This text code Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "language": "text", "type": "code-example_explanation", "context": "Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5452"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 7 7 0 7m\nnginx-deployment-618515232 11 11 11 7m", "completion": "This text code Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deplo", "language": "text", "type": "code-example_explanation", "context": "Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. 
When you're ready to apply those changes, you resume rollouts for the Deplo", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5453"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx 3 3 3 3 1m", "completion": "This text code Deployments | Get the rollout status:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the rollout status:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5454"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 1m", "completion": "This text code Deployments | Pause by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Pause by running the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5455"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5456"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5457"} {"prompt": "Explain this text code:\n\ndeployments \"nginx\"\nREVISION CHANGE-CAUSE\n1 ", "completion": "This text code Deployments | Get 
the rollout status to verify that the existing ReplicaSet has not changed:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the rollout status to verify that the existing ReplicaSet has not changed:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5458"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 2m", "completion": "This text code Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "language": "text", "type": "code-example_explanation", "context": "Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5459"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5460"} {"prompt": "Explain this text code:\n\ndeployment.apps/nginx-deployment resource requirements updated", "completion": "This text code Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "language": "text", "type": "code-example_explanation", "context": "Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5461"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s", "completion": "nginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "language": "text", "type": "code-example", "context": "Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5462"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s\nnginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "completion": "This text code Deployments | Get the status of the latest rollout:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5463"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 28s", "completion": "This text code Deployments | A Deployment enters various states during its lifecycle. 
It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "language": "text", "type": "code-example_explanation", "context": "Deployments | A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5464"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 of 3 updated replicas are available...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code Deployments | and the exit status from kubectl rollout is 0 (success):", "language": "text", "type": "code-example_explanation", "context": "Deployments | and the exit status from kubectl rollout is 0 (success):", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5465"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5466"} {"prompt": "Complete this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "language": "text", "type": "code-example", "context": "Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5467"} 
{"prompt": "Explain this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "completion": "This text code Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5468"} {"prompt": "Complete this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.", "completion": " reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "language": "text", "type": "code-example", "context": "Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5469"} {"prompt": "Explain this text code:\n\nstatus:\n availableReplicas: 2\n 
conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.\n reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "completion": "This text code Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5470"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "language": "text", "type": "code-example", "context": "Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. 
If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5471"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "completion": "This text code Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "language": "text", "type": "code-example_explanation", "context": "Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5472"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason", "completion": " ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "language": "text", "type": "code-example", "context": "Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5473"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "completion": "This text code Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. 
Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "language": "text", "type": "code-example_explanation", "context": "Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5474"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\nerror: deployment \"nginx\" exceeded its progress deadline", "completion": "This text code Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "language": "text", "type": "code-example_explanation", "context": "Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5475"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5476"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n 
labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5477"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5478"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - 
containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5479"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5480"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code Deployments | 
.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5481"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5482"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5483"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This shell code Deployments | Run kubectl get 
deployments to check if the Deployment was created.", "language": "shell", "type": "code-example_explanation", "context": "Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5484"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 0/3 0 0 1s", "completion": "This text code Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "language": "text", "type": "code-example_explanation", "context": "Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5485"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code Deployments | Run the kubectl get deployments again a few seconds later. The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Run the kubectl get deployments again a few seconds later. 
The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5486"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 18s", "completion": "This text code Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5487"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-75675f5897 3 3 3 18s", "completion": "This text code Deployments | ReplicaSet output shows the following fields:", "language": "text", "type": "code-example_explanation", "context": "Deployments | ReplicaSet output shows the following fields:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5488"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "language": "text", "type": "code-example", "context": "Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5489"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 
Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "This text code Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "language": "text", "type": "code-example_explanation", "context": "Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5490"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code Deployments | or use the following command:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | or use the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5491"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "language": "shell", "type": "code-example_explanation", "context": "Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5492"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...", "completion": "This text code Deployments | or", "language": "text", "type": "code-example_explanation", "context": "Deployments | or", 
"source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5493"} {"prompt": "Explain this text code:\n\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code Deployments | Get more details on your updated Deployment:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get more details on your updated Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5494"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 36s", "completion": "This text code Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5495"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 6s\nnginx-deployment-2035384211 0 0 0 36s", "completion": "This text code Deployments | Running get pods should now show only the new Pods:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Running get pods should now show only the new Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5496"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s", "completion": "nginx-deployment-1564180365-nacti 1/1 Running 0 
14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "language": "text", "type": "code-example", "context": "Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5497"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s\nnginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "completion": "This text code Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5498"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: \n Mounts: ", "completion": " Volumes: \n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: \n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 
to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "language": "text", "type": "code-example", "context": "Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5499"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: \n Mounts: \n Volumes: \n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: \n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 
to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "completion": "This text code Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "language": "text", "type": "code-example_explanation", "context": "Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5500"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5501"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 1 out of 3 new replicas have been updated...", "completion": "This text code Deployments | Press Ctrl-C to stop the above rollout status watch. 
For more information on stuck rollouts, read more here.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5502"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s", "completion": "nginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "language": "text", "type": "code-example", "context": "Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5503"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s\nnginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "completion": "This text code Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5504"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s", "completion": "nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "language": "text", "type": "code-example", "context": "Deployments | Get the description of the Deployment:", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5505"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s\nnginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "completion": "This text code Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5506"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: ", "completion": " Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set 
nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "language": "text", "type": "code-example", "context": "Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5507"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal 
ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "completion": "This text code Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "language": "text", "type": "code-example_explanation", "context": "Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5508"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5509"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE", "completion": "1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "language": "text", "type": "code-example", 
"context": "Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5510"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE\n1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This text code Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "language": "text", "type": "code-example_explanation", "context": "Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. 
You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5511"} {"prompt": "Explain this bash code:\n\nkubectl annotate deployment/nginx-deployment kubernetes.io/change-cause=\"image updated to 1.16.1\"", "completion": "This bash code Deployments | To see the details of each revision, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | To see the details of each revision, run:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5512"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5513"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:", "completion": " Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: \n No volumes.", "language": "text", "type": "code-example", "context": "Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5514"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image 
deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: \n No volumes.", "completion": "This text code Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "language": "text", "type": "code-example_explanation", "context": "Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5515"} {"prompt": "Explain this shell code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5516"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 30m", "completion": "This text code Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5517"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 
0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: ", "completion": " Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: \nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "language": "text", "type": "code-example", "context": "Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5518"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 
-0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: \nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "completion": "This text code Deployments | You can scale a Deployment by using the 
following command:", "language": "text", "type": "code-example_explanation", "context": "Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5519"} {"prompt": "Explain this shell code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5520"} {"prompt": "Explain this shell code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5521"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 10 10 10 10 50s", "completion": "This text code Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "language": "text", "type": "code-example_explanation", "context": "Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5522"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5523"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 5 5 0 9s\nnginx-deployment-618515232 8 8 8 1m", "completion": "This text code Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you", "language": "text", "type": "code-example_explanation", "context": "Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5524"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 15 18 7 8 7m", "completion": "This text code Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "language": "text", "type": "code-example_explanation", "context": "Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5525"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 7 7 0 7m\nnginx-deployment-618515232 11 11 11 7m", "completion": "This text code Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. 
When you're ready to apply those changes, you resume rollouts for the Deplo", "language": "text", "type": "code-example_explanation", "context": "Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deplo", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5526"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx 3 3 3 3 1m", "completion": "This text code Deployments | Get the rollout status:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the rollout status:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5527"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 1m", "completion": "This text code Deployments | Pause by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Pause by running the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5528"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5529"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The 
output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5530"} {"prompt": "Explain this text code:\n\ndeployments \"nginx\"\nREVISION CHANGE-CAUSE\n1 ", "completion": "This text code Deployments | Get the rollout status to verify that the existing ReplicaSet has not changed:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the rollout status to verify that the existing ReplicaSet has not changed:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5531"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 2m", "completion": "This text code Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "language": "text", "type": "code-example_explanation", "context": "Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5532"} {"prompt": "Explain this shell code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5533"} {"prompt": "Explain this text code:\n\ndeployment.apps/nginx-deployment resource requirements updated", "completion": "This text code Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "language": "text", 
"type": "code-example_explanation", "context": "Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5534"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s", "completion": "nginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "language": "text", "type": "code-example", "context": "Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5535"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s\nnginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "completion": "This text code Deployments | Get the status of the latest rollout:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5536"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 28s", 
"completion": "This text code Deployments | A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "language": "text", "type": "code-example_explanation", "context": "Deployments | A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5537"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 of 3 updated replicas are available...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code Deployments | and the exit status from kubectl rollout is 0 (success):", "language": "text", "type": "code-example_explanation", "context": "Deployments | and the exit status from kubectl rollout is 0 (success):", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5538"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This shell code Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5539"} {"prompt": "Complete this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "language": "text", "type": "code-example", "context": "Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5540"} {"prompt": "Explain this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "completion": "This text code Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5541"} {"prompt": "Complete this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.", "completion": " reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "language": "text", "type": "code-example", "context": "Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": 
"k8s_00206_code_completion_5542"} {"prompt": "Explain this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.\n reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "completion": "This text code Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "language": "text", "type": "code-example_explanation", "context": "Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5543"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "language": "text", "type": "code-example", "context": "Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. 
If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5544"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "completion": "This text code Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "language": "text", "type": "code-example_explanation", "context": "Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5545"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason", "completion": " ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "language": "text", "type": "code-example", "context": "Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5546"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "completion": "This text code Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. 
Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "language": "text", "type": "code-example_explanation", "context": "Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5547"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\nerror: deployment \"nginx\" exceeded its progress deadline", "completion": "This text code Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "language": "text", "type": "code-example_explanation", "context": "Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5548"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5549"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n 
labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5550"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5551"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - 
containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5552"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5553"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code Deployments | 
.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5554"} {"prompt": "Complete this yaml code:\n\ncontrollers/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5555"} {"prompt": "Explain this yaml code:\n\ncontrollers/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5556"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: 
nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5557"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5558"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This bash code Deployments | Run kubectl get deployments to check if the Deployment was created.", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5559"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code Deployments | or use the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | or use the following command:", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5560"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5561"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5562"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5563"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5564"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5565"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5566"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5567"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5568"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code Deployments | The output is similar to this:", "language": 
"bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5569"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5570"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5571"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This bash code Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5572"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": 
"deployment-definition", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5573"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5574"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: 
metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5575"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5576"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_completion_5577"} {"prompt": 
"Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/", "id": "k8s_00206_code_explain_5578"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true] [--tls-server-name=example.com]", "completion": "This bash code kubectl config set-cluster | Path to certificate-authority file for the cluster entry in kubeconfig", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl config set-cluster | Path to certificate-authority file for the cluster entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-cluster/", "id": "k8s_00207_code_explain_5579"} {"prompt": "Complete this text code:\n\n# Set only the server field on the e2e cluster entry without touching other values\n kubectl config set-cluster e2e --server=https://1.2.3.4\n \n # Embed certificate authority data for the e2e cluster entry\n kubectl 
config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt\n \n # Disable cert checking for the e2e cluster entry", "completion": " kubectl config set-cluster e2e --insecure-skip-tls-verify=true\n \n # Set the custom TLS server name to use for validation for the e2e cluster entry\n kubectl config set-cluster e2e --tls-server-name=my-cluster-name\n \n # Set the proxy URL for the e2e cluster entry\n kubectl config set-cluster e2e --proxy-url=https://1.2.3.4", "language": "text", "type": "code-example", "context": "kubectl config set-cluster | Path to certificate-authority file for the cluster entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-cluster/", "id": "k8s_00207_code_completion_5580"} {"prompt": "Explain this text code:\n\n# Set only the server field on the e2e cluster entry without touching other values\n kubectl config set-cluster e2e --server=https://1.2.3.4\n \n # Embed certificate authority data for the e2e cluster entry\n kubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt\n \n # Disable cert checking for the e2e cluster entry\n kubectl config set-cluster e2e --insecure-skip-tls-verify=true\n \n # Set the custom TLS server name to use for validation for the e2e cluster entry\n kubectl config set-cluster e2e --tls-server-name=my-cluster-name\n \n # Set the proxy URL for the e2e cluster entry\n kubectl config set-cluster e2e --proxy-url=https://1.2.3.4", "completion": "This text code kubectl config set-cluster | Path to certificate-authority file for the cluster entry in kubeconfig", "language": "text", "type": "code-example_explanation", "context": "kubectl config set-cluster | Path to certificate-authority file for the cluster entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-cluster/", "id": "k8s_00207_code_explain_5581"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5585"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n 
player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "completion": "This yaml code ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5586"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5587"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5588"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5589"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5590"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5591"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml code 
ConfigMaps | The following command will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5592"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5593"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5594"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5595"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5596"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5597"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5598"}
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5608"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5609"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5610"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5611"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5612"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5613"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "completion": "This yaml code ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5614"} {"prompt": "Complete this yaml code:\n\nconfigmap/configure-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the 
environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME", "completion": " valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5615"} {"prompt": "Explain this yaml code:\n\nconfigmap/configure-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5616"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5617"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5618"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5619"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5620"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5621"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml code 
ConfigMaps | The following command will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5622"} {"prompt": "Complete this yaml code:\n\nconfigmap/env-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:", "completion": " containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5623"} {"prompt": "Explain this yaml code:\n\nconfigmap/env-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5624"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5625"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5626"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5627"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5628"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_completion_5629"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/configmap/", "id": "k8s_00208_code_explain_5630"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5631"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5632"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5633"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5634"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5635"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5636"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5637"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5638"} {"prompt": "Complete this yaml code:\n\napplication/deployment-sidecar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:", "completion": " containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5639"} {"prompt": "Explain this yaml code:\n\napplication/deployment-sidecar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5640"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5641"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5642"} {"prompt": "Complete this yaml code:\n\napplication/job/job-sidecar.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:", "completion": " - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5643"} {"prompt": "Explain this yaml code:\n\napplication/job/job-sidecar.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5644"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_completion_5645"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/", "id": "k8s_00209_code_explain_5646"} {"prompt": "Explain this text code:\n\n# Resume an already paused deployment\n kubectl rollout resume deployment/nginx", "completion": "This text code kubectl rollout resume | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout resume | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_resume/", "id": "k8s_00210_code_explain_5647"} {"prompt": "Explain this text code:\n\n# Resume an already paused deployment\n kubectl rollout resume deployment/nginx", "completion": "This text code kubectl rollout resume | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout resume | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_resume/", "id": "k8s_00210_code_explain_5648"} {"prompt": "Complete this text code:\n\nspec:\n securityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: \"NT AUTHORITY\\\\Local service\"\n hostNetwork: true\n containers:", "completion": " - name: test\n image: image1:latest\n command:\n - ping\n - -t\n - 127.0.0.1\n nodeSelector:\n \"kubernetes.io/os\": windows", "language": "text", "type": "code-example", "context": "Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. 
Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_completion_5649"} {"prompt": "Explain this text code:\n\nspec:\n securityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: \"NT AUTHORITY\\\\Local service\"\n hostNetwork: true\n containers:\n - name: test\n image: image1:latest\n command:\n - ping\n - -t\n - 127.0.0.1\n nodeSelector:\n \"kubernetes.io/os\": windows", "completion": "This text code Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "language": "text", "type": "code-example_explanation", "context": "Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. 
Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5650"} {"prompt": "Complete this text code:\n\nsecurityContext:\n windowsOptions:", "completion": " hostProcess: true\n runAsUserName: hpc-localgroup", "language": "text", "type": "code-example", "context": "Create a Windows HostProcess Pod | Schedule the pod!", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_completion_5651"} {"prompt": "Explain this text code:\n\nsecurityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: hpc-localgroup", "completion": "This text code Create a Windows HostProcess Pod | Schedule the pod!", "language": "text", "type": "code-example_explanation", "context": "Create a Windows HostProcess Pod | Schedule the pod!", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5652"} {"prompt": "Complete this yaml code:\n\nspec:\n securityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: \"NT AUTHORITY\\\\Local service\"\n hostNetwork: true\n containers:", "completion": " - name: test\n image: image1:latest\n command:\n - ping\n - -t\n - 127.0.0.1\n nodeSelector:\n \"kubernetes.io/os\": windows", "language": "yaml", "type": "kubernetes-manifest", "context": "Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. 
Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_completion_5653"} {"prompt": "Explain this yaml code:\n\nspec:\n securityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: \"NT AUTHORITY\\\\Local service\"\n hostNetwork: true\n containers:\n - name: test\n image: image1:latest\n command:\n - ping\n - -t\n - 127.0.0.1\n nodeSelector:\n \"kubernetes.io/os\": windows", "completion": "This yaml code Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. 
Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5654"} {"prompt": "Explain this text code:\n\n$CONTAINER_SANDBOX_MOUNT_POINT\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\", "completion": "This text code Create a Windows HostProcess Pod | Applications running inside the container can access volume mounts directly via the volumeMount's specified mountPath (just like Linux and non-HostProcess Windows containers).", "language": "text", "type": "code-example_explanation", "context": "Create a Windows HostProcess Pod | Applications running inside the container can access volume mounts directly via the volumeMount's specified mountPath (just like Linux and non-HostProcess Windows containers).", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5655"} {"prompt": "Explain this text code:\n\n$CONTAINER_SANDBOX_MOUNT_POINT\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\", "completion": "This text code Create a Windows HostProcess Pod | Resource limits (disk, memory, cpu count) are applied to the job and are job wide. For example, with a limit of 10MB set, the memory allocated for any HostProcess job object will be capped at 10MB. Th", "language": "text", "type": "code-example_explanation", "context": "Create a Windows HostProcess Pod | Resource limits (disk, memory, cpu count) are applied to the job and are job wide. For example, with a limit of 10MB set, the memory allocated for any HostProcess job object will be capped at 10MB. 
Th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5656"} {"prompt": "Complete this yaml code:\n\nsecurityContext:\n windowsOptions:", "completion": " hostProcess: true\n runAsUserName: hpc-localgroup", "language": "yaml", "type": "kubernetes-manifest", "context": "Create a Windows HostProcess Pod | Schedule the pod!", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_completion_5657"} {"prompt": "Explain this yaml code:\n\nsecurityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: hpc-localgroup", "completion": "This yaml code Create a Windows HostProcess Pod | Schedule the pod!", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Create a Windows HostProcess Pod | Schedule the pod!", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5658"} {"prompt": "Explain this text code:\n\nfailed to create user process token: failed to logon user: Access is denied.: unknown", "completion": "This text code Create a Windows HostProcess Pod | Ensure containerd is running as LocalSystem or LocalService service accounts. User accounts (even Administrator accounts) do not have permissions to create logon tokens for any of the supported user a", "language": "text", "type": "code-example_explanation", "context": "Create a Windows HostProcess Pod | Ensure containerd is running as LocalSystem or LocalService service accounts. 
User accounts (even Administrator accounts) do not have permissions to create logon tokens for any of the supported user a", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5659"} {"prompt": "Complete this text code:\n\nspec:\n securityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: \"NT AUTHORITY\\\\Local service\"\n hostNetwork: true\n containers:", "completion": " - name: test\n image: image1:latest\n command:\n - ping\n - -t\n - 127.0.0.1\n nodeSelector:\n \"kubernetes.io/os\": windows", "language": "text", "type": "code-example", "context": "Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_completion_5660"} {"prompt": "Explain this text code:\n\nspec:\n securityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: \"NT AUTHORITY\\\\Local service\"\n hostNetwork: true\n containers:\n - name: test\n image: image1:latest\n command:\n - ping\n - -t\n - 127.0.0.1\n nodeSelector:\n \"kubernetes.io/os\": windows", "completion": "This text code Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "language": "text", "type": "code-example_explanation", "context": "Create a Windows HostProcess Pod | HostProcess containers support the ability to mount volumes within the container volume space. 
Volume mount behavior differs depending on the version of containerd runtime used by on the node.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5661"} {"prompt": "Complete this text code:\n\nsecurityContext:\n windowsOptions:", "completion": " hostProcess: true\n runAsUserName: hpc-localgroup", "language": "text", "type": "code-example", "context": "Create a Windows HostProcess Pod | Schedule the pod!", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_completion_5662"} {"prompt": "Explain this text code:\n\nsecurityContext:\n windowsOptions:\n hostProcess: true\n runAsUserName: hpc-localgroup", "completion": "This text code Create a Windows HostProcess Pod | Schedule the pod!", "language": "text", "type": "code-example_explanation", "context": "Create a Windows HostProcess Pod | Schedule the pod!", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/", "id": "k8s_00211_code_explain_5663"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5664"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5665"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This bash code Secrets | The output looks like:", "language": "bash", "type": "kubectl-command_explanation", "context": "Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5666"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nempty-secret Opaque 0 2m6s", "completion": "This text code Secrets | The DATA column shows the number of data items stored in the Secret. 
In this case, 0 means you have created an empty Secret.", "language": "text", "type": "code-example_explanation", "context": "Secrets | The DATA column shows the number of data items stored in the Secret. In this case, 0 means you have created an empty Secret.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5667"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5668"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5669"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key 
exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5670"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5671"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "bash", "type": "kubectl-command", "context": "Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5672"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This bash code Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5673"} {"prompt": "Explain this bash code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This bash code Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "bash", "type": "kubectl-command_explanation", "context": "Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5674"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5675"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5676"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5677"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5678"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5679"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5680"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5681"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5682"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This bash code Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "bash", "type": "kubectl-command_explanation", "context": "Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5683"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5684"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: 
dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5685"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5686"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: 
\"true\"", "completion": "This yaml code Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5687"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5688"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5689"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5690"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5691"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5692"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5693"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This shell code Secrets | The output looks like:", "language": "shell", "type": "code-example_explanation", "context": "Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5694"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nempty-secret Opaque 0 2m6s", "completion": "This text code Secrets | The DATA column shows the number of data items stored in the Secret. 
In this case, 0 means you have created an empty Secret.", "language": "text", "type": "code-example_explanation", "context": "Secrets | The DATA column shows the number of data items stored in the Secret. In this case, 0 means you have created an empty Secret.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5695"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5696"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5697"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key 
exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5698"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5699"} {"prompt": "Complete this shell code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "shell", "type": "code-example", "context": "Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5700"} {"prompt": "Explain this shell code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This shell code Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "shell", "type": 
"code-example_explanation", "context": "Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5701"} {"prompt": "Explain this shell code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This shell code Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "shell", "type": "code-example_explanation", "context": "Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5702"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5703"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5704"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5705"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5706"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5707"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5708"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5709"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5710"} {"prompt": "Explain this shell code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This shell code Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "shell", "type": "code-example_explanation", "context": "Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5711"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5712"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: 
dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5713"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5714"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: 
\"true\"", "completion": "This yaml code Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5715"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5716"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5717"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5718"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5719"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5720"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5721"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5722"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5723"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This bash code Secrets | The output looks like:", "language": "bash", "type": "kubectl-command_explanation", "context": "Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5724"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5725"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the 
token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5726"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5727"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5728"} {"prompt": "Complete this yaml code:\n\nsecret/dockercfg-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5729"} {"prompt": "Explain this yaml code:\n\nsecret/dockercfg-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5730"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5731"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5732"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "bash", "type": "kubectl-command", "context": "Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5733"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This bash code Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "bash", "type": "kubectl-command_explanation", "context": "Secrets | This command creates a Secret of type 
kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5734"} {"prompt": "Explain this bash code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This bash code Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "bash", "type": "kubectl-command_explanation", "context": "Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5735"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5736"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5737"} {"prompt": "Complete this yaml code:\n\nsecret/basicauth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5738"} {"prompt": "Explain this yaml code:\n\nsecret/basicauth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5739"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5740"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5741"} {"prompt": "Complete this yaml code:\n\nsecret/ssh-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5742"} {"prompt": "Explain this yaml code:\n\nsecret/ssh-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5743"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5744"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5745"} {"prompt": "Complete this yaml code:\n\nsecret/tls-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5746"} {"prompt": "Explain this yaml code:\n\nsecret/tls-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5747"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5748"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
"However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5749"}
{"prompt": "Explain this bash code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This bash code Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be PEM encoded and must match the given private key for --key.", "language": "bash", "type": "kubectl-command_explanation", "context": "Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5750"}
{"prompt": "Complete this yaml code:\n\nsecret/bootstrap-token-secret-base64.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5751"}
{"prompt": "Explain this yaml code:\n\nsecret/bootstrap-token-secret-base64.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5752"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | A bootstrap token Secret has the following keys specified under data:", "source":
"https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5754"} {"prompt": "Complete this yaml code:\n\nsecret/bootstrap-token-secret-literal.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "stringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5755"} {"prompt": "Explain this yaml code:\n\nsecret/bootstrap-token-secret-literal.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | There are several options to create a Secret:", "source": 
"https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5756"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5757"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": 
"k8s_00213_code_explain_5758"} {"prompt": "Complete this yaml code:\n\nsecret/optional-secret.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5759"} {"prompt": "Explain this yaml code:\n\nsecret/optional-secret.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5760"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5761"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5762"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_completion_5763"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/secret/", "id": "k8s_00213_code_explain_5764"} {"prompt": "Complete this text code:\n\n...\niptables:", "completion": " minSyncPeriod: 1s\n syncPeriod: 30s\n...", "language": "text", "type": "code-example", 
"context": "Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. If it is 0s, then kube-proxy will always immediately synchronize the rules every", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5765"} {"prompt": "Explain this text code:\n\n...\niptables:\n minSyncPeriod: 1s\n syncPeriod: 30s\n...", "completion": "This text code Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. If it is 0s, then kube-proxy will always immediately synchronize the rules every", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. If it is 0s, then kube-proxy will always immediately synchronize the rules every", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5766"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 2001:db8:1:2::1 443/TCP 3d1h", "completion": "This text code Virtual IPs and Service Proxies | Kubernetes also allow users to dynamically define the available IP ranges for Services using ServiceCIDR objects. During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Kubernetes also allow users to dynamically define the available IP ranges for Services using ServiceCIDR objects. 
During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5767"} {"prompt": "Explain this text code:\n\nNAME PARENTREF\n2001:db8:1:2::1 services/default/kubernetes\n2001:db8:1:2::a services/kube-system/kube-dns", "completion": "This text code Virtual IPs and Service Proxies | Kubernetes also allow users to dynamically define the available IP ranges for Services using ServiceCIDR objects. During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Kubernetes also allow users to dynamically define the available IP ranges for Services using ServiceCIDR objects. During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5768"} {"prompt": "Explain this text code:\n\nNAME CIDRS AGE\nkubernetes 10.96.0.0/28 17m", "completion": "This text code Virtual IPs and Service Proxies | Users can create or delete new ServiceCIDR objects to manage the available IP ranges for Services:", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Users can create or delete new ServiceCIDR objects to manage the available IP ranges for Services:", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5769"} {"prompt": "Complete this yaml code:\n\ncat <<'EOF' | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: ServiceCIDR\nmetadata:", "completion": " name: newservicecidr\nspec:\n cidrs:\n - 10.96.0.0/24\nEOF", "language": "yaml", "type": "service-definition", "context": "Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service 
CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5770"} {"prompt": "Explain this yaml code:\n\ncat <<'EOF' | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: ServiceCIDR\nmetadata:\n name: newservicecidr\nspec:\n cidrs:\n - 10.96.0.0/24\nEOF", "completion": "This yaml code Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "language": "yaml", "type": "service-definition_explanation", "context": "Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5771"} {"prompt": "Explain this text code:\n\nNAME CIDRS AGE\nkubernetes 10.96.0.0/28 17m\nnewservicecidr 10.96.0.0/24 7m", "completion": "This text code Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5772"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: 
admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"servicecidrs-default\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"networking.k8s.io\"]\n apiVersions: [\"v1\",\"v1beta1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"servicecidrs\"]\n matchConditions:\n - name: 'exclude-default-servicecidr'", "completion": " expression: \"object.metadata.name != 'kubernetes'\"\n variables:\n - name: allowed\n expression: \"['10.96.0.0/16','2001:db8::/64']\"\n validations:\n - expression: \"object.spec.cidrs.all(i , variables.allowed.exists(j , cidr(j).containsCIDR(i)))\"\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"servicecidrs-binding\"\nspec:\n policyName: \"servicecidrs-default\"\n validationActions: [Deny,Audit]\n---", "language": "yaml", "type": "kubernetes-manifest", "context": "Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). 
That formula means", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5773"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"servicecidrs-default\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"networking.k8s.io\"]\n apiVersions: [\"v1\",\"v1beta1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"servicecidrs\"]\n matchConditions:\n - name: 'exclude-default-servicecidr'\n expression: \"object.metadata.name != 'kubernetes'\"\n variables:\n - name: allowed\n expression: \"['10.96.0.0/16','2001:db8::/64']\"\n validations:\n - expression: \"object.spec.cidrs.all(i , variables.allowed.exists(j , cidr(j).containsCIDR(i)))\"\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"servicecidrs-binding\"\nspec:\n policyName: \"servicecidrs-default\"\n validationActions: [Deny,Audit]\n---", "completion": "This yaml code Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). That formula means", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). 
That formula means", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5774"} {"prompt": "Complete this yaml code:\n\n...\niptables:", "completion": " minSyncPeriod: 1s\n syncPeriod: 30s\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. If it is 0s, then kube-proxy will always immediately synchronize the rules every", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5775"} {"prompt": "Explain this yaml code:\n\n...\niptables:\n minSyncPeriod: 1s\n syncPeriod: 30s\n...", "completion": "This yaml code Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. If it is 0s, then kube-proxy will always immediately synchronize the rules every", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. 
If it is 0s, then kube-proxy will always immediately synchronize the rules every", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5776"} {"prompt": "Explain this text code:\n\niptables_localhost_nodeports_accepted_packets_total", "completion": "This text code Virtual IPs and Service Proxies | NodePort interaction with firewalls: The iptables mode of kube-proxy tries to be compatible with overly-aggressive firewalls; for each type: NodePort service, it will add rules to accept inbound traff", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | NodePort interaction with firewalls: The iptables mode of kube-proxy tries to be compatible with overly-aggressive firewalls; for each type: NodePort service, it will add rules to accept inbound traff", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5777"} {"prompt": "Explain this text code:\n\n.spec.sessionAffinityConfig.clientIP.timeoutSeconds", "completion": "This text code Virtual IPs and Service Proxies | Unlike Pod IP addresses, which actually route to a fixed destination, Service IPs are not actually answered by a single host. Instead, kube-proxy uses packet processing logic (such as Linux iptables)", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Unlike Pod IP addresses, which actually route to a fixed destination, Service IPs are not actually answered by a single host. 
Instead, kube-proxy uses packet processing logic (such as Linux iptables)", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5778"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 2001:db8:1:2::1 443/TCP 3d1h", "completion": "This text code Virtual IPs and Service Proxies | Kubernetes also allows users to dynamically define the available IP ranges for Services using ServiceCIDR objects. During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Kubernetes also allows users to dynamically define the available IP ranges for Services using ServiceCIDR objects. During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5779"} {"prompt": "Explain this text code:\n\nNAME PARENTREF\n2001:db8:1:2::1 services/default/kubernetes\n2001:db8:1:2::a services/kube-system/kube-dns", "completion": "This text code Virtual IPs and Service Proxies | Kubernetes also allows users to dynamically define the available IP ranges for Services using ServiceCIDR objects. During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Kubernetes also allows users to dynamically define the available IP ranges for Services using ServiceCIDR objects. 
During bootstrap, a default ServiceCIDR object named kubernetes is created from the va", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5780"} {"prompt": "Explain this text code:\n\nNAME CIDRS AGE\nkubernetes 10.96.0.0/28 17m", "completion": "This text code Virtual IPs and Service Proxies | Users can create or delete new ServiceCIDR objects to manage the available IP ranges for Services:", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Users can create or delete new ServiceCIDR objects to manage the available IP ranges for Services:", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5781"} {"prompt": "Complete this shell code:\n\ncat <<'EOF' | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: ServiceCIDR\nmetadata:", "completion": " name: newservicecidr\nspec:\n cidrs:\n - 10.96.0.0/24\nEOF", "language": "shell", "type": "code-example", "context": "Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5782"} {"prompt": "Explain this shell code:\n\ncat <<'EOF' | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: ServiceCIDR\nmetadata:\n name: newservicecidr\nspec:\n cidrs:\n - 10.96.0.0/24\nEOF", "completion": "This shell code Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "language": "shell", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Distributions or administrators of 
Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5783"} {"prompt": "Explain this text code:\n\nNAME CIDRS AGE\nkubernetes 10.96.0.0/28 17m\nnewservicecidr 10.96.0.0/24 7m", "completion": "This text code Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5784"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"servicecidrs-default\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"networking.k8s.io\"]\n apiVersions: [\"v1\",\"v1beta1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"servicecidrs\"]\n matchConditions:\n - name: 'exclude-default-servicecidr'", "completion": " expression: \"object.metadata.name != 'kubernetes'\"\n variables:\n - name: allowed\n expression: \"['10.96.0.0/16','2001:db8::/64']\"\n validations:\n - expression: \"object.spec.cidrs.all(i , variables.allowed.exists(j , cidr(j).containsCIDR(i)))\"\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"servicecidrs-binding\"\nspec:\n policyName: \"servicecidrs-default\"\n validationActions: 
[Deny,Audit]\n---", "language": "yaml", "type": "kubernetes-manifest", "context": "Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). That formula means", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5785"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"servicecidrs-default\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"networking.k8s.io\"]\n apiVersions: [\"v1\",\"v1beta1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"servicecidrs\"]\n matchConditions:\n - name: 'exclude-default-servicecidr'\n expression: \"object.metadata.name != 'kubernetes'\"\n variables:\n - name: allowed\n expression: \"['10.96.0.0/16','2001:db8::/64']\"\n validations:\n - expression: \"object.spec.cidrs.all(i , variables.allowed.exists(j , cidr(j).containsCIDR(i)))\"\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"servicecidrs-binding\"\nspec:\n policyName: \"servicecidrs-default\"\n validationActions: [Deny,Audit]\n---", "completion": "This yaml code Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). That formula means", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). 
That formula means", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5786"} {"prompt": "Complete this text code:\n\n...\niptables:", "completion": " minSyncPeriod: 1s\n syncPeriod: 30s\n...", "language": "text", "type": "code-example", "context": "Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. If it is 0s, then kube-proxy will always immediately synchronize the rules every", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5787"} {"prompt": "Explain this text code:\n\n...\niptables:\n minSyncPeriod: 1s\n syncPeriod: 30s\n...", "completion": "This text code Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. If it is 0s, then kube-proxy will always immediately synchronize the rules every", "language": "text", "type": "code-example_explanation", "context": "Virtual IPs and Service Proxies | The minSyncPeriod parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel. 
If it is 0s, then kube-proxy will always immediately synchronize the rules every", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5788"} {"prompt": "Complete this yaml code:\n\ncat <<'EOF' | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: ServiceCIDR\nmetadata:", "completion": " name: newservicecidr\nspec:\n cidrs:\n - 10.96.0.0/24\nEOF", "language": "yaml", "type": "service-definition", "context": "Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5789"} {"prompt": "Explain this yaml code:\n\ncat <<'EOF' | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: ServiceCIDR\nmetadata:\n name: newservicecidr\nspec:\n cidrs:\n - 10.96.0.0/24\nEOF", "completion": "This yaml code Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "language": "yaml", "type": "service-definition_explanation", "context": "Virtual IPs and Service Proxies | Distributions or administrators of Kubernetes clusters may want to control that new Service CIDRs added to the cluster do not overlap with other networks on the cluster, that only belong to a specif", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5790"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"servicecidrs-default\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"networking.k8s.io\"]\n apiVersions: 
[\"v1\",\"v1beta1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"servicecidrs\"]\n matchConditions:\n - name: 'exclude-default-servicecidr'", "completion": " expression: \"object.metadata.name != 'kubernetes'\"\n variables:\n - name: allowed\n expression: \"['10.96.0.0/16','2001:db8::/64']\"\n validations:\n - expression: \"object.spec.cidrs.all(i , variables.allowed.exists(j , cidr(j).containsCIDR(i)))\"\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"servicecidrs-binding\"\nspec:\n policyName: \"servicecidrs-default\"\n validationActions: [Deny,Audit]\n---", "language": "yaml", "type": "kubernetes-manifest", "context": "Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). That formula means", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_completion_5791"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"servicecidrs-default\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"networking.k8s.io\"]\n apiVersions: [\"v1\",\"v1beta1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"servicecidrs\"]\n matchConditions:\n - name: 'exclude-default-servicecidr'\n expression: \"object.metadata.name != 'kubernetes'\"\n variables:\n - name: allowed\n expression: \"['10.96.0.0/16','2001:db8::/64']\"\n validations:\n - expression: \"object.spec.cidrs.all(i , variables.allowed.exists(j , cidr(j).containsCIDR(i)))\"\n---\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"servicecidrs-binding\"\nspec:\n policyName: \"servicecidrs-default\"\n validationActions: [Deny,Audit]\n---", "completion": "This yaml code 
Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). That formula means", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Virtual IPs and Service Proxies | Kubernetes divides the ClusterIP range into two bands, based on the size of the configured service-cluster-ip-range by using the following formula min(max(16, cidrSize / 16), 256). That formula means", "source": "https://kubernetes.io/docs/reference/networking/virtual-ips/", "id": "k8s_00214_code_explain_5792"} {"prompt": "Explain this text code:\n\nhas(object.namex) ? object.namex == 'special' : request.name == 'special'", "completion": "This text code Common Expression Language in Kubernetes | Also see: CEL types, OpenAPI types, Kubernetes Structural Schemas.", "language": "text", "type": "code-example_explanation", "context": "Common Expression Language in Kubernetes | Also see: CEL types, OpenAPI types, Kubernetes Structural Schemas.", "source": "https://kubernetes.io/docs/reference/using-api/cel/", "id": "k8s_00215_code_explain_5793"} {"prompt": "Explain this text code:\n\nAuthorizer.group(string).resource(string) ResourceCheck", "completion": "This text code Common Expression Language in Kubernetes | Non-resource authorization performed are used as follows:", "language": "text", "type": "code-example_explanation", "context": "Common Expression Language in Kubernetes | Non-resource authorization performed are used as follows:", "source": "https://kubernetes.io/docs/reference/using-api/cel/", "id": "k8s_00215_code_explain_5794"} {"prompt": "Explain this text code:\n\nAuthorizer.serviceAccount(namespace string, name string) Authorizer", "completion": "This text code Common Expression Language in Kubernetes | For CEL expressions in the API where a variable of type Authorizer is available, field and label selectors can be 
included in authorization checks.", "language": "text", "type": "code-example_explanation", "context": "Common Expression Language in Kubernetes | For CEL expressions in the API where a variable of type Authorizer is available, field and label selectors can be included in authorization checks.", "source": "https://kubernetes.io/docs/reference/using-api/cel/", "id": "k8s_00215_code_explain_5795"} {"prompt": "Explain this cel code:\n\nhas(object.namex) ? object.namex == 'special' : request.name == 'special'", "completion": "This cel code Common Expression Language in Kubernetes | Also see: CEL types, OpenAPI types, Kubernetes Structural Schemas.", "language": "cel", "type": "code-example_explanation", "context": "Common Expression Language in Kubernetes | Also see: CEL types, OpenAPI types, Kubernetes Structural Schemas.", "source": "https://kubernetes.io/docs/reference/using-api/cel/", "id": "k8s_00215_code_explain_5796"} {"prompt": "Explain this bash code:\n\nkubectl create namespace NAME [--dry-run=server|client|none]", "completion": "This bash code kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/", "id": "k8s_00217_code_explain_5797"} {"prompt": "Explain this text code:\n\n# Create a new namespace named my-namespace\n kubectl create namespace my-namespace", "completion": "This text code kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/", "id": "k8s_00217_code_explain_5798"} {"prompt": "Explain this bash code:\n\nkubectl create namespace NAME [--dry-run=server|client|none]", "completion": "This bash code kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/", "id": "k8s_00217_code_explain_5799"} {"prompt": "Explain this text code:\n\n# Create a new namespace named my-namespace\n kubectl create namespace my-namespace", "completion": "This text code kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create namespace | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/", "id": "k8s_00217_code_explain_5800"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5801"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "completion": "This text code Install and Set Up kubectl on Linux | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5802"} {"prompt": "Explain this text code:\n\ncurl -LO https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl", "completion": "This text code Install and Set Up kubectl on Linux | And for Linux ARM64, type:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | And for Linux ARM64, type:", "source": 
"https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5803"} {"prompt": "Explain this text code:\n\ncurl -LO https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl", "completion": "This text code Install and Set Up kubectl on Linux | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5804"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5805"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | Validate the kubectl binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5806"} {"prompt": "Explain this text code:\n\necho \"$(cat kubectl.sha256) kubectl\" | sha256sum --check", "completion": "This text code Install and Set Up kubectl on Linux | If valid, the 
output is:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5807"} {"prompt": "Explain this bash code:\n\nkubectl: FAILED\nsha256sum: WARNING: 1 computed checksum did NOT match", "completion": "This bash code Install and Set Up kubectl on Linux | Install kubectl", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5808"} {"prompt": "Explain this text code:\n\nsudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl", "completion": "This text code Install and Set Up kubectl on Linux | If you do not have root access on the target system, you can still install kubectl to the ~/.local/bin directory:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If you do not have root access on the target system, you can still install kubectl to the ~/.local/bin directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5809"} {"prompt": "Complete this text code:\n\nchmod +x kubectl\nmkdir -p ~/.local/bin", "completion": "mv ./kubectl ~/.local/bin/kubectl\n# and then append (or prepend) ~/.local/bin to $PATH", "language": "text", "type": "code-example", "context": "Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5810"} {"prompt": "Explain this text code:\n\nchmod +x kubectl\nmkdir -p ~/.local/bin\nmv ./kubectl ~/.local/bin/kubectl\n# and then append (or prepend) ~/.local/bin to $PATH", "completion": "This text code Install and Set Up 
kubectl on Linux | Test to ensure the version you installed is up-to-date:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5811"} {"prompt": "Explain this text code:\n\nsudo apt-get update\n# apt-transport-https may be a dummy package; if so, you can skip that package\nsudo apt-get install -y apt-transport-https ca-certificates curl gnupg", "completion": "This text code Install and Set Up kubectl on Linux | Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5812"} {"prompt": "Complete this text code:\n\n# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.\n# sudo mkdir -p -m 755 /etc/apt/keyrings", "completion": "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\nsudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring", "language": "text", "type": "code-example", "context": "Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. 
If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5813"} {"prompt": "Explain this text code:\n\n# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.\n# sudo mkdir -p -m 755 /etc/apt/keyrings\ncurl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\nsudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring", "completion": "This text code Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. 
If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5814"} {"prompt": "Explain this text code:\n\n# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list\necho 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list\nsudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # helps tools such as command-not-found to work correctly", "completion": "This text code Install and Set Up kubectl on Linux | Update apt package index, then install kubectl:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Update apt package index, then install kubectl:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5815"} {"prompt": "Explain this text code:\n\nsudo apt-get update\nsudo apt-get install -y kubectl", "completion": "This text code Install and Set Up kubectl on Linux | Add the Kubernetes yum repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below.# This overwrites any existing configur", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Add the Kubernetes yum repository. 
If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below.# This overwrites any existing configur", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5816"} {"prompt": "Complete this text code:\n\nNew repository or package signing key received:\n\nRepository: Kubernetes\nKey Fingerprint: 1111 2222 3333 4444 5555 6666 7777 8888 9999 AAAA\nKey Name: isv:kubernetes OBS Project \nKey Algorithm: RSA 2048\nKey Created: Thu 25 Aug 2022 01:21:11 PM -03\nKey Expires: Sat 02 Nov 2024 01:21:11 PM -03 (expires in 85 days)\nRpm Name: gpg-pubkey-9a296436-6307a177\n", "completion": "Note: Signing data enables the recipient to verify that no modifications occurred after the data\nwere signed. Accepting data with no, wrong or unknown signature can lead to a corrupted system\nand in extreme cases even to a system compromise.\n\nNote: A GPG pubkey is clearly identified by its fingerprint. Do not rely on the key's name. If\nyou are not sure whether the presented key is authentic, ask the repository provider or check\ntheir web site. Many providers maintain a web page showing the fingerprints of the GPG keys they\nare using.\n\nDo you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r): a", "language": "go", "type": "code-example", "context": "Install and Set Up kubectl on Linux | Install kubectl using zypper:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5821"} {"prompt": "Explain this go code:\n\nNew repository or package signing key received:\n\nRepository: Kubernetes\nKey Fingerprint: 1111 2222 3333 4444 5555 6666 7777 8888 9999 AAAA\nKey Name: isv:kubernetes OBS Project \nKey Algorithm: RSA 2048\nKey Created: Thu 25 Aug 2022 01:21:11 PM -03\nKey Expires: Sat 02 Nov 2024 01:21:11 PM -03 (expires in 85 days)\nRpm Name: gpg-pubkey-9a296436-6307a177\n\nNote: Signing data enables the recipient to verify that no modifications occurred after the data\nwere signed. 
Accepting data with no, wrong or unknown signature can lead to a corrupted system\nand in extreme cases even to a system compromise.\n\nNote: A GPG pubkey is clearly identified by its fingerprint. Do not rely on the key's name. If\nyou are not sure whether the presented key is authentic, ask the repository provider or check\ntheir web site. Many providers maintain a web page showing the fingerprints of the GPG keys they\nare using.\n\nDo you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r): a", "completion": "This go code Install and Set Up kubectl on Linux | Install kubectl using zypper:", "language": "go", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl using zypper:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5822"} {"prompt": "Explain this text code:\n\nsnap install kubectl --classic\nkubectl version --client", "completion": "This text code Install and Set Up kubectl on Linux | If you are on Linux and using Homebrew package manager, kubectl is available for installation.brew install kubectl kubectl version --client", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If you are on Linux and using Homebrew package manager, kubectl is available for installation.brew install kubectl kubectl version --client", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5823"} {"prompt": "Explain this text code:\n\nThe connection to the server was refused - did you specify the right host or port?", "completion": "This text code Install and Set Up kubectl on Linux | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "language": "text", "type": "code-example_explanation", "context": "Install 
and Set Up kubectl on Linux | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5824"} {"prompt": "Explain this text code:\n\necho 'source <(kubectl completion bash)' >>~/.bashrc", "completion": "This text code Install and Set Up kubectl on Linux | kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null sudo chmod a+r /etc/bash_completion.d/kubectl", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null sudo chmod a+r /etc/bash_completion.d/kubectl", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5825"} {"prompt": "Explain this bash code:\n\nkubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null\nsudo chmod a+r /etc/bash_completion.d/kubectl", "completion": "This bash code Install and Set Up kubectl on Linux | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Linux | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5826"} {"prompt": "Explain this text code:\n\necho 'alias k=kubectl' >>~/.bashrc\necho 'complete -o default -F __start_kubectl k' >>~/.bashrc", "completion": "This text code Install and Set Up kubectl on Linux | Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. 
To enable bash autocompletion in current session of shell, source the ~/.bashrc file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. To enable bash autocompletion in current session of shell, source the ~/.bashrc file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5827"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5828"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "completion": "This text code Install and Set Up kubectl on Linux | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5829"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s 
https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5830"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | Validate the kubectl-convert binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5831"} {"prompt": "Explain this text code:\n\necho \"$(cat kubectl-convert.sha256) kubectl-convert\" | sha256sum --check", "completion": "This text code Install and Set Up kubectl on Linux | If valid, the output is:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5832"} {"prompt": "Explain this bash code:\n\nkubectl-convert: FAILED\nsha256sum: WARNING: 1 computed checksum did NOT match", "completion": "This bash code Install and Set Up kubectl on Linux | Install kubectl-convert", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl-convert", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5833"} {"prompt": "Explain this text code:\n\nsudo install -o root -g 
root -m 0755 kubectl-convert /usr/local/bin/kubectl-convert", "completion": "This text code Install and Set Up kubectl on Linux | Verify plugin is successfully installed", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Verify plugin is successfully installed", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5834"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl\"", "completion": "This bash code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5835"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "completion": "This bash code Install and Set Up kubectl on Linux | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5836"} {"prompt": "Explain this bash code:\n\ncurl -LO https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl", "completion": "This bash code Install and Set Up 
kubectl on Linux | And for Linux ARM64, type:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | And for Linux ARM64, type:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5837"} {"prompt": "Explain this bash code:\n\ncurl -LO https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl", "completion": "This bash code Install and Set Up kubectl on Linux | Validate the binary (optional)", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5838"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256\"", "completion": "This bash code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5839"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "completion": "This bash code Install and Set Up kubectl on Linux | Validate the kubectl binary against the checksum file:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": 
"k8s_00218_code_explain_5840"} {"prompt": "Explain this bash code:\n\necho \"$(cat kubectl.sha256) kubectl\" | sha256sum --check", "completion": "This bash code Install and Set Up kubectl on Linux | If valid, the output is:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5841"} {"prompt": "Explain this console code:\n\nkubectl: FAILED\nsha256sum: WARNING: 1 computed checksum did NOT match", "completion": "This console code Install and Set Up kubectl on Linux | Install kubectl", "language": "console", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5842"} {"prompt": "Explain this bash code:\n\nsudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl", "completion": "This bash code Install and Set Up kubectl on Linux | If you do not have root access on the target system, you can still install kubectl to the ~/.local/bin directory:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | If you do not have root access on the target system, you can still install kubectl to the ~/.local/bin directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5843"} {"prompt": "Complete this bash code:\n\nchmod +x kubectl\nmkdir -p ~/.local/bin", "completion": "mv ./kubectl ~/.local/bin/kubectl\n# and then append (or prepend) ~/.local/bin to $PATH", "language": "bash", "type": "shell-command", "context": "Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5844"} 
{"prompt": "Explain this bash code:\n\nchmod +x kubectl\nmkdir -p ~/.local/bin\nmv ./kubectl ~/.local/bin/kubectl\n# and then append (or prepend) ~/.local/bin to $PATH", "completion": "This bash code Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5845"} {"prompt": "Explain this shell code:\n\nsudo apt-get update\n# apt-transport-https may be a dummy package; if so, you can skip that package\nsudo apt-get install -y apt-transport-https ca-certificates curl gnupg", "completion": "This shell code Install and Set Up kubectl on Linux | Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:", "language": "shell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Download the public signing key for the Kubernetes package repositories. 
The same signing key is used for all repositories so you can disregard the version in the URL:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5846"} {"prompt": "Complete this shell code:\n\n# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.\n# sudo mkdir -p -m 755 /etc/apt/keyrings", "completion": "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\nsudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring", "language": "shell", "type": "code-example", "context": "Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5847"} {"prompt": "Explain this shell code:\n\n# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.\n# sudo mkdir -p -m 755 /etc/apt/keyrings\ncurl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\nsudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring", "completion": "This shell code Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "language": "shell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. 
If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5848"} {"prompt": "Explain this shell code:\n\n# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list\necho 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list\nsudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # helps tools such as command-not-found to work correctly", "completion": "This shell code Install and Set Up kubectl on Linux | Update apt package index, then install kubectl:", "language": "shell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Update apt package index, then install kubectl:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5849"} {"prompt": "Explain this shell code:\n\nsudo apt-get update\nsudo apt-get install -y kubectl", "completion": "This shell code Install and Set Up kubectl on Linux | Add the Kubernetes yum repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below.# This overwrites any existing configur", "language": "shell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Add the Kubernetes yum repository. 
If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below.# This overwrites any existing configur", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5850"} {"prompt": "Complete this bash code:\n\nNew repository or package signing key received:\n\nRepository: Kubernetes\nKey Fingerprint: 1111 2222 3333 4444 5555 6666 7777 8888 9999 AAAA\nKey Name: isv:kubernetes OBS Project \nKey Algorithm: RSA 2048\nKey Created: Thu 25 Aug 2022 01:21:11 PM -03\nKey Expires: Sat 02 Nov 2024 01:21:11 PM -03 (expires in 85 days)\nRpm Name: gpg-pubkey-9a296436-6307a177\n", "completion": "Note: Signing data enables the recipient to verify that no modifications occurred after the data\nwere signed. Accepting data with no, wrong or unknown signature can lead to a corrupted system\nand in extreme cases even to a system compromise.\n\nNote: A GPG pubkey is clearly identified by its fingerprint. Do not rely on the key's name. If\nyou are not sure whether the presented key is authentic, ask the repository provider or check\ntheir web site. Many providers maintain a web page showing the fingerprints of the GPG keys they\nare using.\n\nDo you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r): a", "language": "go", "type": "code-example", "context": "Install and Set Up kubectl on Linux | Install kubectl using zypper:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5855"} {"prompt": "Explain this go code:\n\nNew repository or package signing key received:\n\nRepository: Kubernetes\nKey Fingerprint: 1111 2222 3333 4444 5555 6666 7777 8888 9999 AAAA\nKey Name: isv:kubernetes OBS Project \nKey Algorithm: RSA 2048\nKey Created: Thu 25 Aug 2022 01:21:11 PM -03\nKey Expires: Sat 02 Nov 2024 01:21:11 PM -03 (expires in 85 days)\nRpm Name: gpg-pubkey-9a296436-6307a177\n\nNote: Signing data enables the recipient to verify that no modifications occurred after the data\nwere signed. 
Accepting data with no, wrong or unknown signature can lead to a corrupted system\nand in extreme cases even to a system compromise.\n\nNote: A GPG pubkey is clearly identified by its fingerprint. Do not rely on the key's name. If\nyou are not sure whether the presented key is authentic, ask the repository provider or check\ntheir web site. Many providers maintain a web page showing the fingerprints of the GPG keys they\nare using.\n\nDo you want to reject the key, trust temporarily, or trust always? [r/t/a/?] (r): a", "completion": "This go code Install and Set Up kubectl on Linux | Install kubectl using zypper:", "language": "go", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl using zypper:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5856"} {"prompt": "Explain this shell code:\n\nsnap install kubectl --classic\nkubectl version --client", "completion": "This shell code Install and Set Up kubectl on Linux | If you are on Linux and using Homebrew package manager, kubectl is available for installation.brew install kubectl kubectl version --client", "language": "shell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If you are on Linux and using Homebrew package manager, kubectl is available for installation.brew install kubectl kubectl version --client", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5857"} {"prompt": "Explain this text code:\n\nThe connection to the server was refused - did you specify the right host or port?", "completion": "This text code Install and Set Up kubectl on Linux | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "language": "text", "type": "code-example_explanation", "context": "Install 
and Set Up kubectl on Linux | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5858"} {"prompt": "Explain this bash code:\n\necho 'source <(kubectl completion bash)' >>~/.bashrc", "completion": "This bash code Install and Set Up kubectl on Linux | kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null sudo chmod a+r /etc/bash_completion.d/kubectl", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null sudo chmod a+r /etc/bash_completion.d/kubectl", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5859"} {"prompt": "Explain this bash code:\n\nkubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null\nsudo chmod a+r /etc/bash_completion.d/kubectl", "completion": "This bash code Install and Set Up kubectl on Linux | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Linux | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5860"} {"prompt": "Explain this bash code:\n\necho 'alias k=kubectl' >>~/.bashrc\necho 'complete -o default -F __start_kubectl k' >>~/.bashrc", "completion": "This bash code Install and Set Up kubectl on Linux | Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. 
To enable bash autocompletion in current session of shell, source the ~/.bashrc file:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. To enable bash autocompletion in current session of shell, source the ~/.bashrc file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5861"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert\"", "completion": "This bash code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5862"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "completion": "This bash code Install and Set Up kubectl on Linux | Validate the binary (optional)", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5863"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256\"", "completion": "This bash code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s 
https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5864"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "completion": "This bash code Install and Set Up kubectl on Linux | Validate the kubectl-convert binary against the checksum file:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5865"} {"prompt": "Explain this bash code:\n\necho \"$(cat kubectl-convert.sha256) kubectl-convert\" | sha256sum --check", "completion": "This bash code Install and Set Up kubectl on Linux | If valid, the output is:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5866"} {"prompt": "Explain this console code:\n\nkubectl-convert: FAILED\nsha256sum: WARNING: 1 computed checksum did NOT match", "completion": "This console code Install and Set Up kubectl on Linux | Install kubectl-convert", "language": "console", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl-convert", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5867"} {"prompt": "Explain this bash code:\n\nsudo install -o 
root -g root -m 0755 kubectl-convert /usr/local/bin/kubectl-convert", "completion": "This bash code Install and Set Up kubectl on Linux | Verify plugin is successfully installed", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on Linux | Verify plugin is successfully installed", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5868"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5869"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl\"", "completion": "This text code Install and Set Up kubectl on Linux | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5870"} {"prompt": "Explain this text code:\n\ncurl -LO https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl", "completion": "This text code Install and 
Set Up kubectl on Linux | And for Linux ARM64, type:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | And for Linux ARM64, type:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5871"} {"prompt": "Explain this text code:\n\ncurl -LO https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl", "completion": "This text code Install and Set Up kubectl on Linux | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5872"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5873"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | Validate the kubectl binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": 
"k8s_00218_code_explain_5874"} {"prompt": "Explain this text code:\n\necho \"$(cat kubectl.sha256) kubectl\" | sha256sum --check", "completion": "This text code Install and Set Up kubectl on Linux | If valid, the output is:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5875"} {"prompt": "Explain this bash code:\n\nkubectl: FAILED\nsha256sum: WARNING: 1 computed checksum did NOT match", "completion": "This bash code Install and Set Up kubectl on Linux | Install kubectl", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5876"} {"prompt": "Explain this text code:\n\nsudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl", "completion": "This text code Install and Set Up kubectl on Linux | If you do not have root access on the target system, you can still install kubectl to the ~/.local/bin directory:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If you do not have root access on the target system, you can still install kubectl to the ~/.local/bin directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5877"} {"prompt": "Complete this text code:\n\nchmod +x kubectl\nmkdir -p ~/.local/bin", "completion": "mv ./kubectl ~/.local/bin/kubectl\n# and then append (or prepend) ~/.local/bin to $PATH", "language": "text", "type": "code-example", "context": "Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5878"} {"prompt": 
"Explain this text code:\n\nchmod +x kubectl\nmkdir -p ~/.local/bin\nmv ./kubectl ~/.local/bin/kubectl\n# and then append (or prepend) ~/.local/bin to $PATH", "completion": "This text code Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Test to ensure the version you installed is up-to-date:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5879"} {"prompt": "Explain this text code:\n\nsudo apt-get update\n# apt-transport-https may be a dummy package; if so, you can skip that package\nsudo apt-get install -y apt-transport-https ca-certificates curl gnupg", "completion": "This text code Install and Set Up kubectl on Linux | Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Download the public signing key for the Kubernetes package repositories. 
The same signing key is used for all repositories so you can disregard the version in the URL:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5880"} {"prompt": "Complete this text code:\n\n# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.\n# sudo mkdir -p -m 755 /etc/apt/keyrings", "completion": "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\nsudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring", "language": "text", "type": "code-example", "context": "Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_completion_5881"} {"prompt": "Explain this text code:\n\n# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.\n# sudo mkdir -p -m 755 /etc/apt/keyrings\ncurl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\nsudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring", "completion": "This text code Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Add the appropriate Kubernetes apt repository. 
If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5882"} {"prompt": "Explain this text code:\n\n# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list\necho 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list\nsudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # helps tools such as command-not-found to work correctly", "completion": "This text code Install and Set Up kubectl on Linux | Update apt package index, then install kubectl:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Update apt package index, then install kubectl:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5883"} {"prompt": "Explain this text code:\n\nsudo apt-get update\nsudo apt-get install -y kubectl", "completion": "This text code Install and Set Up kubectl on Linux | Add the Kubernetes yum repository. If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below.# This overwrites any existing configur", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Add the Kubernetes yum repository. 
If you want to use Kubernetes version different than v1.34, replace v1.34 with the desired minor version in the command below.# This overwrites any existing configur", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5884"} {"prompt": "Explain this text code:\n\necho 'source <(kubectl completion bash)' >>~/.bashrc", "completion": "This text code Install and Set Up kubectl on Linux | kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null sudo chmod a+r /etc/bash_completion.d/kubectl", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null sudo chmod a+r /etc/bash_completion.d/kubectl", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5890"} {"prompt": "Explain this bash code:\n\nkubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null\nsudo chmod a+r /etc/bash_completion.d/kubectl", "completion": "This bash code Install and Set Up kubectl on Linux | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Linux | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5891"} {"prompt": "Explain this text code:\n\necho 'alias k=kubectl' >>~/.bashrc\necho 'complete -o default -F __start_kubectl k' >>~/.bashrc", "completion": "This text code Install and Set Up kubectl on Linux | Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. 
To enable bash autocompletion in current session of shell, source the ~/.bashrc file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. To enable bash autocompletion in current session of shell, source the ~/.bashrc file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5892"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5893"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert\"", "completion": "This text code Install and Set Up kubectl on Linux | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5894"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s 
https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5895"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on Linux | Validate the kubectl-convert binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5896"} {"prompt": "Explain this text code:\n\necho \"$(cat kubectl-convert.sha256) kubectl-convert\" | sha256sum --check", "completion": "This text code Install and Set Up kubectl on Linux | If valid, the output is:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5897"} {"prompt": "Explain this bash code:\n\nkubectl-convert: FAILED\nsha256sum: WARNING: 1 computed checksum did NOT match", "completion": "This bash code Install and Set Up kubectl on Linux | Install kubectl-convert", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Linux | Install kubectl-convert", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5898"} {"prompt": "Explain this text code:\n\nsudo install -o root -g 
root -m 0755 kubectl-convert /usr/local/bin/kubectl-convert", "completion": "This text code Install and Set Up kubectl on Linux | Verify plugin is successfully installed", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Linux | Verify plugin is successfully installed", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/", "id": "k8s_00218_code_explain_5899"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5900"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:\n limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5901"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the LimitRange:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory 
Constraints for a Namespace | View detailed information about the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5902"} {"prompt": "Explain this bash code:\n\nkubectl get limitrange mem-min-max-demo-lr --namespace=constraints-mem-example --output=yaml", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows the minimum and maximum memory constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created a", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows the minimum and maximum memory constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created a", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5903"} {"prompt": "Complete this text code:\n\nlimits:\n - default:\n memory: 1Gi\n defaultRequest:\n memory: 1Gi", "completion": " max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Now whenever you define a Pod within the constraints-mem-example namespace, Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5904"} {"prompt": "Explain this text code:\n\nlimits:\n - default:\n memory: 1Gi\n defaultRequest:\n memory: 1Gi\n max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Now 
whenever you define a Pod within the constraints-mem-example namespace, Kubernetes performs these steps:", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Now whenever you define a Pod within the constraints-mem-example namespace, Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5905"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:", "completion": " - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5906"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:\n - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5907"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | 
Verify that the Pod is running and that its container is healthy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5908"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-mem-demo --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5909"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the container within that Pod has a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the constraints imposed by the LimitRange for this namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the container within that Pod has a memory request of 600 MiB and a memory limit of 800 MiB. 
These satisfy the constraints imposed by the LimitRange for this namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5910"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 800Mi\n requests:\n memory: 600Mi", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5911"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 800Mi\n requests:\n memory: 600Mi", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5912"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-mem-demo --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5913"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:", "completion": " - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5914"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:\n - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5915"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests more memory than is allowed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum 
Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests more memory than is allowed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5916"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/memory-constraints-pod-2.yaml\":\npods \"constraints-mem-demo-2\" is forbidden: maximum memory usage per Container is 1Gi, but limit is 1536Mi.", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. That container specifies a memory request of 100 MiB and a memory limit of 800 MiB.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. That container specifies a memory request of 100 MiB and a memory limit of 800 MiB.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5917"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:", "completion": " - name: constraints-mem-demo-3-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5918"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:\n - name: constraints-mem-demo-3-ctr\n image: nginx\n 
resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5919"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests less memory than the enforced minimum:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests less memory than the enforced minimum:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5920"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/memory-constraints-pod-3.yaml\":\npods \"constraints-mem-demo-3\" is forbidden: minimum memory usage per Container is 500Mi, but request is 100Mi.", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container does not specify a memory request, and it does not specify a memory limit.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container does not specify a memory request, and it does not specify a memory limit.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5921"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-4", "completion": "spec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5922"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-4\nspec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5923"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure 
Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5924"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB. How did that container get those values?", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB. How did that container get those values?", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5925"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 1Gi\n requests:\n memory: 1Gi", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Because your Pod did not define any memory request and limit for that container, the cluster applied a default memory request and limit from the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5926"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 1Gi\n requests:\n memory: 1Gi", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Because your Pod did not define any memory request and limit for that container, the cluster applied a default memory request and limit from the LimitRange.", "language": "text", "type": 
"code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Because your Pod did not define any memory request and limit for that container, the cluster applied a default memory request and limit from the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5927"} {"prompt": "Explain this text code:\n\n# Look for the \"Requests:\" section of the output\nkubectl describe pod constraints-mem-demo-4 --namespace=constraints-mem-example", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only 1 GiB of", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only 1 GiB of", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5928"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The maximum and minimum memory constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. 
If you change the LimitRange, it does not affect Pods that were cr", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The maximum and minimum memory constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were cr", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5929"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5930"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:\n limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5931"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints.yaml --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the 
LimitRange:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5932"} {"prompt": "Explain this shell code:\n\nkubectl get limitrange mem-min-max-demo-lr --namespace=constraints-mem-example --output=yaml", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows the minimum and maximum memory constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created a", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows the minimum and maximum memory constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created a", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5933"} {"prompt": "Complete this text code:\n\nlimits:\n - default:\n memory: 1Gi\n defaultRequest:\n memory: 1Gi", "completion": " max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Now whenever you define a Pod within the constraints-mem-example namespace, Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5934"} {"prompt": "Explain this text code:\n\nlimits:\n - default:\n memory: 1Gi\n defaultRequest:\n memory: 1Gi\n max:\n memory: 1Gi\n min:\n memory: 500Mi\n 
type: Container", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Now whenever you define a Pod within the constraints-mem-example namespace, Kubernetes performs these steps:", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Now whenever you define a Pod within the constraints-mem-example namespace, Kubernetes performs these steps:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5935"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:", "completion": " - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5936"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:\n - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5937"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml 
--namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5938"} {"prompt": "Explain this shell code:\n\nkubectl get pod constraints-mem-demo --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5939"} {"prompt": "Explain this shell code:\n\nkubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the container within that Pod has a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the constraints imposed by the LimitRange for this namespace:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the container within that Pod has a memory request of 600 MiB and a memory limit of 800 MiB. 
These satisfy the constraints imposed by the LimitRange for this namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5940"} {"prompt": "Complete this yaml code:\n\nresources:\n limits:", "completion": " memory: 800Mi\n requests:\n memory: 600Mi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5941"} {"prompt": "Explain this yaml code:\n\nresources:\n limits:\n memory: 800Mi\n requests:\n memory: 600Mi", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5942"} {"prompt": "Explain this shell code:\n\nkubectl delete pod constraints-mem-demo --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5943"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:", "completion": " - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5944"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:\n - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5945"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests more memory than is allowed:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum 
Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests more memory than is allowed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5946"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/memory-constraints-pod-2.yaml\":\npods \"constraints-mem-demo-2\" is forbidden: maximum memory usage per Container is 1Gi, but limit is 1536Mi.", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. That container specifies a memory request of 100 MiB and a memory limit of 800 MiB.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. That container specifies a memory request of 100 MiB and a memory limit of 800 MiB.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5947"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:", "completion": " - name: constraints-mem-demo-3-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5948"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:\n - name: constraints-mem-demo-3-ctr\n image: nginx\n 
resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5949"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests less memory than the enforced minimum:", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests less memory than the enforced minimum:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5950"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/memory-constraints-pod-3.yaml\":\npods \"constraints-mem-demo-3\" is forbidden: minimum memory usage per Container is 500Mi, but request is 100Mi.", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container does not specify a memory request, and it does not specify a memory limit.", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container does not specify a memory request, and it does not specify a memory limit.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5951"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-4", "completion": "spec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5952"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-4\nspec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5953"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4.yaml --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure 
Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5954"} {"prompt": "Explain this shell code:\n\nkubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB. How did that container get those values?", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB. How did that container get those values?", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5955"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 1Gi\n requests:\n memory: 1Gi", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Because your Pod did not define any memory request and limit for that container, the cluster applied a default memory request and limit from the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5956"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 1Gi\n requests:\n memory: 1Gi", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Because your Pod did not define any memory request and limit for that container, the cluster applied a default memory request and limit from the LimitRange.", "language": "text", "type": 
"code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Because your Pod did not define any memory request and limit for that container, the cluster applied a default memory request and limit from the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5957"} {"prompt": "Explain this shell code:\n\n# Look for the \"Requests:\" section of the output\nkubectl describe pod constraints-mem-demo-4 --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only 1 GiB of", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only 1 GiB of", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5958"} {"prompt": "Explain this shell code:\n\nkubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example", "completion": "This shell code Configure Minimum and Maximum Memory Constraints for a Namespace | The maximum and minimum memory constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. 
If you change the LimitRange, it does not affect Pods that were cr", "language": "shell", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The maximum and minimum memory constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were cr", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5959"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-constraints.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5960"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-constraints.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:\n limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5961"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:", "completion": " limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", 
"language": "yaml", "type": "kubernetes-manifest", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5962"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-min-max-demo-lr\nspec:\n limits:\n - max:\n memory: 1Gi\n min:\n memory: 500Mi\n type: Container", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5963"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the LimitRange:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the LimitRange:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5964"} {"prompt": "Explain this bash code:\n\nkubectl get limitrange mem-min-max-demo-lr --namespace=constraints-mem-example --output=yaml", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows the minimum and maximum memory constraints as expected. 
But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created a", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows the minimum and maximum memory constraints as expected. But notice that even though you didn't specify default values in the configuration file for the LimitRange, they were created a", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5965"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-constraints-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:", "completion": " - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5966"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-constraints-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:\n - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5967"} {"prompt": "Complete this yaml code:\n\napiVersion: 
v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:", "completion": " - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5968"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo\nspec:\n containers:\n - name: constraints-mem-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"600Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5969"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Verify that the Pod is running and that its container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5970"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-mem-demo --namespace=constraints-mem-example", "completion": "This 
bash code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5971"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the container within that Pod has a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the constraints imposed by the LimitRange for this namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the container within that Pod has a memory request of 600 MiB and a memory limit of 800 MiB. 
These satisfy the constraints imposed by the LimitRange for this namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5972"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 800Mi\n requests:\n memory: 600Mi", "language": "text", "type": "code-example", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5973"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 800Mi\n requests:\n memory: 600Mi", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5974"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-mem-demo --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5975"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-constraints-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:", "completion": " - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5976"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-constraints-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:\n - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5977"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:", "completion": " - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | 
Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5978"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-2\nspec:\n containers:\n - name: constraints-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1.5Gi\"\n requests:\n memory: \"800Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5979"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests more memory than is allowed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests more memory than is allowed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5980"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-constraints-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:", "completion": " - name: constraints-mem-demo-3-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", 
"language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5981"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-constraints-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:\n - name: constraints-mem-demo-3-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5982"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:", "completion": " - name: constraints-mem-demo-3-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5983"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-3\nspec:\n containers:\n - name: constraints-mem-demo-3-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create 
the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Attempt to create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5984"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests less memory than the enforced minimum:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod does not get created, because it defines a container that requests less memory than the enforced minimum:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5985"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-constraints-pod-4.yaml\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: constraints-mem-demo-4\nspec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5986"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-constraints-pod-4.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-4\nspec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a 
Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5987"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-4", "completion": "spec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_completion_5988"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: constraints-mem-demo-4\nspec:\n containers:\n - name: constraints-mem-demo-4-ctr\n image: nginx", "completion": "This yaml code Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5989"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4.yaml --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | View detailed information about the Pod:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5990"} {"prompt": "Explain this bash code:\n\nkubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB. How did that container get those values?", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB. How did that container get those values?", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5991"} {"prompt": "Explain this text code:\n\n# Look for the \"Requests:\" section of the output\nkubectl describe pod constraints-mem-demo-4 --namespace=constraints-mem-example", "completion": "This text code Configure Minimum and Maximum Memory Constraints for a Namespace | At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only 1 GiB of", "language": "text", "type": "code-example_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your Nodes have at least 1 GiB of memory. 
If each of your Nodes has only 1 GiB of", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5992"} {"prompt": "Explain this bash code:\n\nkubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example", "completion": "This bash code Configure Minimum and Maximum Memory Constraints for a Namespace | The maximum and minimum memory constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were cr", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Minimum and Maximum Memory Constraints for a Namespace | The maximum and minimum memory constraints imposed on a namespace by a LimitRange are enforced only when a Pod is created or updated. If you change the LimitRange, it does not affect Pods that were cr", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/", "id": "k8s_00219_code_explain_5993"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n selector:\n matchLabels:\n app: zk\n maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - 
labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk", "completion": " topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "pod-definition", "context": "Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_5994"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: 
leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n selector:\n matchLabels:\n app: zk\n maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n 
securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "language": "yaml", "type": "pod-definition_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_5995"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_5996"} {"prompt": "Complete this text code:\n\nservice/zk-hs created\nservice/zk-cs created", "completion": "poddisruptionbudget.policy/zk-pdb created\nstatefulset.apps/zk created", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl get to watch the StatefulSet controller create the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_5997"} {"prompt": "Explain this text code:\n\nservice/zk-hs created\nservice/zk-cs created\npoddisruptionbudget.policy/zk-pdb created\nstatefulset.apps/zk 
created", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use kubectl get to watch the StatefulSet controller create the StatefulSet's Pods.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl get to watch the StatefulSet controller create the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_5998"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "zk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller creates three Pods, and each Pod has a container with a ZooKeeper server.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_5999"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller creates three Pods, and each Pod has a container with a ZooKeeper server.", "language": "text", "type": 
"code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller creates three Pods, and each Pod has a container with a ZooKeeper server.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6000"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl exec zk-$i -- hostname; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of -. Because the replicas field of the zk", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of -. Because the replicas field of the zk", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6001"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do echo \"myid zk-$i\";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6002"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done", "completion": "This text code Running 
ZooKeeper, A Distributed System Coordinator | The zk-hs Service creates a domain for all of the Pods, zk-hs.default.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The zk-hs Service creates a domain for all of the Pods, zk-hs.default.svc.cluster.local.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6003"} {"prompt": "Explain this text code:\n\nzk-0.zk-hs.default.svc.cluster.local\nzk-1.zk-hs.default.svc.cluster.local\nzk-2.zk-hs.default.svc.cluster.local", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The A records in Kubernetes DNS resolve the FQDNs to the Pods' IP addresses. If Kubernetes reschedules the Pods, it will update the A records with the Pods' new IP addresses, but the A records names w", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The A records in Kubernetes DNS resolve the FQDNs to the Pods' IP addresses. If Kubernetes reschedules the Pods, it will update the A records with the Pods' new IP addresses, but the A records names w", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6004"} {"prompt": "Explain this bash code:\n\nkubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | In the server.1, server.2, and server.3 properties at the bottom of the file, the 1, 2, and 3 correspond to the identifiers in the ZooKeeper servers' myid files. 
They are set to the FQDNs for the Pods", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In the server.1, server.2, and server.3 properties at the bottom of the file, the 1, 2, and 3 correspond to the identifiers in the ZooKeeper servers' myid files. They are set to the FQDNs for the Pods", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6005"} {"prompt": "Complete this text code:\n\nclientPort=2181\ndataDir=/var/lib/zookeeper/data\ndataLogDir=/var/lib/zookeeper/log\ntickTime=2000\ninitLimit=10\nsyncLimit=2000\nmaxClientCnxns=60", "completion": "minSessionTimeout= 4000\nmaxSessionTimeout= 40000\nautopurge.snapRetainCount=3\nautopurge.purgeInterval=0\nserver.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888\nserver.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888\nserver.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. 
This is necessary to allow the processe", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6006"} {"prompt": "Explain this text code:\n\nclientPort=2181\ndataDir=/var/lib/zookeeper/data\ndataLogDir=/var/lib/zookeeper/log\ntickTime=2000\ninitLimit=10\nsyncLimit=2000\nmaxClientCnxns=60\nminSessionTimeout= 4000\nmaxSessionTimeout= 40000\nautopurge.snapRetainCount=3\nautopurge.purgeInterval=0\nserver.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888\nserver.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888\nserver.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. This is necessary to allow the processe", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. 
This is necessary to allow the processe", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6007"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "zk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The A records for each Pod are entered when the Pod becomes Ready. Therefore, the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper serve", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6008"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The A records for each Pod are entered when the Pod becomes Ready. Therefore, the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper serve", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The A records for each Pod are entered when the Pod becomes Ready. 
Therefore, the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper serve", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6009"} {"prompt": "Explain this text code:\n\nzk-0.zk-hs.default.svc.cluster.local\nzk-1.zk-hs.default.svc.cluster.local\nzk-2.zk-hs.default.svc.cluster.local", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This ensures that the servers properties in the ZooKeepers' zoo.cfg files represents a correctly configured ensemble.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This ensures that the servers properties in the ZooKeepers' zoo.cfg files represents a correctly configured ensemble.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6010"} {"prompt": "Explain this text code:\n\nserver.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888\nserver.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888\nserver.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running a", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running a", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6011"} {"prompt": "Complete this text code:\n\nWATCHER::\n", "completion": "WatchedEvent 
state:SyncConnected type:None path:null\nCreated /hello", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | To get the data from the zk-1 Pod use the following command.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6012"} {"prompt": "Explain this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nCreated /hello", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | To get the data from the zk-1 Pod use the following command.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | To get the data from the zk-1 Pod use the following command.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6013"} {"prompt": "Complete this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002", "completion": "mtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | As mentioned in the ZooKeeper Basics section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots in memory state, to storage media. 
Using WALs to provide durability is a", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6014"} {"prompt": "Explain this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002\nmtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | As mentioned in the ZooKeeper Basics section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots in memory state, to storage media. Using WALs to provide durability is a", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | As mentioned in the ZooKeeper Basics section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots in memory state, to storage media. 
Using WALs to provide durability is a", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6015"} {"prompt": "Complete this text code:\n\nzk-2 1/1 Terminating 0 9m\nzk-0 1/1 Terminating 0 11m\nzk-1 1/1 Terminating 0 10m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m", "completion": "zk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Reapply the manifest in zookeeper.yaml.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6016"} {"prompt": "Explain this text code:\n\nzk-2 1/1 Terminating 0 9m\nzk-0 1/1 Terminating 0 11m\nzk-1 1/1 Terminating 0 10m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m\nzk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Reapply the manifest in zookeeper.yaml.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Reapply the manifest in zookeeper.yaml.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6017"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | This creates the zk StatefulSet object, but the other API objects in the manifest are not modified because they already exist.", "language": "bash", "type": "kubectl-command_explanation", 
"context": "Running ZooKeeper, A Distributed System Coordinator | This creates the zk StatefulSet object, but the other API objects in the manifest are not modified because they already exist.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6018"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "zk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6019"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6020"} {"prompt": "Complete this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002", "completion": "mtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeClaimTemplates field of the zk StatefulSet's spec specifies a PersistentVolume provisioned for each Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6021"} {"prompt": "Explain this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002\nmtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The volumeClaimTemplates field of the zk StatefulSet's spec specifies a PersistentVolume provisioned for each Pod.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeClaimTemplates field of the zk StatefulSet's spec specifies a PersistentVolume provisioned for each Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6022"} {"prompt": "Complete this text code:\n\nvolumeClaimTemplates:\n - metadata:\n name: datadir\n annotations:\n volume.alpha.kubernetes.io/storage-class: anything", "completion": " spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n 
storage: 20Gi", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6023"} {"prompt": "Explain this text code:\n\nvolumeClaimTemplates:\n - metadata:\n name: datadir\n annotations:\n volume.alpha.kubernetes.io/storage-class: anything\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 20Gi", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6024"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndatadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h", "completion": "datadir-zk-1 Bound pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h\ndatadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi RWO 1h", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeMounts section of the StatefulSet's container template mounts the PersistentVolumes in the ZooKeeper servers' data directories.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6025"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndatadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h\ndatadir-zk-1 Bound 
pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h\ndatadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi RWO 1h", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The volumeMounts section of the StatefulSet's container template mounts the PersistentVolumes in the ZooKeeper servers' data directories.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeMounts section of the StatefulSet's container template mounts the PersistentVolumes in the ZooKeeper servers' data directories.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6026"} {"prompt": "Explain this text code:\n\nvolumeMounts:\n- name: datadir\n mountPath: /var/lib/zookeeper", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same PersistentVolume mounted to the ZooKeeper server's data directory. Even when the Pods are rescheduled, all the writes ma", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same PersistentVolume mounted to the ZooKeeper server's data directory. 
Even when the Pods are rescheduled, all the writes ma", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6027"} {"prompt": "Complete this text code:\n\n…\ncommand:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\", "completion": " --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n…", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The command used to start the ZooKeeper servers passed the configuration as command line parameter. You can also use environment variables to pass configuration to the ensemble.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6028"} {"prompt": "Explain this text code:\n\n…\ncommand:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n…", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The command used to start the ZooKeeper servers passed the configuration as command line parameter. 
You can also use environment variables to pass configuration to the ensemble.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The command used to start the ZooKeeper servers passed the configuration as command line parameter. You can also use environment variables to pass configuration to the ensemble.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6029"} {"prompt": "Explain this bash code:\n\nkubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6030"} {"prompt": "Complete this text code:\n\nzookeeper.root.logger=CONSOLE\nzookeeper.console.threshold=INFO\nlog4j.rootLogger=${zookeeper.root.logger}", "completion": "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender\nlog4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}\nlog4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | This is the simplest possible way to safely log inside the container. Because the applications write logs to standard out, Kubernetes will handle log rotation for you. 
Kubernetes also implements a san", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6031"} {"prompt": "Explain this text code:\n\nzookeeper.root.logger=CONSOLE\nzookeeper.console.threshold=INFO\nlog4j.rootLogger=${zookeeper.root.logger}\nlog4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender\nlog4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}\nlog4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This is the simplest possible way to safely log inside the container. Because the applications write logs to standard out, Kubernetes will handle log rotation for you. Kubernetes also implements a san", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This is the simplest possible way to safely log inside the container. Because the applications write logs to standard out, Kubernetes will handle log rotation for you. 
Kubernetes also implements a san", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6032"} {"prompt": "Complete this text code:\n\n2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740\n2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52749\n2016-12-06 19:34:26,156 [myid:1] - INFO [Thread-1137:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52749 (no session established for client)\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52750\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52750\n2016-12-06 19:34:26,226 [myid:1] - INFO [Thread-1138:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52750 (no session established for client)\n2016-12-06 19:34:36,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52760\n2016-12-06 19:34:36,152 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52760", "completion": "2016-12-06 19:34:36,152 [myid:1] - INFO [Thread-1139:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52760 (no session established for client)\n2016-12-06 19:34:36,230 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [Thread-1140:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52761 (no session established for client)\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [Thread-1141:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52767 (no session established for client)\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Kubernetes integrates with many logging solutions. You can choose a logging solution that best fits your cluster and applications. 
For cluster-level logging and aggregation, consider deploying a sidec", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6033"} {"prompt": "Explain this text code:\n\n2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740\n2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52749\n2016-12-06 19:34:26,156 [myid:1] - INFO [Thread-1137:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52749 (no session established for client)\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52750\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52750\n2016-12-06 19:34:26,226 [myid:1] - INFO [Thread-1138:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52750 (no session established for client)\n2016-12-06 19:34:36,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52760\n2016-12-06 19:34:36,152 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52760\n2016-12-06 19:34:36,152 [myid:1] - INFO [Thread-1139:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52760 (no session established for client)\n2016-12-06 
19:34:36,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [Thread-1140:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52761 (no session established for client)\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [Thread-1141:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52767 (no session established for client)\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Kubernetes integrates with many logging solutions. You can choose a logging solution that best fits your cluster and applications. For cluster-level logging and aggregation, consider deploying a sidec", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Kubernetes integrates with many logging solutions. You can choose a logging solution that best fits your cluster and applications. 
For cluster-level logging and aggregation, consider deploying a sidec", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6034"} {"prompt": "Explain this text code:\n\nF S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD\n4 S zookeep+ 1 0 0 80 0 - 1127 - 20:46 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground\n0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server's data directory, it is only accessible by the root user. This configuration prevents the ZooKeeper process from writing", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server's data directory, it is only accessible by the root user. 
This configuration prevents the ZooKeeper process from writing", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6035"} {"prompt": "Explain this bash code:\n\nkubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6036"} {"prompt": "Explain this text code:\n\ndrwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The ZooKeeper documentation mentions that \"You will want to have a supervisory process that manages each of your ZooKeeper server processes (JVM).\" Utilizing a watchdog (supervisory process) to restar", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The ZooKeeper documentation mentions that \"You will want to have a supervisory process that manages each of your ZooKeeper server processes (JVM).\" Utilizing a watchdog (supervisory process) to restar", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6037"} {"prompt": "Explain this bash code:\n\nkubectl patch sts zk --type='json' -p='[{\"op\": \"replace\", \"path\": 
\"/spec/template/spec/containers/0/resources/requests/cpu\", \"value\":\"0.3\"}]'", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | Use kubectl rollout status to watch the status of the update.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl rollout status to watch the status of the update.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6038"} {"prompt": "Complete this text code:\n\nwaiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 1 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...", "completion": "Waiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 2 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nstatefulset rolling update complete 3 pods at revision zk-5db4499664...", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. 
This ensures that quorum is maintained during a rolling update.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6039"} {"prompt": "Explain this text code:\n\nwaiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 1 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 2 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nstatefulset rolling update complete 3 pods at revision zk-5db4499664...", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. This ensures that quorum is maintained during a rolling update.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. This ensures that quorum is maintained during a rolling update.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6040"} {"prompt": "Explain this text code:\n\nUID PID PPID C STIME TTY TIME CMD\nzookeep+ 1 0 0 15:03 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground\nzookeep+ 27 1 0 15:03 ? 
00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | In another terminal watch the Pods in the zk StatefulSet with the following command.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In another terminal watch the Pods in the zk StatefulSet with the following command.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6041"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 21m\nzk-1 1/1 Running 0 20m\nzk-2 1/1 Running 0 19m", "completion": "NAME READY STATUS RESTARTS AGE\nzk-0 0/1 Error 0 29m\nzk-0 0/1 Running 1 29m\nzk-0 1/1 Running 1 29m", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. 
This ensures that Kube", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6042"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 21m\nzk-1 1/1 Running 0 20m\nzk-2 1/1 Running 0 19m\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Error 0 29m\nzk-0 0/1 Running 1 29m\nzk-0 1/1 Running 1 29m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. This ensures that Kube", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. This ensures that Kube", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6043"} {"prompt": "Complete this text code:\n\nlivenessProbe:\n exec:\n command:\n - sh", "completion": " - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6044"} {"prompt": "Explain this text code:\n\nlivenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", 
"language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6045"} {"prompt": "Complete this text code:\n\nOK=$(echo ruok | nc 127.0.0.1 $1)\nif [ \"$OK\" == \"imok\" ]; then\n exit 0", "completion": "else\n exit 1\nfi", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | In one terminal window, use the following command to watch the Pods in the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6046"} {"prompt": "Explain this text code:\n\nOK=$(echo ruok | nc 127.0.0.1 $1)\nif [ \"$OK\" == \"imok\" ]; then\n exit 0\nelse\n exit 1\nfi", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | In one terminal window, use the following command to watch the Pods in the zk StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In one terminal window, use the following command to watch the Pods in the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6047"} {"prompt": "Explain this bash code:\n\nkubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | When the liveness probe for the 
ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6048"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h", "completion": "NAME READY STATUS RESTARTS AGE\nzk-0 0/1 Running 0 1h\nzk-0 0/1 Running 1 1h\nzk-0 1/1 Running 1 1h", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6049"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Running 0 1h\nzk-0 0/1 Running 1 1h\nzk-0 1/1 Running 1 1h", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition fo", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a process is ready, it is able to process input. 
Liveness is a necessary, but not sufficient, condition fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6050"} {"prompt": "Complete this text code:\n\nreadinessProbe:\n exec:\n command:\n - sh", "completion": " - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6051"} {"prompt": "Explain this text code:\n\nreadinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. 
This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6052"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo \"\"; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | All of the Pods in the zk StatefulSet are deployed on different nodes.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | All of the Pods in the zk StatefulSet are deployed on different nodes.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6053"} {"prompt": "Explain this text code:\n\nkubernetes-node-cxpk\nkubernetes-node-a5aq\nkubernetes-node-2g2d", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This is because the Pods in the zk StatefulSet have a PodAntiAffinity specified.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This is because the Pods in the zk StatefulSet have a PodAntiAffinity specified.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6054"} {"prompt": "Complete this text code:\n\naffinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:", "completion": " - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. 
The", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6055"} {"prompt": "Explain this text code:\n\naffinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. The", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. The", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6056"} {"prompt": "Explain this text code:\n\nNAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE\nzk-pdb N/A 1 1", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | In one terminal, use this command to watch the Pods in the zk StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In one terminal, use this command to watch the Pods in the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6057"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo \"\"; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "text", "type": 
"code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6058"} {"prompt": "Explain this text code:\n\nkubernetes-node-pb41\nkubernetes-node-ixsl\nkubernetes-node-i4c4", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use kubectl drain to cordon and drain the node on which the zk-0 Pod is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl drain to cordon and drain the node on which the zk-0 Pod is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6059"} {"prompt": "Explain this bash code:\n\nkubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6060"} {"prompt": "Complete this text code:\n\nnode \"kubernetes-node-pb41\" cordoned\n", "completion": "WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz\npod \"zk-0\" deleted\nnode \"kubernetes-node-pb41\" drained", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | As there are four nodes in your cluster, kubectl drain, succeeds and the zk-0 is rescheduled 
to another node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6061"} {"prompt": "Explain this text code:\n\nnode \"kubernetes-node-pb41\" cordoned\n\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz\npod \"zk-0\" deleted\nnode \"kubernetes-node-pb41\" drained", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | As there are four nodes in your cluster, kubectl drain, succeeds and the zk-0 is rescheduled to another node.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | As there are four nodes in your cluster, kubectl drain, succeeds and the zk-0 is rescheduled to another node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6062"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h", "completion": "zk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Keep watching the StatefulSet's Pods in the first terminal and drain the node on which zk-1 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6063"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS 
AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Keep watching the StatefulSet's Pods in the first terminal and drain the node on which zk-1 is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Keep watching the StatefulSet's Pods in the first terminal and drain the node on which zk-1 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6064"} {"prompt": "Explain this bash code:\n\nkubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6065"} {"prompt": "Complete this text code:\n\n\"kubernetes-node-ixsl\" cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74", "completion": "pod \"zk-1\" deleted\nnode \"kubernetes-node-ixsl\" drained", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule preventing co-location of the Pods, and as only two nodes are 
schedulable, the Pod will remain in a Pending", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6066"} {"prompt": "Explain this text code:\n\n\"kubernetes-node-ixsl\" cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74\npod \"zk-1\" deleted\nnode \"kubernetes-node-ixsl\" drained", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6067"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s", "completion": "zk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Continue to watch the Pods of the 
StatefulSet, and drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6068"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Continue to watch the Pods of the StatefulSet, and drain the node on which zk-2 is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Continue to watch the Pods of the StatefulSet, and drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6069"} {"prompt": "Explain this bash code:\n\nkubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6070"} {"prompt": "Complete this text code:\n\nnode \"kubernetes-node-i4c4\" cordoned\n\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: 
fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog", "completion": "WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4\nThere are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.\npod/zk-2", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Use CTRL-C to terminate kubectl.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6071"} {"prompt": "Explain this text code:\n\nnode \"kubernetes-node-i4c4\" cordoned\n\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog\nWARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4\nThere are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.\npod/zk-2", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use CTRL-C to terminate kubectl.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use CTRL-C to terminate kubectl.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6072"} {"prompt": "Complete this text code:\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x200000002\nctime = Wed Dec 
07 00:08:59 UTC 2016\nmZxid = 0x200000002\nmtime = Wed Dec 07 00:08:59 UTC 2016", "completion": "pZxid = 0x200000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl uncordon to uncordon the first node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6073"} {"prompt": "Explain this text code:\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x200000002\nctime = Wed Dec 07 00:08:59 UTC 2016\nmZxid = 0x200000002\nmtime = Wed Dec 07 00:08:59 UTC 2016\npZxid = 0x200000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use kubectl uncordon to uncordon the first node.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl uncordon to uncordon the first node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6074"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s", "completion": "zk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 12m\nzk-1 0/1 ContainerCreating 0 12m\nzk-1 0/1 Running 0 13m\nzk-1 1/1 Running 0 13m", "language": "text", "type": 
"code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Attempt to drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6075"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 12m\nzk-1 0/1 ContainerCreating 0 12m\nzk-1 0/1 Running 0 13m\nzk-1 1/1 Running 0 13m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Attempt to drain the node on which zk-2 is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Attempt to drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6076"} {"prompt": "Explain this bash code:\n\nkubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6077"} {"prompt": "Complete this text code:\n\nnode \"kubernetes-node-i4c4\" already 
cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog", "completion": "pod \"heapster-v1.2.0-2604621511-wht1r\" deleted\npod \"zk-2\" deleted\nnode \"kubernetes-node-i4c4\" drained", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | This time kubectl drain succeeds.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6078"} {"prompt": "Explain this text code:\n\nnode \"kubernetes-node-i4c4\" already cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog\npod \"heapster-v1.2.0-2604621511-wht1r\" deleted\npod \"zk-2\" deleted\nnode \"kubernetes-node-i4c4\" drained", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This time kubectl drain succeeds.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This time kubectl drain succeeds.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6079"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n selector:\n matchLabels:\n app: zk\n 
maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk", "completion": " topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "pod-definition", "context": "Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and 
use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6080"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n selector:\n matchLabels:\n app: zk\n maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n 
--min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "language": "yaml", "type": "pod-definition_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6081"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6082"} {"prompt": "Complete this text code:\n\nservice/zk-hs created\nservice/zk-cs created", "completion": "poddisruptionbudget.policy/zk-pdb created\nstatefulset.apps/zk created", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System 
Coordinator | Use kubectl get to watch the StatefulSet controller create the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6083"} {"prompt": "Explain this text code:\n\nservice/zk-hs created\nservice/zk-cs created\npoddisruptionbudget.policy/zk-pdb created\nstatefulset.apps/zk created", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use kubectl get to watch the StatefulSet controller create the StatefulSet's Pods.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl get to watch the StatefulSet controller create the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6084"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "zk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller creates three Pods, and each Pod has a container with a ZooKeeper server.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6085"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 
40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller creates three Pods, and each Pod has a container with a ZooKeeper server.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller creates three Pods, and each Pod has a container with a ZooKeeper server.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6086"} {"prompt": "Explain this shell code:\n\nfor i in 0 1 2; do kubectl exec zk-$i -- hostname; done", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of -. Because the replicas field of the zk", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of -. 
Because the replicas field of the zk", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6087"} {"prompt": "Explain this shell code:\n\nfor i in 0 1 2; do echo \"myid zk-$i\";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6088"} {"prompt": "Explain this shell code:\n\nfor i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The zk-hs Service creates a domain for all of the Pods, zk-hs.default.svc.cluster.local.", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The zk-hs Service creates a domain for all of the Pods, zk-hs.default.svc.cluster.local.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6089"} {"prompt": "Explain this text code:\n\nzk-0.zk-hs.default.svc.cluster.local\nzk-1.zk-hs.default.svc.cluster.local\nzk-2.zk-hs.default.svc.cluster.local", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The A records in Kubernetes DNS resolve the FQDNs to the Pods' IP addresses. 
If Kubernetes reschedules the Pods, it will update the A records with the Pods' new IP addresses, but the A records names w", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The A records in Kubernetes DNS resolve the FQDNs to the Pods' IP addresses. If Kubernetes reschedules the Pods, it will update the A records with the Pods' new IP addresses, but the A records names w", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6090"} {"prompt": "Explain this shell code:\n\nkubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | In the server.1, server.2, and server.3 properties at the bottom of the file, the 1, 2, and 3 correspond to the identifiers in the ZooKeeper servers' myid files. They are set to the FQDNs for the Pods", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In the server.1, server.2, and server.3 properties at the bottom of the file, the 1, 2, and 3 correspond to the identifiers in the ZooKeeper servers' myid files. 
They are set to the FQDNs for the Pods", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6091"} {"prompt": "Complete this text code:\n\nclientPort=2181\ndataDir=/var/lib/zookeeper/data\ndataLogDir=/var/lib/zookeeper/log\ntickTime=2000\ninitLimit=10\nsyncLimit=2000\nmaxClientCnxns=60", "completion": "minSessionTimeout= 4000\nmaxSessionTimeout= 40000\nautopurge.snapRetainCount=3\nautopurge.purgeInterval=0\nserver.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888\nserver.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888\nserver.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. This is necessary to allow the processe", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6092"} {"prompt": "Explain this text code:\n\nclientPort=2181\ndataDir=/var/lib/zookeeper/data\ndataLogDir=/var/lib/zookeeper/log\ntickTime=2000\ninitLimit=10\nsyncLimit=2000\nmaxClientCnxns=60\nminSessionTimeout= 4000\nmaxSessionTimeout= 40000\nautopurge.snapRetainCount=3\nautopurge.purgeInterval=0\nserver.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888\nserver.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888\nserver.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. 
This is necessary to allow the processe", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. This is necessary to allow the processe", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6093"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "zk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The A records for each Pod are entered when the Pod becomes Ready. Therefore, the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper serve", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6094"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The A records for each Pod are entered when the Pod becomes Ready. 
Therefore, the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper serve", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The A records for each Pod are entered when the Pod becomes Ready. Therefore, the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper serve", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6095"} {"prompt": "Explain this text code:\n\nzk-0.zk-hs.default.svc.cluster.local\nzk-1.zk-hs.default.svc.cluster.local\nzk-2.zk-hs.default.svc.cluster.local", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This ensures that the servers properties in the ZooKeepers' zoo.cfg files represents a correctly configured ensemble.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This ensures that the servers properties in the ZooKeepers' zoo.cfg files represents a correctly configured ensemble.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6096"} {"prompt": "Explain this text code:\n\nserver.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888\nserver.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888\nserver.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running a", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | When the servers use the Zab protocol to attempt to commit a value, they will 
either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running a", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6097"} {"prompt": "Complete this text code:\n\nWATCHER::\n", "completion": "WatchedEvent state:SyncConnected type:None path:null\nCreated /hello", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | To get the data from the zk-1 Pod use the following command.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6098"} {"prompt": "Explain this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nCreated /hello", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | To get the data from the zk-1 Pod use the following command.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | To get the data from the zk-1 Pod use the following command.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6099"} {"prompt": "Complete this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002", "completion": "mtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | As mentioned in the ZooKeeper Basics section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots in memory state, to storage media. 
Using WALs to provide durability is a", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6100"} {"prompt": "Explain this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002\nmtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | As mentioned in the ZooKeeper Basics section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots in memory state, to storage media. Using WALs to provide durability is a", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | As mentioned in the ZooKeeper Basics section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots in memory state, to storage media. 
Using WALs to provide durability is a", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6101"} {"prompt": "Complete this text code:\n\nzk-2 1/1 Terminating 0 9m\nzk-0 1/1 Terminating 0 11m\nzk-1 1/1 Terminating 0 10m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m", "completion": "zk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Reapply the manifest in zookeeper.yaml.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6102"} {"prompt": "Explain this text code:\n\nzk-2 1/1 Terminating 0 9m\nzk-0 1/1 Terminating 0 11m\nzk-1 1/1 Terminating 0 10m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m\nzk-2 0/1 Terminating 0 9m\nzk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-1 0/1 Terminating 0 10m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m\nzk-0 0/1 Terminating 0 11m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Reapply the manifest in zookeeper.yaml.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Reapply the manifest in zookeeper.yaml.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6103"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | This creates the zk StatefulSet object, but the other API objects in the manifest are not modified because they already exist.", "language": "shell", "type": "code-example_explanation", 
"context": "Running ZooKeeper, A Distributed System Coordinator | This creates the zk StatefulSet object, but the other API objects in the manifest are not modified because they already exist.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6104"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "zk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6105"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 19s\nzk-0 1/1 Running 0 40s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 ContainerCreating 0 0s\nzk-1 0/1 Running 0 18s\nzk-1 1/1 Running 0 40s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 Pending 0 0s\nzk-2 0/1 ContainerCreating 0 0s\nzk-2 0/1 Running 0 19s\nzk-2 1/1 Running 0 40s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6106"} {"prompt": "Complete this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002", "completion": "mtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeClaimTemplates field of the zk StatefulSet's spec specifies a PersistentVolume provisioned for each Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6107"} {"prompt": "Explain this text code:\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x100000002\nctime = Thu Dec 08 15:13:30 UTC 2016\nmZxid = 0x100000002\nmtime = Thu Dec 08 15:13:30 UTC 2016\npZxid = 0x100000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The volumeClaimTemplates field of the zk StatefulSet's spec specifies a PersistentVolume provisioned for each Pod.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeClaimTemplates field of the zk StatefulSet's spec specifies a PersistentVolume provisioned for each Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6108"} {"prompt": "Complete this yaml code:\n\nvolumeClaimTemplates:\n - metadata:\n name: datadir\n annotations:\n volume.alpha.kubernetes.io/storage-class: anything", "completion": " spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n 
storage: 20Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6109"} {"prompt": "Explain this yaml code:\n\nvolumeClaimTemplates:\n - metadata:\n name: datadir\n annotations:\n volume.alpha.kubernetes.io/storage-class: anything\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 20Gi", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6110"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndatadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h", "completion": "datadir-zk-1 Bound pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h\ndatadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi RWO 1h", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeMounts section of the StatefulSet's container template mounts the PersistentVolumes in the ZooKeeper servers' data directories.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6111"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndatadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h\ndatadir-zk-1 Bound 
pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h\ndatadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi RWO 1h", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The volumeMounts section of the StatefulSet's container template mounts the PersistentVolumes in the ZooKeeper servers' data directories.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The volumeMounts section of the StatefulSet's container template mounts the PersistentVolumes in the ZooKeeper servers' data directories.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6112"} {"prompt": "Explain this yaml code:\n\nvolumeMounts:\n- name: datadir\n mountPath: /var/lib/zookeeper", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same PersistentVolume mounted to the ZooKeeper server's data directory. Even when the Pods are rescheduled, all the writes ma", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same PersistentVolume mounted to the ZooKeeper server's data directory. 
Even when the Pods are rescheduled, all the writes ma", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6113"} {"prompt": "Complete this text code:\n\n…\ncommand:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\", "completion": " --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n…", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The command used to start the ZooKeeper servers passed the configuration as command line parameter. You can also use environment variables to pass configuration to the ensemble.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6114"} {"prompt": "Explain this text code:\n\n…\ncommand:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n…", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The command used to start the ZooKeeper servers passed the configuration as command line parameter. 
You can also use environment variables to pass configuration to the ensemble.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The command used to start the ZooKeeper servers passed the configuration as command line parameter. You can also use environment variables to pass configuration to the ensemble.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6115"} {"prompt": "Explain this shell code:\n\nkubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream.", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6116"} {"prompt": "Complete this text code:\n\nzookeeper.root.logger=CONSOLE\nzookeeper.console.threshold=INFO\nlog4j.rootLogger=${zookeeper.root.logger}", "completion": "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender\nlog4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}\nlog4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | This is the simplest possible way to safely log inside the container. Because the applications write logs to standard out, Kubernetes will handle log rotation for you. 
Kubernetes also implements a san", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6117"} {"prompt": "Explain this text code:\n\nzookeeper.root.logger=CONSOLE\nzookeeper.console.threshold=INFO\nlog4j.rootLogger=${zookeeper.root.logger}\nlog4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender\nlog4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}\nlog4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout\nlog4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This is the simplest possible way to safely log inside the container. Because the applications write logs to standard out, Kubernetes will handle log rotation for you. Kubernetes also implements a san", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This is the simplest possible way to safely log inside the container. Because the applications write logs to standard out, Kubernetes will handle log rotation for you. 
Kubernetes also implements a san", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6118"} {"prompt": "Complete this text code:\n\n2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740\n2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52749\n2016-12-06 19:34:26,156 [myid:1] - INFO [Thread-1137:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52749 (no session established for client)\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52750\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52750\n2016-12-06 19:34:26,226 [myid:1] - INFO [Thread-1138:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52750 (no session established for client)\n2016-12-06 19:34:36,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52760\n2016-12-06 19:34:36,152 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52760", "completion": "2016-12-06 19:34:36,152 [myid:1] - INFO [Thread-1139:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52760 (no session established for client)\n2016-12-06 19:34:36,230 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [Thread-1140:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52761 (no session established for client)\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [Thread-1141:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52767 (no session established for client)\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Kubernetes integrates with many logging solutions. You can choose a logging solution that best fits your cluster and applications. 
For cluster-level logging and aggregation, consider deploying a sidec", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6119"} {"prompt": "Explain this text code:\n\n2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740\n2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749\n2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52749\n2016-12-06 19:34:26,156 [myid:1] - INFO [Thread-1137:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52749 (no session established for client)\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52750\n2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52750\n2016-12-06 19:34:26,226 [myid:1] - INFO [Thread-1138:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52750 (no session established for client)\n2016-12-06 19:34:36,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52760\n2016-12-06 19:34:36,152 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52760\n2016-12-06 19:34:36,152 [myid:1] - INFO [Thread-1139:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52760 (no session established for client)\n2016-12-06 
19:34:36,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52761\n2016-12-06 19:34:36,231 [myid:1] - INFO [Thread-1140:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52761 (no session established for client)\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52767\n2016-12-06 19:34:46,149 [myid:1] - INFO [Thread-1141:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52767 (no session established for client)\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52768\n2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Kubernetes integrates with many logging solutions. You can choose a logging solution that best fits your cluster and applications. For cluster-level logging and aggregation, consider deploying a sidec", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Kubernetes integrates with many logging solutions. You can choose a logging solution that best fits your cluster and applications. 
For cluster-level logging and aggregation, consider deploying a sidec", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6120"} {"prompt": "Explain this text code:\n\nF S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD\n4 S zookeep+ 1 0 0 80 0 - 1127 - 20:46 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground\n0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server's data directory, it is only accessible by the root user. This configuration prevents the ZooKeeper process from writing", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server's data directory, it is only accessible by the root user. 
This configuration prevents the ZooKeeper process from writing", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6121"} {"prompt": "Explain this shell code:\n\nkubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6122"} {"prompt": "Explain this text code:\n\ndrwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The ZooKeeper documentation mentions that \"You will want to have a supervisory process that manages each of your ZooKeeper server processes (JVM).\" Utilizing a watchdog (supervisory process) to restar", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The ZooKeeper documentation mentions that \"You will want to have a supervisory process that manages each of your ZooKeeper server processes (JVM).\" Utilizing a watchdog (supervisory process) to restar", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6123"} {"prompt": "Explain this shell code:\n\nkubectl patch sts zk --type='json' -p='[{\"op\": \"replace\", \"path\": 
\"/spec/template/spec/containers/0/resources/requests/cpu\", \"value\":\"0.3\"}]'", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | Use kubectl rollout status to watch the status of the update.", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl rollout status to watch the status of the update.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6124"} {"prompt": "Complete this text code:\n\nwaiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 1 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...", "completion": "Waiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 2 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nstatefulset rolling update complete 3 pods at revision zk-5db4499664...", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. 
This ensures that quorum is maintained during a rolling update.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6125"} {"prompt": "Explain this text code:\n\nwaiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 1 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nwaiting for statefulset rolling update to complete 2 pods at revision zk-5db4499664...\nWaiting for 1 pods to be ready...\nWaiting for 1 pods to be ready...\nstatefulset rolling update complete 3 pods at revision zk-5db4499664...", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. This ensures that quorum is maintained during a rolling update.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. This ensures that quorum is maintained during a rolling update.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6126"} {"prompt": "Explain this text code:\n\nUID PID PPID C STIME TTY TIME CMD\nzookeep+ 1 0 0 15:03 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground\nzookeep+ 27 1 0 15:03 ? 
00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | In another terminal watch the Pods in the zk StatefulSet with the following command.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In another terminal watch the Pods in the zk StatefulSet with the following command.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6127"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 21m\nzk-1 1/1 Running 0 20m\nzk-2 1/1 Running 0 19m", "completion": "NAME READY STATUS RESTARTS AGE\nzk-0 0/1 Error 0 29m\nzk-0 0/1 Running 1 29m\nzk-0 1/1 Running 1 29m", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. 
This ensures that Kube", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6128"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 21m\nzk-1 1/1 Running 0 20m\nzk-2 1/1 Running 0 19m\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Error 0 29m\nzk-0 0/1 Running 1 29m\nzk-0 1/1 Running 1 29m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. This ensures that Kube", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. This ensures that Kube", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6129"} {"prompt": "Complete this yaml code:\n\nlivenessProbe:\n exec:\n command:\n - sh", "completion": " - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6130"} {"prompt": "Explain this yaml code:\n\nlivenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's 
health.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6131"} {"prompt": "Complete this text code:\n\nOK=$(echo ruok | nc 127.0.0.1 $1)\nif [ \"$OK\" == \"imok\" ]; then\n exit 0", "completion": "else\n exit 1\nfi", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | In one terminal window, use the following command to watch the Pods in the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6132"} {"prompt": "Explain this text code:\n\nOK=$(echo ruok | nc 127.0.0.1 $1)\nif [ \"$OK\" == \"imok\" ]; then\n exit 0\nelse\n exit 1\nfi", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | In one terminal window, use the following command to watch the Pods in the zk StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In one terminal window, use the following command to watch the Pods in the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6133"} {"prompt": "Explain this shell code:\n\nkubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted.", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | When the 
liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6134"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h", "completion": "NAME READY STATUS RESTARTS AGE\nzk-0 0/1 Running 0 1h\nzk-0 0/1 Running 1 1h\nzk-0 1/1 Running 1 1h", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6135"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 0 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 0/1 Running 0 1h\nzk-0 0/1 Running 1 1h\nzk-0 1/1 Running 1 1h", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition fo", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a process is ready, it is able to process input. 
Liveness is a necessary, but not sufficient, condition fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6136"} {"prompt": "Complete this yaml code:\n\nreadinessProbe:\n exec:\n command:\n - sh", "completion": " - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6137"} {"prompt": "Explain this yaml code:\n\nreadinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. 
This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6138"} {"prompt": "Explain this shell code:\n\nfor i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo \"\"; done", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | All of the Pods in the zk StatefulSet are deployed on different nodes.", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | All of the Pods in the zk StatefulSet are deployed on different nodes.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6139"} {"prompt": "Explain this text code:\n\nkubernetes-node-cxpk\nkubernetes-node-a5aq\nkubernetes-node-2g2d", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This is because the Pods in the zk StatefulSet have a PodAntiAffinity specified.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This is because the Pods in the zk StatefulSet have a PodAntiAffinity specified.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6140"} {"prompt": "Complete this yaml code:\n\naffinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:", "completion": " - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. 
The", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6141"} {"prompt": "Explain this yaml code:\n\naffinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. The", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. The", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6142"} {"prompt": "Explain this text code:\n\nNAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE\nzk-pdb N/A 1 1", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | In one terminal, use this command to watch the Pods in the zk StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In one terminal, use this command to watch the Pods in the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6143"} {"prompt": "Explain this shell code:\n\nfor i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo \"\"; done", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "shell", "type": 
"code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6144"} {"prompt": "Explain this text code:\n\nkubernetes-node-pb41\nkubernetes-node-ixsl\nkubernetes-node-i4c4", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use kubectl drain to cordon and drain the node on which the zk-0 Pod is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl drain to cordon and drain the node on which the zk-0 Pod is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6145"} {"prompt": "Explain this shell code:\n\nkubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6146"} {"prompt": "Complete this text code:\n\nnode \"kubernetes-node-pb41\" cordoned\n", "completion": "WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz\npod \"zk-0\" deleted\nnode \"kubernetes-node-pb41\" drained", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | As there are four nodes in your cluster, kubectl drain, succeeds and the zk-0 is rescheduled 
to another node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6147"} {"prompt": "Explain this text code:\n\nnode \"kubernetes-node-pb41\" cordoned\n\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz\npod \"zk-0\" deleted\nnode \"kubernetes-node-pb41\" drained", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | As there are four nodes in your cluster, kubectl drain, succeeds and the zk-0 is rescheduled to another node.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | As there are four nodes in your cluster, kubectl drain, succeeds and the zk-0 is rescheduled to another node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6148"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h", "completion": "zk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Keep watching the StatefulSet's Pods in the first terminal and drain the node on which zk-1 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6149"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS 
AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Keep watching the StatefulSet's Pods in the first terminal and drain the node on which zk-1 is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Keep watching the StatefulSet's Pods in the first terminal and drain the node on which zk-1 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6150"} {"prompt": "Explain this shell code:\n\nkubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6151"} {"prompt": "Complete this text code:\n\n\"kubernetes-node-ixsl\" cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74", "completion": "pod \"zk-1\" deleted\nnode \"kubernetes-node-ixsl\" drained", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule preventing co-location of the Pods, and as only two nodes are 
schedulable, the Pod will remain in a Pending", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6152"} {"prompt": "Explain this text code:\n\n\"kubernetes-node-ixsl\" cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74\npod \"zk-1\" deleted\nnode \"kubernetes-node-ixsl\" drained", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6153"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s", "completion": "zk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Continue to watch the Pods of the 
StatefulSet, and drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6154"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Continue to watch the Pods of the StatefulSet, and drain the node on which zk-2 is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Continue to watch the Pods of the StatefulSet, and drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6155"} {"prompt": "Explain this shell code:\n\nkubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6156"} {"prompt": "Complete this text code:\n\nnode \"kubernetes-node-i4c4\" cordoned\n\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: 
fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog", "completion": "WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4\nThere are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.\npod/zk-2", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Use CTRL-C to terminate kubectl.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6157"} {"prompt": "Explain this text code:\n\nnode \"kubernetes-node-i4c4\" cordoned\n\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog\nWARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4\nThere are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.\npod/zk-2", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use CTRL-C to terminate kubectl.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use CTRL-C to terminate kubectl.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6158"} {"prompt": "Complete this text code:\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x200000002\nctime = Wed Dec 
07 00:08:59 UTC 2016\nmZxid = 0x200000002\nmtime = Wed Dec 07 00:08:59 UTC 2016", "completion": "pZxid = 0x200000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl uncordon to uncordon the first node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6159"} {"prompt": "Explain this text code:\n\nWatchedEvent state:SyncConnected type:None path:null\nworld\ncZxid = 0x200000002\nctime = Wed Dec 07 00:08:59 UTC 2016\nmZxid = 0x200000002\nmtime = Wed Dec 07 00:08:59 UTC 2016\npZxid = 0x200000002\ncversion = 0\ndataVersion = 0\naclVersion = 0\nephemeralOwner = 0x0\ndataLength = 5\nnumChildren = 0", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Use kubectl uncordon to uncordon the first node.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl uncordon to uncordon the first node.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6160"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s", "completion": "zk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 12m\nzk-1 0/1 ContainerCreating 0 12m\nzk-1 0/1 Running 0 13m\nzk-1 1/1 Running 0 13m", "language": "text", "type": 
"code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Attempt to drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6161"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Running 2 1h\nzk-1 1/1 Running 0 1h\nzk-2 1/1 Running 0 1h\nNAME READY STATUS RESTARTS AGE\nzk-0 1/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Terminating 2 2h\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 Pending 0 0s\nzk-0 0/1 ContainerCreating 0 0s\nzk-0 0/1 Running 0 51s\nzk-0 1/1 Running 0 1m\nzk-1 1/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Terminating 0 2h\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 0s\nzk-1 0/1 Pending 0 12m\nzk-1 0/1 ContainerCreating 0 12m\nzk-1 0/1 Running 0 13m\nzk-1 1/1 Running 0 13m", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Attempt to drain the node on which zk-2 is scheduled.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Attempt to drain the node on which zk-2 is scheduled.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6162"} {"prompt": "Explain this shell code:\n\nkubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This shell code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6163"} {"prompt": "Complete this text code:\n\nnode \"kubernetes-node-i4c4\" already 
cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog", "completion": "pod \"heapster-v1.2.0-2604621511-wht1r\" deleted\npod \"zk-2\" deleted\nnode \"kubernetes-node-i4c4\" drained", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | This time kubectl drain succeeds.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6164"} {"prompt": "Explain this text code:\n\nnode \"kubernetes-node-i4c4\" already cordoned\nWARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog\npod \"heapster-v1.2.0-2604621511-wht1r\" deleted\npod \"zk-2\" deleted\nnode \"kubernetes-node-i4c4\" drained", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | This time kubectl drain succeeds.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This time kubectl drain succeeds.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6165"} {"prompt": "Complete this yaml code:\n\napplication/zookeeper/zookeeper.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n 
selector:\n matchLabels:\n app: zk\n maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk", "completion": " topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "pod-definition", "context": "Running ZooKeeper, A Distributed 
System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6166"} {"prompt": "Explain this yaml code:\n\napplication/zookeeper/zookeeper.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n selector:\n matchLabels:\n app: zk\n maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 
\\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "language": "yaml", "type": "pod-definition_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6167"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n selector:\n matchLabels:\n app: zk\n maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - 
labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk", "completion": " topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "pod-definition", "context": "Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6168"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-hs\n labels:\n app: zk\nspec:\n ports:\n - port: 2888\n name: server\n - port: 3888\n name: 
leader-election\n clusterIP: None\n selector:\n app: zk\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: zk-cs\n labels:\n app: zk\nspec:\n ports:\n - port: 2181\n name: client\n selector:\n app: zk\n---\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n selector:\n matchLabels:\n app: zk\n maxUnavailable: 1\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: zk\nspec:\n selector:\n matchLabels:\n app: zk\n serviceName: zk-hs\n replicas: 3\n updateStrategy:\n type: RollingUpdate\n podManagementPolicy: OrderedReady\n template:\n metadata:\n labels:\n app: zk\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: kubernetes-zookeeper\n imagePullPolicy: Always\n image: \"registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10\"\n resources:\n requests:\n memory: \"1Gi\"\n cpu: \"0.5\"\n ports:\n - containerPort: 2181\n name: client\n - containerPort: 2888\n name: server\n - containerPort: 3888\n name: leader-election\n command:\n - sh\n - -c\n - \"start-zookeeper \\\n --servers=3 \\\n --data_dir=/var/lib/zookeeper/data \\\n --data_log_dir=/var/lib/zookeeper/data/log \\\n --conf_dir=/opt/zookeeper/conf \\\n --client_port=2181 \\\n --election_port=3888 \\\n --server_port=2888 \\\n --tick_time=2000 \\\n --init_limit=10 \\\n --sync_limit=5 \\\n --heap=512M \\\n --max_client_cnxns=60 \\\n --snap_retain_count=3 \\\n --purge_interval=12 \\\n --max_session_timeout=40000 \\\n --min_session_timeout=4000 \\\n --log_level=INFO\"\n readinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n livenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 10\n timeoutSeconds: 5\n volumeMounts:\n - name: datadir\n mountPath: /var/lib/zookeeper\n 
securityContext:\n runAsUser: 1000\n fsGroup: 1000\n volumeClaimTemplates:\n - metadata:\n name: datadir\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "language": "yaml", "type": "pod-definition_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Open a terminal, and use the kubectl apply command to create the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6169"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This creates the zk-hs Headless Service, the zk-cs Service, the zk-pdb PodDisruptionBudget, and the zk StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6170"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl exec zk-$i -- hostname; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of -. Because the replicas field of the zk", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of -. 
Because the replicas field of the zk", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6171"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do echo \"myid zk-$i\";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6172"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The zk-hs Service creates a domain for all of the Pods, zk-hs.default.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The zk-hs Service creates a domain for all of the Pods, zk-hs.default.svc.cluster.local.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6173"} {"prompt": "Explain this bash code:\n\nkubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | In the server.1, server.2, and server.3 properties at the bottom of the file, the 1, 2, and 3 correspond to the identifiers in the ZooKeeper servers' myid files. 
They are set to the FQDNs for the Pods", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | In the server.1, server.2, and server.3 properties at the bottom of the file, the 1, 2, and 3 correspond to the identifiers in the ZooKeeper servers' myid files. They are set to the FQDNs for the Pods", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6174"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | This creates the zk StatefulSet object, but the other API objects in the manifest are not modified because they already exist.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | This creates the zk StatefulSet object, but the other API objects in the manifest are not modified because they already exist.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6175"} {"prompt": "Complete this text code:\n\nvolumeClaimTemplates:\n - metadata:\n name: datadir\n annotations:\n volume.alpha.kubernetes.io/storage-class: anything", "completion": " spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 20Gi", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6176"} {"prompt": "Explain this text code:\n\nvolumeClaimTemplates:\n - metadata:\n name: datadir\n annotations:\n volume.alpha.kubernetes.io/storage-class: anything\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n 
resources:\n requests:\n storage: 20Gi", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6177"} {"prompt": "Explain this text code:\n\nvolumeMounts:\n- name: datadir\n mountPath: /var/lib/zookeeper", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same PersistentVolume mounted to the ZooKeeper server's data directory. Even when the Pods are rescheduled, all the writes ma", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same PersistentVolume mounted to the ZooKeeper server's data directory. 
Even when the Pods are rescheduled, all the writes ma", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6178"} {"prompt": "Explain this bash code:\n\nkubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6179"} {"prompt": "Explain this bash code:\n\nkubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6180"} {"prompt": "Explain this bash code:\n\nkubectl patch sts zk --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/resources/requests/cpu\", \"value\":\"0.3\"}]'", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | Use kubectl rollout status to 
watch the status of the update.", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Use kubectl rollout status to watch the status of the update.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6181"} {"prompt": "Complete this text code:\n\nlivenessProbe:\n exec:\n command:\n - sh", "completion": " - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6182"} {"prompt": "Explain this text code:\n\nlivenessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's health.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6183"} {"prompt": "Explain this bash code:\n\nkubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted.", "language": "bash", "type": "kubectl-command_explanation", 
"context": "Running ZooKeeper, A Distributed System Coordinator | When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6184"} {"prompt": "Complete this text code:\n\nreadinessProbe:\n exec:\n command:\n - sh", "completion": " - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6185"} {"prompt": "Explain this text code:\n\nreadinessProbe:\n exec:\n command:\n - sh\n - -c\n - \"zookeeper-ready 2181\"\n initialDelaySeconds: 15\n timeoutSeconds: 5", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | Even though the liveness and readiness probes are identical, it is important to specify both. 
This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6186"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo \"\"; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | All of the Pods in the zk StatefulSet are deployed on different nodes.", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | All of the Pods in the zk StatefulSet are deployed on different nodes.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6187"} {"prompt": "Complete this text code:\n\naffinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:", "completion": " - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"", "language": "text", "type": "code-example", "context": "Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. 
The", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_completion_6188"} {"prompt": "Explain this text code:\n\naffinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - zk\n topologyKey: \"kubernetes.io/hostname\"", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. The", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler that it should never co-locate two Pods which have app label as zk in the domain defined by the topologyKey. The", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6189"} {"prompt": "Explain this text code:\n\nfor i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo \"\"; done", "completion": "This text code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6190"} {"prompt": "Explain this bash code:\n\nkubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A 
Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6191"} {"prompt": "Explain this bash code:\n\nkubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6192"} {"prompt": "Explain this bash code:\n\nkubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data", "completion": "This bash code Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running ZooKeeper, A Distributed System Coordinator | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/", "id": "k8s_00220_code_explain_6193"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=\nFOO_SERVICE_PORT=", "completion": "This text code 1 - Configuration Best Practices | This does
imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "text", "type": "code-example_explanation", "context": "1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6195"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6196"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "completion": "This yaml code 2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6197"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6198"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6199"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6200"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code 2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6201"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6202"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml 
code 2 - ConfigMaps | The following command will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6203"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6204"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6205"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6206"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code 2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6207"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6208"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code 2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6209"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6210"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6211"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This bash code 3 - Secrets | The output looks like:", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6212"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nempty-secret Opaque 0 2m6s", "completion": "This text code 3 - Secrets | The DATA column shows the number of data items stored in the Secret. In this case, 0 means you have created an empty Secret.", "language": "text", "type": "code-example_explanation", "context": "3 - Secrets | The DATA column shows the number of data items stored in the Secret. 
In this case, 0 means you have created an empty Secret.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6213"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6214"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6215"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6216"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6217"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "bash", "type": "kubectl-command", "context": "3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6218"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This bash code 3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | This command creates a Secret of 
type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6219"} {"prompt": "Explain this bash code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This bash code 3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6220"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6221"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code 3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6222"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6223"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6224"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6225"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6226"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6227"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6228"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This bash code 3 - Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6229"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6230"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n 
usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6231"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6232"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for 
signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code 3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6233"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6234"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6235"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6236"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code 3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6237"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6238"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code 5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6239"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6240"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6241"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6242"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6243"} {"prompt": "Explain this text code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This text code 5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6244"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6245"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This text code 5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6246"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6247"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code 5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6248"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n 
name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6249"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code 5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6250"} {"prompt": "Explain this bash code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This bash code 5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "bash", "type": "kubectl-command_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6251"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message", "completion": " ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6252"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "completion": "This text code 5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6253"} {"prompt": "Complete this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... lines removed for clarity ...]\nNon-terminated Pods: (5 in total)", "completion": " Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6254"} {"prompt": "Explain this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... 
lines removed for clarity ...]\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "completion": "This text code 5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6255"} {"prompt": "Complete this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700", "completion": " Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 
07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6256"} {"prompt": "Explain this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700\n Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s 
kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "completion": "This text code 5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6257"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz", "completion": " name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6258"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz\n name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "completion": "This yaml code 6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6259"} {"prompt": "Explain this shell code:\n\nFOO_SERVICE_HOST=\nFOO_SERVICE_PORT=", "completion": "This shell code 1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "shell", "type": "code-example_explanation", "context": "1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6260"} {"prompt": "Explain this json code:\n\n{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }", "completion": "This json code 1 - Configuration Best Practices | A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a Deployment.", "language": "json", "type": "api-response_explanation", "context": "1 - Configuration Best Practices | A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. 
When you need to update a running service without downtime, use a Deployment.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6261"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6262"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "completion": "This yaml code 2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6263"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: 
[\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6264"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6265"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6266"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code 2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6267"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6268"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml 
code 2 - ConfigMaps | The following command will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6269"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6270"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6271"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6272"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code 2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6273"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6274"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code 2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6275"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6276"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6277"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This shell code 3 - Secrets | The output looks like:", "language": "shell", "type": "code-example_explanation", "context": "3 - Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6278"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nempty-secret Opaque 0 2m6s", "completion": "This text code 3 - Secrets | The DATA column shows the number of data items stored in the Secret. In this case, 0 means you have created an empty Secret.", "language": "text", "type": "code-example_explanation", "context": "3 - Secrets | The DATA column shows the number of data items stored in the Secret. 
In this case, 0 means you have created an empty Secret.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6279"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6280"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6281"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6282"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6283"} {"prompt": "Complete this shell code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "shell", "type": "code-example", "context": "3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6284"} {"prompt": "Explain this shell code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This shell code 3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "shell", "type": "code-example_explanation", "context": "3 - Secrets | This command creates a Secret of 
type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6285"} {"prompt": "Explain this shell code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This shell code 3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "shell", "type": "code-example_explanation", "context": "3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6286"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6287"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code 3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6288"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6289"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6290"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6291"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6292"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6293"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6294"} {"prompt": "Explain this shell code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This shell code 3 - Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "shell", "type": "code-example_explanation", "context": "3 - Secrets | The public/private key pair must exist beforehand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6295"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6296"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n 
usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6297"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6298"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for 
signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code 3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6299"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6300"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6301"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6302"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code 3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6303"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.limits.hugepages-", "completion": "This text code 5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resource", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. 
For a particular resource, a Pod resource", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6304"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.requests.hugepages-", "completion": "This text code 5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resource", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resource", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6305"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memory", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6306"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code 5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memory", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memory", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6307"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6308"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6309"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.limits.ephemeral-storage", "completion": "This text code 5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6310"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.requests.ephemeral-storage", "completion": "This text code 5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. 
You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6311"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6312"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6313"} {"prompt": "Explain this text code:\n\nLocalStorageCapacityIsolationFSQuotaMonitoring=true", "completion": "This text code 5 - Resource Management for Pods and Containers | Ensure the UserNamespacesSupport feature gate is enabled, and that the kernel, CRI implementation and OCI runtime support user namespaces.", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Ensure the UserNamespacesSupport feature gate is enabled, and that the kernel, CRI implementation and OCI runtime support user namespaces.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6314"} {"prompt": "Explain this bash code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This bash code 5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "language": "bash", "type": "shell-command_explanation", "context": "5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6315"} {"prompt": "Complete this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "shell", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6316"} {"prompt": "Explain this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This shell code 5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "shell", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6317"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6318"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code 5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6319"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n 
name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6320"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code 5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6321"} {"prompt": "Explain this shell code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This shell code 5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "shell", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6322"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message", "completion": " ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6323"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "completion": "This text code 5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6324"} {"prompt": "Complete this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... lines removed for clarity ...]\nNon-terminated Pods: (5 in total)", "completion": " Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6325"} {"prompt": "Explain this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... 
lines removed for clarity ...]\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "completion": "This text code 5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6326"} {"prompt": "Complete this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700", "completion": " Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 
07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6327"} {"prompt": "Explain this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700\n Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s 
kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "completion": "This text code 5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6328"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz", "completion": " name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6329"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz\n name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "completion": "This yaml code 6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6330"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=\nFOO_SERVICE_PORT=", "completion": "This text code 1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "text", "type": "code-example_explanation", "context": "1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6331"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6332"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: 
\"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "completion": "This yaml code 2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6333"} {"prompt": "Complete this yaml code:\n\nconfigmap/configure-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME", "completion": " valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like 
values. What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6334"} {"prompt": "Explain this yaml code:\n\nconfigmap/configure-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6335"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6336"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6337"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6338"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code 2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6339"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6340"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml 
code 2 - ConfigMaps | The following command will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6341"} {"prompt": "Complete this yaml code:\n\nconfigmap/env-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:", "completion": " containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6342"} {"prompt": "Explain this yaml code:\n\nconfigmap/env-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6343"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6344"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6345"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6346"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code 2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6347"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6348"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code 2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6349"} {"prompt": "Complete this yaml code:\n\nsecret/dotfile-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6350"} {"prompt": "Explain this yaml code:\n\nsecret/dotfile-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6351"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6352"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6353"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This bash code 3 - Secrets | The output looks like:", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6354"} {"prompt": "Complete this yaml code:\n\nsecret/serviceaccount-token-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6355"} {"prompt": "Explain this yaml code:\n\nsecret/serviceaccount-token-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6356"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " 
annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6357"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6358"} {"prompt": "Complete this yaml code:\n\nsecret/dockercfg-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6359"} {"prompt": "Explain this yaml code:\n\nsecret/dockercfg-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6360"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6361"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6362"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "bash", "type": "kubectl-command", "context": "3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6363"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This bash code 3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | This command creates a Secret of 
type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6364"} {"prompt": "Explain this bash code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This bash code 3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6365"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6366"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code 3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6367"} {"prompt": "Complete this yaml code:\n\nsecret/basicauth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6368"} {"prompt": "Explain this yaml code:\n\nsecret/basicauth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6369"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6370"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6371"} {"prompt": "Complete this yaml code:\n\nsecret/ssh-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6372"} {"prompt": "Explain this yaml code:\n\nsecret/ssh-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6373"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6374"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6375"} {"prompt": "Complete this yaml code:\n\nsecret/tls-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6376"} {"prompt": "Explain this yaml code:\n\nsecret/tls-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6377"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6378"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6379"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This bash code 3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "bash", "type": "kubectl-command_explanation", "context": "3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6380"} {"prompt": "Complete this yaml code:\n\nsecret/bootstrap-token-secret-base64.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6381"} {"prompt": "Explain this yaml code:\n\nsecret/bootstrap-token-secret-base64.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: 
MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6382"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6383"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | A bootstrap token Secret has the following keys specified 
under data:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6384"} {"prompt": "Complete this yaml code:\n\nsecret/bootstrap-token-secret-literal.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "stringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6385"} {"prompt": "Explain this yaml code:\n\nsecret/bootstrap-token-secret-literal.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code 3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | There are several options 
to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6386"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6387"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code 3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | There are several options to create a Secret:", "source": 
"https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6388"} {"prompt": "Complete this yaml code:\n\nsecret/optional-secret.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6389"} {"prompt": "Explain this yaml code:\n\nsecret/optional-secret.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6390"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "3 - Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6391"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6392"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6393"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code 3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6394"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: 
images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6395"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code 5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6396"} {"prompt": "Complete this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"", "completion": " containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6397"} {"prompt": "Explain this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6398"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6399"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6400"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6401"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6402"} {"prompt": "Explain this text code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This text code 5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6403"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "text", "type": "code-example", "context": "5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6404"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This text code 5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "text", "type": "code-example_explanation", "context": "5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6405"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6406"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code 5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6407"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n 
name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6408"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code 5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6409"} {"prompt": "Explain this bash code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This bash code 5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "bash", "type": "kubectl-command_explanation", "context": "5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6410"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz", "completion": " name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_completion_6411"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz\n name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "completion": "This yaml code 6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/configuration/_print/", "id": "k8s_00221_code_explain_6412"} {"prompt": "Explain this bash code:\n\ndocker run -d 
--restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx", "completion": "This bash code kubectl for Docker Users | kubectl:", "language": "bash", "type": "shell-command_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6413"} {"prompt": "Explain this text code:\n\n55c103fa129692154a7652490236fee9be47d70a8dd562281ae7d2f9a339a6db", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6414"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n55c103fa1296 nginx \"nginx -g 'daemon of…\" 9 seconds ago Up 9 seconds 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6415"} {"prompt": "Explain this text code:\n\n# start the pod running nginx\nkubectl create deployment --image=nginx nginx-app", "completion": "This text code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. 
You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6416"} {"prompt": "Explain this text code:\n\n# add env to nginx-app\nkubectl set env deployment/nginx-app DOMAIN=cluster", "completion": "This text code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6417"} {"prompt": "Explain this text code:\n\n# expose a port through with a service\nkubectl expose deployment nginx-app --port=80 --name=nginx-http", "completion": "This text code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. 
You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6418"} {"prompt": "Explain this bash code:\n\nkubectl run [-i] [--tty] --attach --image=", "completion": "This bash code kubectl for Docker Users | Unlike docker run ..., if you specify --attach, then you attach stdin, stdout and stderr. You cannot control which streams are attached (docker -a ...). To detach from the container, you can type the", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl for Docker Users | Unlike docker run ..., if you specify --attach, then you attach stdin, stdout and stderr. You cannot control which streams are attached (docker -a ...). To detach from the container, you can type the", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6419"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n14636241935f ubuntu:16.04 \"echo test\" 5 seconds ago Exited (0) 5 seconds ago cocky_fermi\n55c103fa1296 nginx \"nginx -g 'daemon of…\" About a minute ago Up About a minute 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6420"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-8df569cb7-4gd89 1/1 Running 0 3m\nubuntu 0/1 Completed 0 20s", "completion": "This text code kubectl for Docker Users | To attach a process that is already running in a container, see kubectl attach.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To attach a process that is already running in a container, see kubectl attach.", "source": 
"https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6421"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n55c103fa1296 nginx \"nginx -g 'daemon of…\" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6422"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-5jyvm 1/1 Running 0 10m", "completion": "This text code kubectl for Docker Users | To detach from the container, you can type the escape sequence Ctrl+P followed by Ctrl+Q.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To detach from the container, you can type the escape sequence Ctrl+P followed by Ctrl+Q.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6423"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n55c103fa1296 nginx \"nginx -g 'daemon of…\" 6 minutes ago Up 6 minutes 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6424"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-5jyvm 1/1 Running 0 10m", "completion": "This text code kubectl for Docker Users | To use interactive commands.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To use interactive commands.", "source": 
"https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6425"} {"prompt": "Explain this text code:\n\n192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.35.0\" \"-\"\n192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.35.0\" \"-\"", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6426"} {"prompt": "Explain this text code:\n\n10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" \"-\"\n10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" \"-\"", "completion": "This text code kubectl for Docker Users | There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. This is similar to the docker run option --res", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. 
This is similar to the docker run option --res", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6427"} {"prompt": "Explain this text code:\n\n10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" \"-\"\n10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" \"-\"", "completion": "This text code kubectl for Docker Users | For more information, see Logging Architecture.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | For more information, see Logging Architecture.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6428"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\na9ec34d98787 nginx \"nginx -g 'daemon of\" 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6429"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-app 1/1 1 1 2m", "completion": "This text code kubectl for Docker Users | There is no direct analog of docker login in kubectl. If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | There is no direct analog of docker login in kubectl. 
If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6430"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-2883164633-aklf7 1/1 Running 0 2m", "completion": "This text code kubectl for Docker Users | There is no direct analog of docker login in kubectl. If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | There is no direct analog of docker login in kubectl. If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6431"} {"prompt": "Complete this text code:\n\nClient version: 1.7.0\nClient API version: 1.19\nGo version (client): go1.4.2\nGit commit (client): 0baf609\nOS/Arch (client): linux/amd64", "completion": "Server version: 1.7.0\nServer API version: 1.19\nGo version (server): go1.4.2\nGit commit (server): 0baf609\nOS/Arch (server): linux/amd64", "language": "text", "type": "code-example", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_completion_6432"} {"prompt": "Explain this text code:\n\nClient version: 1.7.0\nClient API version: 1.19\nGo version (client): go1.4.2\nGit commit (client): 0baf609\nOS/Arch (client): linux/amd64\nServer version: 1.7.0\nServer API version: 1.19\nGo version (server): go1.4.2\nGit commit (server): 0baf609\nOS/Arch (server): linux/amd64", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": 
"https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6433"} {"prompt": "Explain this text code:\n\nClient Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.9+a3d1dfa6f4335\", GitCommit:\"9b77fed11a9843ce3780f70dd251e92901c43072\", GitTreeState:\"dirty\", BuildDate:\"2017-08-29T20:32:58Z\", OpenPaasKubernetesVersion:\"v1.03.02\", GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.9+a3d1dfa6f4335\", GitCommit:\"9b77fed11a9843ce3780f70dd251e92901c43072\", GitTreeState:\"dirty\", BuildDate:\"2017-08-29T20:32:58Z\", OpenPaasKubernetesVersion:\"v1.03.02\", GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}", "completion": "This text code kubectl for Docker Users | To get miscellaneous information about the environment and configuration, see kubectl cluster-info.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To get miscellaneous information about the environment and configuration, see kubectl cluster-info.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6434"} {"prompt": "Complete this text code:\n\nContainers: 40\nImages: 168\nStorage Driver: aufs\n Root Dir: /usr/local/google/docker/aufs\n Backing Filesystem: extfs\n Dirs: 248\n Dirperm1 Supported: false\nExecution Driver: native-0.2", "completion": "Logging Driver: json-file\nKernel Version: 3.13.0-53-generic\nOperating System: Ubuntu 14.04.2 LTS\nCPUs: 12\nTotal Memory: 31.32 GiB\nName: k8s-is-fun.mtv.corp.google.com\nID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO\nWARNING: No swap limit support", "language": "text", "type": "code-example", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_completion_6435"} {"prompt": 
"Explain this text code:\n\nContainers: 40\nImages: 168\nStorage Driver: aufs\n Root Dir: /usr/local/google/docker/aufs\n Backing Filesystem: extfs\n Dirs: 248\n Dirperm1 Supported: false\nExecution Driver: native-0.2\nLogging Driver: json-file\nKernel Version: 3.13.0-53-generic\nOperating System: Ubuntu 14.04.2 LTS\nCPUs: 12\nTotal Memory: 31.32 GiB\nName: k8s-is-fun.mtv.corp.google.com\nID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO\nWARNING: No swap limit support", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6436"} {"prompt": "Complete this text code:\n\nKubernetes master is running at https://203.0.113.141\nKubeDNS is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kube-dns/proxy\nkubernetes-dashboard is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy", "completion": "Grafana is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy\nHeapster is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy\nInfluxDB is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy", "language": "text", "type": "code-example", "context": "kubectl for Docker Users |", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_completion_6437"} {"prompt": "Explain this text code:\n\nKubernetes master is running at https://203.0.113.141\nKubeDNS is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kube-dns/proxy\nkubernetes-dashboard is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy\nGrafana is running at 
https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy\nHeapster is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy\nInfluxDB is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy", "completion": "This text code kubectl for Docker Users |", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users |", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6438"} {"prompt": "Explain this shell code:\n\ndocker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx", "completion": "This shell code kubectl for Docker Users | kubectl:", "language": "shell", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6439"} {"prompt": "Explain this text code:\n\n55c103fa129692154a7652490236fee9be47d70a8dd562281ae7d2f9a339a6db", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6440"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n55c103fa1296 nginx \"nginx -g 'daemon of…\" 9 seconds ago Up 9 seconds 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6441"} {"prompt": "Explain this shell code:\n\n# start the pod running nginx\nkubectl create deployment --image=nginx 
nginx-app", "completion": "This shell code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "shell", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6442"} {"prompt": "Explain this shell code:\n\n# add env to nginx-app\nkubectl set env deployment/nginx-app DOMAIN=cluster", "completion": "This shell code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "shell", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6443"} {"prompt": "Explain this shell code:\n\n# expose a port through with a service\nkubectl expose deployment nginx-app --port=80 --name=nginx-http", "completion": "This shell code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. 
You can also create a service with a sele", "language": "shell", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6444"} {"prompt": "Explain this shell code:\n\nkubectl run [-i] [--tty] --attach --image=", "completion": "This shell code kubectl for Docker Users | Unlike docker run ..., if you specify --attach, then you attach stdin, stdout and stderr. You cannot control which streams are attached (docker -a ...). To detach from the container, you can type the", "language": "shell", "type": "code-example_explanation", "context": "kubectl for Docker Users | Unlike docker run ..., if you specify --attach, then you attach stdin, stdout and stderr. You cannot control which streams are attached (docker -a ...). 
To detach from the container, you can type the", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6445"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n14636241935f ubuntu:16.04 \"echo test\" 5 seconds ago Exited (0) 5 seconds ago cocky_fermi\n55c103fa1296 nginx \"nginx -g 'daemon of…\" About a minute ago Up About a minute 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6446"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-8df569cb7-4gd89 1/1 Running 0 3m\nubuntu 0/1 Completed 0 20s", "completion": "This text code kubectl for Docker Users | To attach a process that is already running in a container, see kubectl attach.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To attach a process that is already running in a container, see kubectl attach.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6447"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n55c103fa1296 nginx \"nginx -g 'daemon of…\" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6448"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-5jyvm 1/1 Running 0 10m", "completion": "This text code kubectl for Docker Users | To detach 
from the container, you can type the escape sequence Ctrl+P followed by Ctrl+Q.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To detach from the container, you can type the escape sequence Ctrl+P followed by Ctrl+Q.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6449"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n55c103fa1296 nginx \"nginx -g 'daemon of…\" 6 minutes ago Up 6 minutes 0.0.0.0:80->80/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6450"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-5jyvm 1/1 Running 0 10m", "completion": "This text code kubectl for Docker Users | To use interactive commands.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To use interactive commands.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6451"} {"prompt": "Explain this text code:\n\n192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.35.0\" \"-\"\n192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.35.0\" \"-\"", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6452"} {"prompt": "Explain this text code:\n\n10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" 
\"-\"\n10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" \"-\"", "completion": "This text code kubectl for Docker Users | There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. This is similar to the docker run option --res", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. This is similar to the docker run option --res", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6453"} {"prompt": "Explain this text code:\n\n10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" \"-\"\n10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.26.0\" \"-\"", "completion": "This text code kubectl for Docker Users | For more information, see Logging Architecture.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | For more information, see Logging Architecture.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6454"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\na9ec34d98787 nginx \"nginx -g 'daemon of\" 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6455"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-app 1/1 1 1 2m", 
"completion": "This text code kubectl for Docker Users | There is no direct analog of docker login in kubectl. If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | There is no direct analog of docker login in kubectl. If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6456"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-app-2883164633-aklf7 1/1 Running 0 2m", "completion": "This text code kubectl for Docker Users | There is no direct analog of docker login in kubectl. If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | There is no direct analog of docker login in kubectl. 
If you are interested in using Kubernetes with a private registry, see Using a Private Registry.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6457"} {"prompt": "Complete this text code:\n\nClient version: 1.7.0\nClient API version: 1.19\nGo version (client): go1.4.2\nGit commit (client): 0baf609\nOS/Arch (client): linux/amd64", "completion": "Server version: 1.7.0\nServer API version: 1.19\nGo version (server): go1.4.2\nGit commit (server): 0baf609\nOS/Arch (server): linux/amd64", "language": "text", "type": "code-example", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_completion_6458"} {"prompt": "Explain this text code:\n\nClient version: 1.7.0\nClient API version: 1.19\nGo version (client): go1.4.2\nGit commit (client): 0baf609\nOS/Arch (client): linux/amd64\nServer version: 1.7.0\nServer API version: 1.19\nGo version (server): go1.4.2\nGit commit (server): 0baf609\nOS/Arch (server): linux/amd64", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6459"} {"prompt": "Explain this text code:\n\nClient Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.9+a3d1dfa6f4335\", GitCommit:\"9b77fed11a9843ce3780f70dd251e92901c43072\", GitTreeState:\"dirty\", BuildDate:\"2017-08-29T20:32:58Z\", OpenPaasKubernetesVersion:\"v1.03.02\", GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.9+a3d1dfa6f4335\", GitCommit:\"9b77fed11a9843ce3780f70dd251e92901c43072\", GitTreeState:\"dirty\", BuildDate:\"2017-08-29T20:32:58Z\", OpenPaasKubernetesVersion:\"v1.03.02\", 
GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}", "completion": "This text code kubectl for Docker Users | To get miscellaneous information about the environment and configuration, see kubectl cluster-info.", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | To get miscellaneous information about the environment and configuration, see kubectl cluster-info.", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6460"} {"prompt": "Complete this text code:\n\nContainers: 40\nImages: 168\nStorage Driver: aufs\n Root Dir: /usr/local/google/docker/aufs\n Backing Filesystem: extfs\n Dirs: 248\n Dirperm1 Supported: false\nExecution Driver: native-0.2", "completion": "Logging Driver: json-file\nKernel Version: 3.13.0-53-generic\nOperating System: Ubuntu 14.04.2 LTS\nCPUs: 12\nTotal Memory: 31.32 GiB\nName: k8s-is-fun.mtv.corp.google.com\nID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO\nWARNING: No swap limit support", "language": "text", "type": "code-example", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_completion_6461"} {"prompt": "Explain this text code:\n\nContainers: 40\nImages: 168\nStorage Driver: aufs\n Root Dir: /usr/local/google/docker/aufs\n Backing Filesystem: extfs\n Dirs: 248\n Dirperm1 Supported: false\nExecution Driver: native-0.2\nLogging Driver: json-file\nKernel Version: 3.13.0-53-generic\nOperating System: Ubuntu 14.04.2 LTS\nCPUs: 12\nTotal Memory: 31.32 GiB\nName: k8s-is-fun.mtv.corp.google.com\nID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO\nWARNING: No swap limit support", "completion": "This text code kubectl for Docker Users | kubectl:", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | kubectl:", "source": 
"https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6462"} {"prompt": "Complete this text code:\n\nKubernetes master is running at https://203.0.113.141\nKubeDNS is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kube-dns/proxy\nkubernetes-dashboard is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy", "completion": "Grafana is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy\nHeapster is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy\nInfluxDB is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy", "language": "text", "type": "code-example", "context": "kubectl for Docker Users |", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_completion_6463"} {"prompt": "Explain this text code:\n\nKubernetes master is running at https://203.0.113.141\nKubeDNS is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kube-dns/proxy\nkubernetes-dashboard is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy\nGrafana is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy\nHeapster is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy\nInfluxDB is running at https://203.0.113.141/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy", "completion": "This text code kubectl for Docker Users |", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users |", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6464"} {"prompt": "Explain this bash code:\n\ndocker run -d --restart=always -e DOMAIN=cluster --name 
nginx-app -p 80:80 nginx", "completion": "This bash code kubectl for Docker Users | kubectl:", "language": "bash", "type": "shell-command_explanation", "context": "kubectl for Docker Users | kubectl:", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6465"} {"prompt": "Explain this text code:\n\n# start the pod running nginx\nkubectl create deployment --image=nginx nginx-app", "completion": "This text code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6466"} {"prompt": "Explain this text code:\n\n# add env to nginx-app\nkubectl set env deployment/nginx-app DOMAIN=cluster", "completion": "This text code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. 
You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6467"} {"prompt": "Explain this text code:\n\n# expose a port through with a service\nkubectl expose deployment nginx-app --port=80 --name=nginx-http", "completion": "This text code kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "language": "text", "type": "code-example_explanation", "context": "kubectl for Docker Users | By using kubectl, you can create a Deployment to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a service with a sele", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6468"} {"prompt": "Explain this bash code:\n\nkubectl run [-i] [--tty] --attach --image=", "completion": "This bash code kubectl for Docker Users | Unlike docker run ..., if you specify --attach, then you attach stdin, stdout and stderr. You cannot control which streams are attached (docker -a ...). To detach from the container, you can type the", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl for Docker Users | Unlike docker run ..., if you specify --attach, then you attach stdin, stdout and stderr. You cannot control which streams are attached (docker -a ...). 
To detach from the container, you can type the", "source": "https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/", "id": "k8s_00222_code_explain_6469"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicyBinding | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicyBinding | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/", "id": "k8s_00223_code_explain_6470"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicyBinding | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicyBinding | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/", "id": "k8s_00223_code_explain_6471"} {"prompt": "Explain this text code:\n\nvalidation.policy.admission.k8s.io/validation_failure", "completion": "This text code ValidatingAdmissionPolicyBinding | Clients should expect to handle additional values by ignoring any values not recognized.", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicyBinding | Clients should expect to handle additional values by ignoring any values not recognized.", "source": 
"https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/", "id": "k8s_00223_code_explain_6472"} {"prompt": "Explain this text code:\n\n\"validation.policy.admission.k8s.io/validation_failure\": \"[{\\\"message\\\": \\\"Invalid value\\\", {\\\"policy\\\": \\\"policy.example.com\\\", {\\\"binding\\\": \\\"policybinding.example.com\\\", {\\\"expressionIndex\\\": \\\"1\\\", {\\\"validationActions\\\": [\\\"Audit\\\"]}]\"", "completion": "This text code ValidatingAdmissionPolicyBinding | Clients should expect to handle additional values by ignoring any values not recognized.", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicyBinding | Clients should expect to handle additional values by ignoring any values not recognized.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/", "id": "k8s_00223_code_explain_6473"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicyBinding | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicyBinding | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/", "id": "k8s_00223_code_explain_6474"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicyBinding | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicyBinding | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-binding-v1/", "id": "k8s_00223_code_explain_6475"} {"prompt": "Complete this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "language": "text", "type": "code-example", "context": "3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6476"} {"prompt": "Explain this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n\n# Optional: per-mode version label that can be used to pin the policy to 
the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "completion": "This text code 3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "text", "type": "code-example_explanation", "context": "3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6477"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "language": "yaml", "type": "service-definition", "context": "4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": 
"https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6479"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",", "completion": " \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "language": "json", "type": "api-response", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6480"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",\n \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "completion": "This json code 8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "language": "json", "type": "api-response_explanation", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6481"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",", "completion": " \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create 
or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6482"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "completion": "This json code 8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "language": "json", "type": "api-response_explanation", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6483"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6484"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code 13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6485"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6486"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6487"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6488"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.privileged", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6489"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.add", "completion": "This text code 2 - Pod Security Standards | 
Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6490"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.add", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6491"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.add", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6492"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.postStart.tcpSocket.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6493"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.preStop.tcpSocket.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6494"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.postStart.httpGet.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": 
"code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6495"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].livenessProbe.tcpSocket.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6496"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].readinessProbe.tcpSocket.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6497"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.postStart.tcpSocket.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6498"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.preStop.tcpSocket.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6499"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.postStart.httpGet.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod 
Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6500"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.preStop.httpGet.host", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6501"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.appArmorProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6502"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.appArmorProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6503"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.appArmorProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6504"} {"prompt": "Explain this text code:\n\nmetadata.annotations[\"container.apparmor.security.beta.kubernetes.io/*\"]", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards 
| Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6505"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6506"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6507"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6508"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.user", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6509"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.user", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": 
"https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6510"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.user", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6511"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.role", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6512"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.role", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6513"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.role", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6514"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.procMount", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": 
"https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6515"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seccompProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6516"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seccompProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6517"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seccompProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6518"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6519"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": 
"https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6520"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6521"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.runAsNonRoot", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6522"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsNonRoot", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6523"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsUser", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6524"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seccompProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": 
"https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6525"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seccompProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6526"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seccompProfile.type", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6527"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.drop", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6528"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.drop", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6529"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.drop", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": 
"https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6530"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.add", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6531"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.add", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6532"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.add", "completion": "This text code 2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6533"} {"prompt": "Complete this yaml code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - Pod Security Admission | Check out Enforce Pod 
Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6534"} {"prompt": "Explain this yaml code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n\n# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "completion": "This yaml code 3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6535"} {"prompt": "Explain this text code:\n\nsystem:serviceaccount:kube-system:replicaset-controller", "completion": "This text code 3 - Pod Security Admission | Updates to the following pod fields are exempt from policy checks, meaning that if a pod update request only changes these fields, it will not be denied even if the pod is in violation of the current", "language": "text", "type": "code-example_explanation", "context": "3 - Pod Security Admission | Updates to the following pod fields are exempt from policy checks, meaning that if a pod update request only changes these fields, it will not be denied even if the pod is in violation of the current", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6536"} {"prompt": "Complete this 
yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "language": "yaml", "type": "service-definition", "context": "4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6537"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "completion": "This yaml code 4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "language": "yaml", "type": "service-definition_explanation", "context": "4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6538"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",", "completion": " \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "language": "json", "type": "api-response", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6539"} {"prompt": "Explain this 
json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",\n \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "completion": "This json code 8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "language": "json", "type": "api-response_explanation", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6540"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",", "completion": " \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6541"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "completion": "This json code 8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "language": "json", "type": "api-response_explanation", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6542"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6543"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code 13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6544"} {"prompt": "Complete this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "language": "text", "type": "code-example", "context": "3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6545"} {"prompt": "Explain this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n\n# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "completion": "This text code 3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "text", "type": "code-example_explanation", "context": "3 - Pod Security Admission | Check out 
Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6546"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "language": "yaml", "type": "service-definition", "context": "4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6547"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "completion": "This yaml code 4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "language": "yaml", "type": "service-definition_explanation", "context": "4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6548"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",", "completion": " \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "language": "json", "type": "api-response", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized 
because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6549"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",\n \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "completion": "This json code 8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "language": "json", "type": "api-response_explanation", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6550"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",", "completion": " \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6551"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "completion": "This json code 8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "language": "json", "type": "api-response_explanation", "context": "8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6552"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_completion_6553"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code 13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/_print/", "id": "k8s_00224_code_explain_6554"} {"prompt": "Complete this text code:\n\n# Linux\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose\n\n# macOS\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-darwin-amd64 -o kompose", "completion": "\n# Windows\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-windows-amd64.exe -o kompose.exe\n\nchmod +x kompose\nsudo mv ./kompose /usr/local/bin/kompose", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6555"} {"prompt": "Explain this text code:\n\n# Linux\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose\n\n# macOS\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-darwin-amd64 -o kompose\n\n# Windows\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-windows-amd64.exe -o kompose.exe\n\nchmod +x kompose\nsudo mv ./kompose /usr/local/bin/kompose", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6556"} {"prompt": "Complete this text code:\n\nservices:\n\n 
redis-leader:\n container_name: redis-leader\n image: redis\n ports:\n - \"6379\"\n\n redis-replica:\n container_name: redis-replica\n image: redis\n ports:", "completion": " - \"6379\"\n command: redis-server --replicaof redis-leader 6379 --dir /tmp\n\n web:\n container_name: web\n image: quay.io/kompose/web\n ports:\n - \"8080:8080\"\n environment:\n - GET_HOSTS_FROM=dns\n labels:\n kompose.service.type: LoadBalancer", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6557"} {"prompt": "Explain this text code:\n\nservices:\n\n redis-leader:\n container_name: redis-leader\n image: redis\n ports:\n - \"6379\"\n\n redis-replica:\n container_name: redis-replica\n image: redis\n ports:\n - \"6379\"\n command: redis-server --replicaof redis-leader 6379 --dir /tmp\n\n web:\n container_name: web\n image: quay.io/kompose/web\n ports:\n - \"8080:8080\"\n environment:\n - GET_HOSTS_FROM=dns\n labels:\n kompose.service.type: LoadBalancer", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6558"} {"prompt": "Complete this text code:\n\nINFO Kubernetes file \"redis-leader-service.yaml\" created\nINFO Kubernetes file 
\"redis-replica-service.yaml\" created\nINFO Kubernetes file \"web-tcp-service.yaml\" created", "completion": "INFO Kubernetes file \"redis-leader-deployment.yaml\" created\nINFO Kubernetes file \"redis-replica-deployment.yaml\" created\nINFO Kubernetes file \"web-deployment.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6559"} {"prompt": "Explain this text code:\n\nINFO Kubernetes file \"redis-leader-service.yaml\" created\nINFO Kubernetes file \"redis-replica-service.yaml\" created\nINFO Kubernetes file \"web-tcp-service.yaml\" created\nINFO Kubernetes file \"redis-leader-deployment.yaml\" created\nINFO Kubernetes file \"redis-replica-deployment.yaml\" created\nINFO Kubernetes file \"web-deployment.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6560"} {"prompt": "Explain this bash code:\n\nkubectl apply -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml", "completion": "This bash code Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6561"} 
{"prompt": "Complete this text code:\n\ndeployment.apps/redis-leader created\ndeployment.apps/redis-replica created\ndeployment.apps/web created", "completion": "service/redis-leader created\nservice/redis-replica created\nservice/web-tcp created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | Your deployments are running in Kubernetes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6562"} {"prompt": "Explain this text code:\n\ndeployment.apps/redis-leader created\ndeployment.apps/redis-replica created\ndeployment.apps/web created\nservice/redis-leader created\nservice/redis-replica created\nservice/web-tcp created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | Your deployments are running in Kubernetes.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Your deployments are running in Kubernetes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6563"} {"prompt": "Complete this text code:\n\nName: web-tcp\n Namespace: default\n Labels: io.kompose.service=web-tcp\n Annotations: kompose.cmd: kompose convert\n kompose.service.type: LoadBalancer\n kompose.version: 1.33.0 (3ce457399)\n Selector: io.kompose.service=web\n Type: LoadBalancer\n IP Family Policy: SingleStack", "completion": " IP Families: IPv4\n IP: 10.102.30.3\n IPs: 10.102.30.3\n Port: 8080 8080/TCP\n TargetPort: 8080/TCP\n NodePort: 8080 31624/TCP\n Endpoints: 10.244.0.5:8080\n Session Affinity: None\n External Traffic Policy: Cluster\n Events: ", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | If you're using a cloud provider, your IP will be listed next to LoadBalancer Ingress.", 
"source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6564"} {"prompt": "Explain this text code:\n\nName: web-tcp\n Namespace: default\n Labels: io.kompose.service=web-tcp\n Annotations: kompose.cmd: kompose convert\n kompose.service.type: LoadBalancer\n kompose.version: 1.33.0 (3ce457399)\n Selector: io.kompose.service=web\n Type: LoadBalancer\n IP Family Policy: SingleStack\n IP Families: IPv4\n IP: 10.102.30.3\n IPs: 10.102.30.3\n Port: 8080 8080/TCP\n TargetPort: 8080/TCP\n NodePort: 8080 31624/TCP\n Endpoints: 10.244.0.5:8080\n Session Affinity: None\n External Traffic Policy: Cluster\n Events: ", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | If you're using a cloud provider, your IP will be listed next to LoadBalancer Ingress.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | If you're using a cloud provider, your IP will be listed next to LoadBalancer Ingress.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6565"} {"prompt": "Explain this bash code:\n\nkubectl delete -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml", "completion": "This bash code Translate a Docker Compose File to Kubernetes Resources | Kompose has support for two providers: OpenShift and Kubernetes. You can choose a targeted provider using global option --provider. If no provider is specified, Kubernetes is set by default.", "language": "bash", "type": "kubectl-command_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Kompose has support for two providers: OpenShift and Kubernetes. You can choose a targeted provider using global option --provider. 
If no provider is specified, Kubernetes is set by default.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6566"} {"prompt": "Complete this text code:\n\nWARN Unsupported key networks - ignoring\nWARN Unsupported key build - ignoring\nINFO Kubernetes file \"worker-svc.yaml\" created\nINFO Kubernetes file \"db-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"result-svc.yaml\" created", "completion": "INFO Kubernetes file \"vote-svc.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nINFO Kubernetes file \"result-deployment.yaml\" created\nINFO Kubernetes file \"vote-deployment.yaml\" created\nINFO Kubernetes file \"worker-deployment.yaml\" created\nINFO Kubernetes file \"db-deployment.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6567"} {"prompt": "Explain this text code:\n\nWARN Unsupported key networks - ignoring\nWARN Unsupported key build - ignoring\nINFO Kubernetes file \"worker-svc.yaml\" created\nINFO Kubernetes file \"db-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"result-svc.yaml\" created\nINFO Kubernetes file \"vote-svc.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nINFO Kubernetes file \"result-deployment.yaml\" created\nINFO Kubernetes file \"vote-deployment.yaml\" created\nINFO Kubernetes file \"worker-deployment.yaml\" created\nINFO Kubernetes file \"db-deployment.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "language": 
"text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6568"} {"prompt": "Explain this text code:\n\ndb-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml\ndb-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6569"} {"prompt": "Explain this text code:\n\nkompose -f docker-compose.yml -f docker-guestbook.yml convert", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. 
Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6570"} {"prompt": "Complete this text code:\n\nINFO Kubernetes file \"frontend-service.yaml\" created \nINFO Kubernetes file \"mlbparks-service.yaml\" created \nINFO Kubernetes file \"mongodb-service.yaml\" created \nINFO Kubernetes file \"redis-master-service.yaml\" created \nINFO Kubernetes file \"redis-slave-service.yaml\" created ", "completion": "INFO Kubernetes file \"frontend-deployment.yaml\" created \nINFO Kubernetes file \"mlbparks-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-claim0-persistentvolumeclaim.yaml\" created\nINFO Kubernetes file \"redis-master-deployment.yaml\" created \nINFO Kubernetes file \"redis-slave-deployment.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. 
Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6571"} {"prompt": "Explain this text code:\n\nINFO Kubernetes file \"frontend-service.yaml\" created \nINFO Kubernetes file \"mlbparks-service.yaml\" created \nINFO Kubernetes file \"mongodb-service.yaml\" created \nINFO Kubernetes file \"redis-master-service.yaml\" created \nINFO Kubernetes file \"redis-slave-service.yaml\" created \nINFO Kubernetes file \"frontend-deployment.yaml\" created \nINFO Kubernetes file \"mlbparks-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-claim0-persistentvolumeclaim.yaml\" created\nINFO Kubernetes file \"redis-master-deployment.yaml\" created \nINFO Kubernetes file \"redis-slave-deployment.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. 
Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6572"} {"prompt": "Complete this text code:\n\nmlbparks-deployment.yaml mongodb-service.yaml redis-slave-service.jsonmlbparks-service.yaml \nfrontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml", "completion": "frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml\nredis-master-deployment.yaml", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6573"} {"prompt": "Explain this text code:\n\nmlbparks-deployment.yaml mongodb-service.yaml redis-slave-service.jsonmlbparks-service.yaml \nfrontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml\nfrontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml\nredis-master-deployment.yaml", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. 
Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6574"} {"prompt": "Explain this text code:\n\nkompose --provider openshift --file docker-voting.yml convert", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6575"} {"prompt": "Complete this text code:\n\nWARN [worker] Service cannot be created because of missing port.\nINFO OpenShift file \"vote-service.yaml\" created \nINFO OpenShift file \"db-service.yaml\" created \nINFO OpenShift file \"redis-service.yaml\" created \nINFO OpenShift file \"result-service.yaml\" created \nINFO OpenShift file \"vote-deploymentconfig.yaml\" created \nINFO OpenShift file \"vote-imagestream.yaml\" created ", "completion": "INFO OpenShift file \"worker-deploymentconfig.yaml\" created \nINFO OpenShift file \"worker-imagestream.yaml\" created \nINFO OpenShift file \"db-deploymentconfig.yaml\" created \nINFO OpenShift file \"db-imagestream.yaml\" created \nINFO OpenShift file \"redis-deploymentconfig.yaml\" created \nINFO OpenShift file \"redis-imagestream.yaml\" created \nINFO OpenShift file \"result-deploymentconfig.yaml\" created \nINFO OpenShift file \"result-imagestream.yaml\" created", 
"language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6576"} {"prompt": "Explain this text code:\n\nWARN [worker] Service cannot be created because of missing port.\nINFO OpenShift file \"vote-service.yaml\" created \nINFO OpenShift file \"db-service.yaml\" created \nINFO OpenShift file \"redis-service.yaml\" created \nINFO OpenShift file \"result-service.yaml\" created \nINFO OpenShift file \"vote-deploymentconfig.yaml\" created \nINFO OpenShift file \"vote-imagestream.yaml\" created \nINFO OpenShift file \"worker-deploymentconfig.yaml\" created \nINFO OpenShift file \"worker-imagestream.yaml\" created \nINFO OpenShift file \"db-deploymentconfig.yaml\" created \nINFO OpenShift file \"db-imagestream.yaml\" created \nINFO OpenShift file \"redis-deploymentconfig.yaml\" created \nINFO OpenShift file \"redis-imagestream.yaml\" created \nINFO OpenShift file \"result-deploymentconfig.yaml\" created \nINFO OpenShift file \"result-imagestream.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. 
By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6577"} {"prompt": "Explain this text code:\n\nkompose --provider openshift --file buildconfig/docker-compose.yml convert", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6578"} {"prompt": "Complete this text code:\n\nWARN [foo] Service cannot be created because of missing port.\nINFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.", "completion": "INFO OpenShift file \"foo-deploymentconfig.yaml\" created \nINFO OpenShift file \"foo-imagestream.yaml\" created \nINFO OpenShift file \"foo-buildconfig.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. 
Also, you can alternatively generate Replica", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6579"} {"prompt": "Explain this text code:\n\nWARN [foo] Service cannot be created because of missing port.\nINFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.\nINFO OpenShift file \"foo-deploymentconfig.yaml\" created \nINFO OpenShift file \"foo-imagestream.yaml\" created \nINFO OpenShift file \"foo-buildconfig.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. 
Also, you can alternatively generate Replica", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6580"} {"prompt": "Complete this text code:\n\nkompose convert -j\nINFO Kubernetes file \"redis-svc.json\" created", "completion": "INFO Kubernetes file \"web-svc.json\" created\nINFO Kubernetes file \"redis-deployment.json\" created\nINFO Kubernetes file \"web-deployment.json\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6581"} {"prompt": "Explain this text code:\n\nkompose convert -j\nINFO Kubernetes file \"redis-svc.json\" created\nINFO Kubernetes file \"web-svc.json\" created\nINFO Kubernetes file \"redis-deployment.json\" created\nINFO Kubernetes file \"web-deployment.json\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6582"} {"prompt": "Complete this text code:\n\nkompose convert --replication-controller\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-replicationcontroller.yaml\" created\nINFO Kubernetes file \"web-replicationcontroller.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml 
files contain the Replication Controller objects. If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6583"} {"prompt": "Explain this text code:\n\nkompose convert --replication-controller\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-replicationcontroller.yaml\" created\nINFO Kubernetes file \"web-replicationcontroller.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain the Replication Controller objects. If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain the Replication Controller objects. 
If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6584"} {"prompt": "Complete this text code:\n\nkompose convert --daemon-set\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-daemonset.yaml\" created\nINFO Kubernetes file \"web-daemonset.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6585"} {"prompt": "Explain this text code:\n\nkompose convert --daemon-set\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-daemonset.yaml\" created\nINFO Kubernetes file \"web-daemonset.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6586"} {"prompt": "Complete this text code:\n\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-deployment.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nchart created in \"./docker-compose/\"", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File 
to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6587"} {"prompt": "Explain this text code:\n\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-deployment.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nchart created in \"./docker-compose/\"", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6588"} {"prompt": "Complete this bash code:\n\ndocker-compose\n├── Chart.yaml\n├── README.md\n└── templates", "completion": " ├── redis-deployment.yaml\n ├── redis-svc.yaml\n ├── web-deployment.yaml\n └── web-svc.yaml", "language": "bash", "type": "shell-command", "context": "Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6589"} {"prompt": "Explain this bash code:\n\ndocker-compose\n├── Chart.yaml\n├── README.md\n└── templates\n ├── redis-deployment.yaml\n ├── redis-svc.yaml\n ├── web-deployment.yaml\n └── web-svc.yaml", "completion": "This bash code Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "language": 
"bash", "type": "shell-command_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6590"} {"prompt": "Complete this text code:\n\nversion: \"2\"\nservices:\n nginx:\n image: nginx\n dockerfile: foobar", "completion": " build: ./foobar\n cap_add:\n - ALL\n container_name: foobar\n labels:\n kompose.service.type: nodeport", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6591"} {"prompt": "Explain this text code:\n\nversion: \"2\"\nservices:\n nginx:\n image: nginx\n dockerfile: foobar\n build: ./foobar\n cap_add:\n - ALL\n container_name: foobar\n labels:\n kompose.service.type: nodeport", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. 
If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6592"} {"prompt": "Complete this text code:\n\nversion: \"2\"\nservices:\n web:\n image: tuna/docker-counter23\n ports:\n - \"5000:5000\"\n links:", "completion": " - redis\n labels:\n kompose.service.expose: \"counter.example.com\"\n redis:\n image: redis:3.0\n ports:\n - \"6379\"", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6593"} {"prompt": "Explain this text code:\n\nversion: \"2\"\nservices:\n web:\n image: tuna/docker-counter23\n ports:\n - \"5000:5000\"\n links:\n - redis\n labels:\n kompose.service.expose: \"counter.example.com\"\n redis:\n image: redis:3.0\n ports:\n - \"6379\"", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6594"} {"prompt": "Complete this text code:\n\nversion: '2'\n\nservices:", "completion": " pival:\n image: perl\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restart: \"on-failure\"", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of 
\"RollingUpdate\" (default). Thi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6595"} {"prompt": "Explain this text code:\n\nversion: '2'\n\nservices:\n pival:\n image: perl\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restart: \"on-failure\"", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). Thi", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). Thi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6596"} {"prompt": "Complete this sh code:\n\n# Linux\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose\n\n# macOS\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-darwin-amd64 -o kompose", "completion": "\n# Windows\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-windows-amd64.exe -o kompose.exe\n\nchmod +x kompose\nsudo mv ./kompose /usr/local/bin/kompose", "language": "sh", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6597"} {"prompt": "Explain this sh code:\n\n# Linux\ncurl -L 
https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose\n\n# macOS\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-darwin-amd64 -o kompose\n\n# Windows\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-windows-amd64.exe -o kompose.exe\n\nchmod +x kompose\nsudo mv ./kompose /usr/local/bin/kompose", "completion": "This sh code Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "language": "sh", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6598"} {"prompt": "Complete this yaml code:\n\nservices:\n\n redis-leader:\n container_name: redis-leader\n image: redis\n ports:\n - \"6379\"\n\n redis-replica:\n container_name: redis-replica\n image: redis\n ports:", "completion": " - \"6379\"\n command: redis-server --replicaof redis-leader 6379 --dir /tmp\n\n web:\n container_name: web\n image: quay.io/kompose/web\n ports:\n - \"8080:8080\"\n environment:\n - GET_HOSTS_FROM=dns\n labels:\n kompose.service.type: LoadBalancer", "language": "yaml", "type": "kubernetes-manifest", "context": "Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6599"} {"prompt": "Explain this yaml code:\n\nservices:\n\n redis-leader:\n container_name: redis-leader\n image: redis\n ports:\n - \"6379\"\n\n redis-replica:\n container_name: redis-replica\n image: redis\n ports:\n - \"6379\"\n command: redis-server --replicaof redis-leader 6379 --dir 
/tmp\n\n web:\n container_name: web\n image: quay.io/kompose/web\n ports:\n - \"8080:8080\"\n environment:\n - GET_HOSTS_FROM=dns\n labels:\n kompose.service.type: LoadBalancer", "completion": "This yaml code Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6600"} {"prompt": "Complete this none code:\n\nINFO Kubernetes file \"redis-leader-service.yaml\" created\nINFO Kubernetes file \"redis-replica-service.yaml\" created\nINFO Kubernetes file \"web-tcp-service.yaml\" created", "completion": "INFO Kubernetes file \"redis-leader-deployment.yaml\" created\nINFO Kubernetes file \"redis-replica-deployment.yaml\" created\nINFO Kubernetes file \"web-deployment.yaml\" created", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6601"} {"prompt": "Explain this none code:\n\nINFO Kubernetes file \"redis-leader-service.yaml\" created\nINFO Kubernetes file \"redis-replica-service.yaml\" created\nINFO Kubernetes file \"web-tcp-service.yaml\" created\nINFO Kubernetes file \"redis-leader-deployment.yaml\" created\nINFO Kubernetes file \"redis-replica-deployment.yaml\" created\nINFO Kubernetes file \"web-deployment.yaml\" created", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | The output is similar 
to:", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6602"} {"prompt": "Explain this bash code:\n\nkubectl apply -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml", "completion": "This bash code Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6603"} {"prompt": "Complete this none code:\n\ndeployment.apps/redis-leader created\ndeployment.apps/redis-replica created\ndeployment.apps/web created", "completion": "service/redis-leader created\nservice/redis-replica created\nservice/web-tcp created", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | Your deployments are running in Kubernetes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6604"} {"prompt": "Explain this none code:\n\ndeployment.apps/redis-leader created\ndeployment.apps/redis-replica created\ndeployment.apps/web created\nservice/redis-leader created\nservice/redis-replica created\nservice/web-tcp created", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | Your deployments are running in Kubernetes.", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Your deployments are running in 
Kubernetes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6605"} {"prompt": "Complete this none code:\n\nName: web-tcp\n Namespace: default\n Labels: io.kompose.service=web-tcp\n Annotations: kompose.cmd: kompose convert\n kompose.service.type: LoadBalancer\n kompose.version: 1.33.0 (3ce457399)\n Selector: io.kompose.service=web\n Type: LoadBalancer\n IP Family Policy: SingleStack", "completion": " IP Families: IPv4\n IP: 10.102.30.3\n IPs: 10.102.30.3\n Port: 8080 8080/TCP\n TargetPort: 8080/TCP\n NodePort: 8080 31624/TCP\n Endpoints: 10.244.0.5:8080\n Session Affinity: None\n External Traffic Policy: Cluster\n Events: ", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | If you're using a cloud provider, your IP will be listed next to LoadBalancer Ingress.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6606"} {"prompt": "Explain this none code:\n\nName: web-tcp\n Namespace: default\n Labels: io.kompose.service=web-tcp\n Annotations: kompose.cmd: kompose convert\n kompose.service.type: LoadBalancer\n kompose.version: 1.33.0 (3ce457399)\n Selector: io.kompose.service=web\n Type: LoadBalancer\n IP Family Policy: SingleStack\n IP Families: IPv4\n IP: 10.102.30.3\n IPs: 10.102.30.3\n Port: 8080 8080/TCP\n TargetPort: 8080/TCP\n NodePort: 8080 31624/TCP\n Endpoints: 10.244.0.5:8080\n Session Affinity: None\n External Traffic Policy: Cluster\n Events: ", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | If you're using a cloud provider, your IP will be listed next to LoadBalancer Ingress.", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | If you're using a cloud provider, your IP will be listed next to LoadBalancer 
Ingress.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6607"} {"prompt": "Explain this sh code:\n\nkubectl delete -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml", "completion": "This sh code Translate a Docker Compose File to Kubernetes Resources | Kompose has support for two providers: OpenShift and Kubernetes. You can choose a targeted provider using global option --provider. If no provider is specified, Kubernetes is set by default.", "language": "sh", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Kompose has support for two providers: OpenShift and Kubernetes. You can choose a targeted provider using global option --provider. If no provider is specified, Kubernetes is set by default.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6608"} {"prompt": "Complete this none code:\n\nWARN Unsupported key networks - ignoring\nWARN Unsupported key build - ignoring\nINFO Kubernetes file \"worker-svc.yaml\" created\nINFO Kubernetes file \"db-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"result-svc.yaml\" created", "completion": "INFO Kubernetes file \"vote-svc.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nINFO Kubernetes file \"result-deployment.yaml\" created\nINFO Kubernetes file \"vote-deployment.yaml\" created\nINFO Kubernetes file \"worker-deployment.yaml\" created\nINFO Kubernetes file \"db-deployment.yaml\" created", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6609"} {"prompt": "Explain this none code:\n\nWARN Unsupported key networks - ignoring\nWARN Unsupported key build - ignoring\nINFO Kubernetes file \"worker-svc.yaml\" created\nINFO Kubernetes file \"db-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"result-svc.yaml\" created\nINFO Kubernetes file \"vote-svc.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nINFO Kubernetes file \"result-deployment.yaml\" created\nINFO Kubernetes file \"vote-deployment.yaml\" created\nINFO Kubernetes file \"worker-deployment.yaml\" created\nINFO Kubernetes file \"db-deployment.yaml\" created", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6610"} {"prompt": "Explain this none code:\n\ndb-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml\ndb-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | You can also provide multiple docker-compose files at the same time:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", 
"id": "k8s_00226_code_explain_6611"} {"prompt": "Explain this shell code:\n\nkompose -f docker-compose.yml -f docker-guestbook.yml convert", "completion": "This shell code Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "language": "shell", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6612"} {"prompt": "Complete this none code:\n\nINFO Kubernetes file \"frontend-service.yaml\" created \nINFO Kubernetes file \"mlbparks-service.yaml\" created \nINFO Kubernetes file \"mongodb-service.yaml\" created \nINFO Kubernetes file \"redis-master-service.yaml\" created \nINFO Kubernetes file \"redis-slave-service.yaml\" created ", "completion": "INFO Kubernetes file \"frontend-deployment.yaml\" created \nINFO Kubernetes file \"mlbparks-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-claim0-persistentvolumeclaim.yaml\" created\nINFO Kubernetes file \"redis-master-deployment.yaml\" created \nINFO Kubernetes file \"redis-slave-deployment.yaml\" created", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. 
Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6613"} {"prompt": "Explain this none code:\n\nINFO Kubernetes file \"frontend-service.yaml\" created \nINFO Kubernetes file \"mlbparks-service.yaml\" created \nINFO Kubernetes file \"mongodb-service.yaml\" created \nINFO Kubernetes file \"redis-master-service.yaml\" created \nINFO Kubernetes file \"redis-slave-service.yaml\" created \nINFO Kubernetes file \"frontend-deployment.yaml\" created \nINFO Kubernetes file \"mlbparks-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-deployment.yaml\" created \nINFO Kubernetes file \"mongodb-claim0-persistentvolumeclaim.yaml\" created\nINFO Kubernetes file \"redis-master-deployment.yaml\" created \nINFO Kubernetes file \"redis-slave-deployment.yaml\" created", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. 
Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6614"} {"prompt": "Complete this none code:\n\nmlbparks-deployment.yaml mongodb-service.yaml redis-slave-service.yaml mlbparks-service.yaml \nfrontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml", "completion": "frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml\nredis-master-deployment.yaml", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6615"} {"prompt": "Explain this none code:\n\nmlbparks-deployment.yaml mongodb-service.yaml redis-slave-service.yaml mlbparks-service.yaml \nfrontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml\nfrontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml\nredis-master-deployment.yaml", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. 
Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6616"} {"prompt": "Explain this sh code:\n\nkompose --provider openshift --file docker-voting.yml convert", "completion": "This sh code Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "language": "sh", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6617"} {"prompt": "Complete this none code:\n\nWARN [worker] Service cannot be created because of missing port.\nINFO OpenShift file \"vote-service.yaml\" created \nINFO OpenShift file \"db-service.yaml\" created \nINFO OpenShift file \"redis-service.yaml\" created \nINFO OpenShift file \"result-service.yaml\" created \nINFO OpenShift file \"vote-deploymentconfig.yaml\" created \nINFO OpenShift file \"vote-imagestream.yaml\" created ", "completion": "INFO OpenShift file \"worker-deploymentconfig.yaml\" created \nINFO OpenShift file \"worker-imagestream.yaml\" created \nINFO OpenShift file \"db-deploymentconfig.yaml\" created \nINFO OpenShift file \"db-imagestream.yaml\" created \nINFO OpenShift file \"redis-deploymentconfig.yaml\" created \nINFO OpenShift file \"redis-imagestream.yaml\" created \nINFO OpenShift file \"result-deploymentconfig.yaml\" created \nINFO OpenShift file \"result-imagestream.yaml\" created", "language": 
"none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6618"} {"prompt": "Explain this none code:\n\nWARN [worker] Service cannot be created because of missing port.\nINFO OpenShift file \"vote-service.yaml\" created \nINFO OpenShift file \"db-service.yaml\" created \nINFO OpenShift file \"redis-service.yaml\" created \nINFO OpenShift file \"result-service.yaml\" created \nINFO OpenShift file \"vote-deploymentconfig.yaml\" created \nINFO OpenShift file \"vote-imagestream.yaml\" created \nINFO OpenShift file \"worker-deploymentconfig.yaml\" created \nINFO OpenShift file \"worker-imagestream.yaml\" created \nINFO OpenShift file \"db-deploymentconfig.yaml\" created \nINFO OpenShift file \"db-imagestream.yaml\" created \nINFO OpenShift file \"redis-deploymentconfig.yaml\" created \nINFO OpenShift file \"redis-imagestream.yaml\" created \nINFO OpenShift file \"result-deploymentconfig.yaml\" created \nINFO OpenShift file \"result-imagestream.yaml\" created", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. 
By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6619"} {"prompt": "Explain this sh code:\n\nkompose --provider openshift --file buildconfig/docker-compose.yml convert", "completion": "This sh code Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "language": "sh", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6620"} {"prompt": "Complete this none code:\n\nWARN [foo] Service cannot be created because of missing port.\nINFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.", "completion": "INFO OpenShift file \"foo-deploymentconfig.yaml\" created \nINFO OpenShift file \"foo-imagestream.yaml\" created \nINFO OpenShift file \"foo-buildconfig.yaml\" created", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. 
Also, you can alternatively generate Replica", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6621"} {"prompt": "Explain this none code:\n\nWARN [foo] Service cannot be created because of missing port.\nINFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.\nINFO OpenShift file \"foo-deploymentconfig.yaml\" created \nINFO OpenShift file \"foo-imagestream.yaml\" created \nINFO OpenShift file \"foo-buildconfig.yaml\" created", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. 
Also, you can alternatively generate Replica", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6622"} {"prompt": "Complete this sh code:\n\nkompose convert -j\nINFO Kubernetes file \"redis-svc.json\" created", "completion": "INFO Kubernetes file \"web-svc.json\" created\nINFO Kubernetes file \"redis-deployment.json\" created\nINFO Kubernetes file \"web-deployment.json\" created", "language": "sh", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6623"} {"prompt": "Explain this sh code:\n\nkompose convert -j\nINFO Kubernetes file \"redis-svc.json\" created\nINFO Kubernetes file \"web-svc.json\" created\nINFO Kubernetes file \"redis-deployment.json\" created\nINFO Kubernetes file \"web-deployment.json\" created", "completion": "This sh code Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "language": "sh", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6624"} {"prompt": "Complete this sh code:\n\nkompose convert --replication-controller\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-replicationcontroller.yaml\" created\nINFO Kubernetes file \"web-replicationcontroller.yaml\" created", "language": "sh", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain 
the Replication Controller objects. If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6625"} {"prompt": "Explain this sh code:\n\nkompose convert --replication-controller\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-replicationcontroller.yaml\" created\nINFO Kubernetes file \"web-replicationcontroller.yaml\" created", "completion": "This sh code Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain the Replication Controller objects. If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "language": "sh", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain the Replication Controller objects. 
If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6626"} {"prompt": "Explain this text code:\n\nkompose convert --replication-controller --replicas 3", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6627"} {"prompt": "Complete this shell code:\n\nkompose convert --daemon-set\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-daemonset.yaml\" created\nINFO Kubernetes file \"web-daemonset.yaml\" created", "language": "shell", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6628"} {"prompt": "Explain this shell code:\n\nkompose convert --daemon-set\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-daemonset.yaml\" created\nINFO Kubernetes file \"web-daemonset.yaml\" created", "completion": "This shell code Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "language": "shell", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The 
*-daemonset.yaml files contain the DaemonSet objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6629"} {"prompt": "Complete this none code:\n\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-deployment.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nchart created in \"./docker-compose/\"", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6630"} {"prompt": "Explain this none code:\n\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-deployment.yaml\" created\nINFO Kubernetes file \"redis-deployment.yaml\" created\nchart created in \"./docker-compose/\"", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6631"} {"prompt": "Complete this none code:\n\ndocker-compose\n├── Chart.yaml\n├── README.md\n└── templates", "completion": " ├── redis-deployment.yaml\n ├── redis-svc.yaml\n ├── web-deployment.yaml\n └── web-svc.yaml", "language": "none", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The chart structure is 
aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6632"} {"prompt": "Explain this none code:\n\ndocker-compose\n├── Chart.yaml\n├── README.md\n└── templates\n ├── redis-deployment.yaml\n ├── redis-svc.yaml\n ├── web-deployment.yaml\n └── web-svc.yaml", "completion": "This none code Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "language": "none", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The chart structure is aimed at providing a skeleton for building your Helm charts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6633"} {"prompt": "Complete this yaml code:\n\nversion: \"2\"\nservices:\n nginx:\n image: nginx\n dockerfile: foobar", "completion": " build: ./foobar\n cap_add:\n - ALL\n container_name: foobar\n labels:\n kompose.service.type: nodeport", "language": "yaml", "type": "kubernetes-manifest", "context": "Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. 
If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6634"} {"prompt": "Explain this yaml code:\n\nversion: \"2\"\nservices:\n nginx:\n image: nginx\n dockerfile: foobar\n build: ./foobar\n cap_add:\n - ALL\n container_name: foobar\n labels:\n kompose.service.type: nodeport", "completion": "This yaml code Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6635"} {"prompt": "Complete this yaml code:\n\nversion: \"2\"\nservices:\n web:\n image: tuna/docker-counter23\n ports:\n - \"5000:5000\"\n links:", "completion": " - redis\n labels:\n kompose.service.expose: \"counter.example.com\"\n redis:\n image: redis:3.0\n ports:\n - \"6379\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6636"} {"prompt": "Explain this yaml code:\n\nversion: \"2\"\nservices:\n web:\n image: tuna/docker-counter23\n ports:\n - \"5000:5000\"\n links:\n - redis\n labels:\n kompose.service.expose: 
\"counter.example.com\"\n redis:\n image: redis:3.0\n ports:\n - \"6379\"", "completion": "This yaml code Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6637"} {"prompt": "Complete this yaml code:\n\nversion: '2'\n\nservices:", "completion": " pival:\n image: perl\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restart: \"on-failure\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). Thi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6638"} {"prompt": "Explain this yaml code:\n\nversion: '2'\n\nservices:\n pival:\n image: perl\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restart: \"on-failure\"", "completion": "This yaml code Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). Thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). 
Thi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6639"} {"prompt": "Complete this text code:\n\n# Linux\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose\n\n# macOS\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-darwin-amd64 -o kompose", "completion": "\n# Windows\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-windows-amd64.exe -o kompose.exe\n\nchmod +x kompose\nsudo mv ./kompose /usr/local/bin/kompose", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6640"} {"prompt": "Explain this text code:\n\n# Linux\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose\n\n# macOS\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-darwin-amd64 -o kompose\n\n# Windows\ncurl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-windows-amd64.exe -o kompose.exe\n\nchmod +x kompose\nsudo mv ./kompose /usr/local/bin/kompose", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Alternatively, you can download the tarball.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6641"} {"prompt": "Complete this text code:\n\nservices:\n\n redis-leader:\n container_name: redis-leader\n image: redis\n ports:\n - \"6379\"\n\n redis-replica:\n container_name: 
redis-replica\n image: redis\n ports:", "completion": " - \"6379\"\n command: redis-server --replicaof redis-leader 6379 --dir /tmp\n\n web:\n container_name: web\n image: quay.io/kompose/web\n ports:\n - \"8080:8080\"\n environment:\n - GET_HOSTS_FROM=dns\n labels:\n kompose.service.type: LoadBalancer", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6642"} {"prompt": "Explain this text code:\n\nservices:\n\n redis-leader:\n container_name: redis-leader\n image: redis\n ports:\n - \"6379\"\n\n redis-replica:\n container_name: redis-replica\n image: redis\n ports:\n - \"6379\"\n command: redis-server --replicaof redis-leader 6379 --dir /tmp\n\n web:\n container_name: web\n image: quay.io/kompose/web\n ports:\n - \"8080:8080\"\n environment:\n - GET_HOSTS_FROM=dns\n labels:\n kompose.service.type: LoadBalancer", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | To convert the docker-compose.yml file to files that you can use with kubectl, run kompose convert and then kubectl apply -f .", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6643"} {"prompt": "Explain this bash code:\n\nkubectl apply -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml", "completion": "This bash 
code Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6644"}
{"prompt": "Explain this bash code:\n\nkubectl delete -f web-tcp-service.yaml,redis-leader-service.yaml,redis-replica-service.yaml,web-deployment.yaml,redis-leader-deployment.yaml,redis-replica-deployment.yaml", "completion": "This bash code Translate a Docker Compose File to Kubernetes Resources | Kompose has support for two providers: OpenShift and Kubernetes. You can choose a targeted provider using global option --provider. If no provider is specified, Kubernetes is set by default.", "language": "bash", "type": "kubectl-command_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | Kompose has support for two providers: OpenShift and Kubernetes. You can choose a targeted provider using global option --provider. If no provider is specified, Kubernetes is set by default.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6645"}
{"prompt": "Explain this text code:\n\nkompose -f docker-compose.yml -f docker-guestbook.yml convert", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6646"}
{"prompt": "Explain this text code:\n\nkompose --provider openshift --file docker-voting.yml convert", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch f", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6647"}
{"prompt": "Explain this text code:\n\nkompose --provider openshift --file buildconfig/docker-compose.yml convert", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The default kompose transformation will generate Kubernetes Deployments and Services, in yaml format. You have alternative option to generate json with -j. Also, you can alternatively generate Replica", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6648"}
{"prompt": "Complete this text code:\n\nkompose convert -j\nINFO Kubernetes file \"redis-svc.json\" created", "completion": "INFO Kubernetes file \"web-svc.json\" created\nINFO Kubernetes file \"redis-deployment.json\" created\nINFO Kubernetes file \"web-deployment.json\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6649"}
{"prompt": "Explain this text code:\n\nkompose convert -j\nINFO Kubernetes file \"redis-svc.json\" created\nINFO Kubernetes file \"web-svc.json\" created\nINFO Kubernetes file \"redis-deployment.json\" created\nINFO Kubernetes file \"web-deployment.json\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-deployment.json files contain the Deployment objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6650"}
{"prompt": "Complete this text code:\n\nkompose convert --replication-controller\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-replicationcontroller.yaml\" created\nINFO Kubernetes file \"web-replicationcontroller.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain the Replication Controller objects. If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6651"}
{"prompt": "Explain this text code:\n\nkompose convert --replication-controller\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-replicationcontroller.yaml\" created\nINFO Kubernetes file \"web-replicationcontroller.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain the Replication Controller objects. If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-replicationcontroller.yaml files contain the Replication Controller objects. If you want to specify replicas (default is 1), use --replicas flag: kompose convert --replication-controller --repli", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6652"}
{"prompt": "Complete this text code:\n\nkompose convert --daemon-set\nINFO Kubernetes file \"redis-svc.yaml\" created", "completion": "INFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-daemonset.yaml\" created\nINFO Kubernetes file \"web-daemonset.yaml\" created", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6653"}
{"prompt": "Explain this text code:\n\nkompose convert --daemon-set\nINFO Kubernetes file \"redis-svc.yaml\" created\nINFO Kubernetes file \"web-svc.yaml\" created\nINFO Kubernetes file \"redis-daemonset.yaml\" created\nINFO Kubernetes file \"web-daemonset.yaml\" created", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The *-daemonset.yaml files contain the DaemonSet objects.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6654"}
{"prompt": "Complete this text code:\n\nversion: \"2\"\nservices:\n nginx:\n image: nginx\n dockerfile: foobar", "completion": " build: ./foobar\n cap_add:\n - ALL\n container_name: foobar\n labels:\n kompose.service.type: nodeport", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6655"}
{"prompt": "Explain this text code:\n\nversion: \"2\"\nservices:\n nginx:\n image: nginx\n dockerfile: foobar\n build: ./foobar\n cap_add:\n - ALL\n container_name: foobar\n labels:\n kompose.service.type: nodeport", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | kompose.service.expose defines if the service needs to be made accessible from outside the cluster or not. If the value is set to \"true\", the provider sets the endpoint automatically, and for any othe", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6656"}
{"prompt": "Complete this text code:\n\nversion: \"2\"\nservices:\n web:\n image: tuna/docker-counter23\n ports:\n - \"5000:5000\"\n links:", "completion": " - redis\n labels:\n kompose.service.expose: \"counter.example.com\"\n redis:\n image: redis:3.0\n ports:\n - \"6379\"", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6657"}
{"prompt": "Explain this text code:\n\nversion: \"2\"\nservices:\n web:\n image: tuna/docker-counter23\n ports:\n - \"5000:5000\"\n links:\n - redis\n labels:\n kompose.service.expose: \"counter.example.com\"\n redis:\n image: redis:3.0\n ports:\n - \"6379\"", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | The currently supported options are:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6658"}
{"prompt": "Complete this text code:\n\nversion: '2'\n\nservices:", "completion": " pival:\n image: perl\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restart: \"on-failure\"", "language": "text", "type": "code-example", "context": "Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). Thi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_completion_6659"}
{"prompt": "Explain this text code:\n\nversion: '2'\n\nservices:\n pival:\n image: perl\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restart: \"on-failure\"", "completion": "This text code Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). Thi", "language": "text", "type": "code-example_explanation", "context": "Translate a Docker Compose File to Kubernetes Resources | If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to \"Recreate\" instead of \"RollingUpdate\" (default). Thi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/", "id": "k8s_00226_code_explain_6660"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME\n valueFrom:", "completion": " fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6661"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6662"}
{"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-pod.yaml", "completion": "This bash code Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6663"}
{"prompt": "Explain this text code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6664"}
{"prompt": "Complete this text code:\n\nminikube\ndapi-envars-fieldref", "completion": "default\n172.17.0.4\ndefault", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Environment Variables | To see why these values are in the log, look at the command and args fields in the configuration file. When the container starts, it writes the values of five environment variables to stdout. It repea", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6665"}
{"prompt": "Explain this text code:\n\nminikube\ndapi-envars-fieldref\ndefault\n172.17.0.4\ndefault", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | To see why these values are in the log, look at the command and args fields in the configuration file. When the container starts, it writes the values of five environment variables to stdout. It repea", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | To see why these values are in the log, look at the command and args fields in the configuration file. When the container starts, it writes the values of five environment variables to stdout. It repea", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6666"}
{"prompt": "Explain this text code:\n\n# Run this in a shell inside the container\nprintenv", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | The output shows that certain environment variables have been assigned the values of Pod fields:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | The output shows that certain environment variables have been assigned the values of Pod fields:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6667"}
{"prompt": "Complete this text code:\n\nMY_POD_SERVICE_ACCOUNT=default\n...\nMY_POD_NAMESPACE=default\nMY_POD_IP=172.17.0.4", "completion": "...\nMY_NODE_NAME=minikube\n...\nMY_POD_NAME=dapi-envars-fieldref", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Environment Variables | In the preceding exercise, you used information from Pod-level fields as the values for environment variables. In this next exercise, you are going to pass fields that are part of the Pod definition,", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6668"}
{"prompt": "Explain this text code:\n\nMY_POD_SERVICE_ACCOUNT=default\n...\nMY_POD_NAMESPACE=default\nMY_POD_IP=172.17.0.4\n...\nMY_NODE_NAME=minikube\n...\nMY_POD_NAME=dapi-envars-fieldref", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | In the preceding exercise, you used information from Pod-level fields as the values for environment variables. In this next exercise, you are going to pass fields that are part of the Pod definition,", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In the preceding exercise, you used information from Pod-level fields as the values for environment variables. In this next exercise, you are going to pass fields that are part of the Pod definition,", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6669"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6670"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6671"}
{"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-container.yaml", "completion": "This bash code Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6672"}
{"prompt": "Explain this text code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6673"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME\n valueFrom:", "completion": " fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6674"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6675"}
{"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-pod.yaml", "completion": "This shell code Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6676"}
{"prompt": "Explain this shell code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This shell code Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6677"}
{"prompt": "Complete this text code:\n\nminikube\ndapi-envars-fieldref", "completion": "default\n172.17.0.4\ndefault", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Environment Variables | To see why these values are in the log, look at the command and args fields in the configuration file. When the container starts, it writes the values of five environment variables to stdout. It repea", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6678"}
{"prompt": "Explain this text code:\n\nminikube\ndapi-envars-fieldref\ndefault\n172.17.0.4\ndefault", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | To see why these values are in the log, look at the command and args fields in the configuration file. When the container starts, it writes the values of five environment variables to stdout. It repea", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | To see why these values are in the log, look at the command and args fields in the configuration file. When the container starts, it writes the values of five environment variables to stdout. It repea", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6679"}
{"prompt": "Explain this shell code:\n\n# Run this in a shell inside the container\nprintenv", "completion": "This shell code Expose Pod Information to Containers Through Environment Variables | The output shows that certain environment variables have been assigned the values of Pod fields:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | The output shows that certain environment variables have been assigned the values of Pod fields:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6680"}
{"prompt": "Complete this text code:\n\nMY_POD_SERVICE_ACCOUNT=default\n...\nMY_POD_NAMESPACE=default\nMY_POD_IP=172.17.0.4", "completion": "...\nMY_NODE_NAME=minikube\n...\nMY_POD_NAME=dapi-envars-fieldref", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Environment Variables | In the preceding exercise, you used information from Pod-level fields as the values for environment variables. In this next exercise, you are going to pass fields that are part of the Pod definition,", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6681"}
{"prompt": "Explain this text code:\n\nMY_POD_SERVICE_ACCOUNT=default\n...\nMY_POD_NAMESPACE=default\nMY_POD_IP=172.17.0.4\n...\nMY_NODE_NAME=minikube\n...\nMY_POD_NAME=dapi-envars-fieldref", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | In the preceding exercise, you used information from Pod-level fields as the values for environment variables. In this next exercise, you are going to pass fields that are part of the Pod definition,", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In the preceding exercise, you used information from Pod-level fields as the values for environment variables. In this next exercise, you are going to pass fields that are part of the Pod definition,", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6682"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6683"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6684"}
{"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-container.yaml", "completion": "This shell code Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6685"}
{"prompt": "Explain this shell code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This shell code Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6686"}
{"prompt": "Complete this yaml code:\n\npods/inject/dapi-envars-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME", "completion": " valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6687"}
{"prompt": "Explain this yaml code:\n\npods/inject/dapi-envars-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. 
The first element in the array specifies that the MY_NODE_NAME environment vari", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6688"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME\n valueFrom:", "completion": " fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. 
The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6689"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-fieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;\n printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;\n sleep 10;\n done;\n env:\n - name: MY_NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: MY_POD_NAME\n valueFrom:\n fieldRef:\n fieldPath: metadata.name\n - name: MY_POD_NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n - name: MY_POD_SERVICE_ACCOUNT\n valueFrom:\n fieldRef:\n fieldPath: spec.serviceAccountName\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_NODE_NAME environment vari", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In that manifest, you can see five environment variables. The env field is an array of environment variable definitions. 
The first element in the array specifies that the MY_NODE_NAME environment vari", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6690"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-pod.yaml", "completion": "This bash code Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6691"} {"prompt": "Explain this text code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6692"} {"prompt": "Explain this text code:\n\n# Run this in a shell inside the container\nprintenv", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | The output shows that certain environment variables have been assigned the values of Pod fields:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | The output shows that certain environment variables have been assigned the values of Pod fields:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6693"} {"prompt": "Complete this yaml code:\n\npods/inject/dapi-envars-container.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. 
The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6694"} {"prompt": "Explain this yaml code:\n\npods/inject/dapi-envars-container.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. 
The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6695"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. 
The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_completion_6696"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-envars-resourcefieldref\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n printenv MY_CPU_REQUEST MY_CPU_LIMIT;\n printenv MY_MEM_REQUEST MY_MEM_LIMIT;\n sleep 10;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n env:\n - name: MY_CPU_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.cpu\n - name: MY_CPU_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.cpu\n - name: MY_MEM_REQUEST\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: requests.memory\n - name: MY_MEM_LIMIT\n valueFrom:\n resourceFieldRef:\n containerName: test-container\n resource: limits.memory\n restartPolicy: Never", "completion": "This yaml code Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. The first element in the array specifies that the MY_CPU_REQUEST environment va", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | In this manifest, you can see four environment variables. The env field is an array of environment variable definitions. 
The first element in the array specifies that the MY_CPU_REQUEST environment va", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6697"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-container.yaml", "completion": "This bash code Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6698"} {"prompt": "Explain this text code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This text code Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Environment Variables | View the container's logs:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/", "id": "k8s_00227_code_explain_6699"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type for more details.", "language": "text", "type": "code-example_explanation", "context": "kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type for more details.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-admission.v1/", "id": 
"k8s_00229_code_explain_6700"} {"prompt": "Explain this yaml code:\n\nkind: {group:\"apps\", version:\"v1\", kind:\"Deployment\"}", "completion": "This yaml code kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type for more details.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type for more details.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-admission.v1/", "id": "k8s_00229_code_explain_6701"} {"prompt": "Explain this yaml code:\n\nrequestKind: {group:\"apps\", version:\"v1beta1\", kind:\"Deployment\"}", "completion": "This yaml code kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type for more details.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type for more details.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-admission.v1/", "id": "k8s_00229_code_explain_6702"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type.", "language": "text", "type": "code-example_explanation", "context": "kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-admission.v1/", "id": "k8s_00229_code_explain_6703"} {"prompt": "Explain this text code:\n\nresource: {group:\"apps\", version:\"v1\", resource:\"deployments\"}", "completion": "This text code kube-apiserver Admission (v1) | See documentation for 
the \"matchPolicy\" field in the webhook configuration type.", "language": "text", "type": "code-example_explanation", "context": "kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-admission.v1/", "id": "k8s_00229_code_explain_6704"} {"prompt": "Explain this text code:\n\nrequestResource: {group:\"apps\", version:\"v1beta1\", resource:\"deployments\"}", "completion": "This text code kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type.", "language": "text", "type": "code-example_explanation", "context": "kube-apiserver Admission (v1) | See documentation for the \"matchPolicy\" field in the webhook configuration type.", "source": "https://kubernetes.io/docs/reference/config-api/apiserver-admission.v1/", "id": "k8s_00229_code_explain_6705"} {"prompt": "Complete this text code:\n\n# sysctl params required by setup, params persist across reboots\ncat < /etc/containerd/config.toml", "completion": "This text code Container Runtimes | If you apply this change, make sure to restart containerd:", "language": "text", "type": "code-example_explanation", "context": "Container Runtimes | If you apply this change, make sure to restart containerd:", "source": "https://kubernetes.io/docs/setup/production-environment/container-runtimes/", "id": "k8s_00233_code_explain_6725"} {"prompt": "Explain this toml code:\n\n[plugins.\"io.containerd.grpc.v1.cri\"]\n sandbox_image = \"registry.k8s.io/pause:3.10\"", "completion": "This toml code Container Runtimes | You might need to restart containerd as well once you've updated the config file: systemctl restart containerd.", "language": "toml", "type": "code-example_explanation", "context": "Container Runtimes | You might need to restart containerd as well once you've updated the config file: systemctl restart containerd.", "source": 
"https://kubernetes.io/docs/setup/production-environment/container-runtimes/", "id": "k8s_00233_code_explain_6726"} {"prompt": "Explain this toml code:\n\n[crio.runtime]\nconmon_cgroup = \"pod\"\ncgroup_manager = \"cgroupfs\"", "completion": "This toml code Container Runtimes | You should also note the changed conmon_cgroup, which has to be set to the value pod when using CRI-O with cgroupfs. It is generally necessary to keep the cgroup driver configuration of the kubelet (u", "language": "toml", "type": "code-example_explanation", "context": "Container Runtimes | You should also note the changed conmon_cgroup, which has to be set to the value pod when using CRI-O with cgroupfs. It is generally necessary to keep the cgroup driver configuration of the kubelet (u", "source": "https://kubernetes.io/docs/setup/production-environment/container-runtimes/", "id": "k8s_00233_code_explain_6727"} {"prompt": "Explain this toml code:\n\n[crio.image]\npause_image=\"registry.k8s.io/pause:3.10\"", "completion": "This toml code Container Runtimes | This config option supports live configuration reload to apply this change: systemctl reload crio or by sending SIGHUP to the crio process.", "language": "toml", "type": "code-example_explanation", "context": "Container Runtimes | This config option supports live configuration reload to apply this change: systemctl reload crio or by sending SIGHUP to the crio process.", "source": "https://kubernetes.io/docs/setup/production-environment/container-runtimes/", "id": "k8s_00233_code_explain_6728"} {"prompt": "Complete this text code:\n\n# sysctl params required by setup, params persist across reboots\ncat < kubelet.conf", "completion": "This text code Troubleshooting kubeadm | Copy this resulted kubelet.conf to /etc/kubernetes/kubelet.conf on the failed node.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | Copy this resulted kubelet.conf to /etc/kubernetes/kubelet.conf on the failed node.", 
"source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6792"} {"prompt": "Explain this yaml code:\n\nclient-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem\nclient-key: /var/lib/kubelet/pki/kubelet-client-current.pem", "completion": "This yaml code Troubleshooting kubeadm | Restart the kubelet.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting kubeadm | Restart the kubelet.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6793"} {"prompt": "Explain this sh code:\n\nError from server (NotFound): the server could not find the requested resource", "completion": "This sh code Troubleshooting kubeadm | If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.", "language": "sh", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6794"} {"prompt": "Explain this console code:\n\nError from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host", "completion": "This console code Troubleshooting kubeadm | This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.", "language": "console", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine 
provider.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6795"} {"prompt": "Explain this sh code:\n\ncurl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address", "completion": "This sh code Troubleshooting kubeadm | The workaround is to tell kubelet which IP to use using --node-ip. When using DigitalOcean, it can be the public one (assigned to eth0) or the private one (assigned to eth1) should you want to use the", "language": "sh", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | The workaround is to tell kubelet which IP to use using --node-ip. When using DigitalOcean, it can be the public one (assigned to eth0) or the private one (assigned to eth1) should you want to use the", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6796"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get deployment coredns -o yaml | \\\n sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \\\n kubectl apply -f -", "completion": "This bash code Troubleshooting kubeadm | Another cause for CoreDNS to have CrashLoopBackOff is when a CoreDNS Pod deployed in Kubernetes detects a loop. A number of workarounds are available to avoid Kubernetes trying to restart the CoreDNS", "language": "bash", "type": "kubectl-command_explanation", "context": "Troubleshooting kubeadm | Another cause for CoreDNS to have CrashLoopBackOff is when a CoreDNS Pod deployed in Kubernetes detects a loop. 
A number of workarounds are available to avoid Kubernetes trying to restart the CoreDNS", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6797"} {"prompt": "Explain this text code:\n\nrpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused \"process_linux.go:110: decoding init error from pipe caused \\\"read parent: connection reset by peer\\\"\"", "completion": "This text code Troubleshooting kubeadm | This issue appears if you run CentOS 7 with Docker 1.13.1.84. This version of Docker can prevent the kubelet from executing into the etcd container.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | This issue appears if you run CentOS 7 with Docker 1.13.1.84. This version of Docker can prevent the kubelet from executing into the etcd container.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6798"} {"prompt": "Explain this text code:\n\nyum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64", "completion": "This text code Troubleshooting kubeadm | Install one of the more recent recommended versions, such as 18.06:", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | Install one of the more recent recommended versions, such as 18.06:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6799"} {"prompt": "Explain this bash code:\n\nsudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo\nyum install docker-ce-18.06.1.ce-3.el7.x86_64", "completion": "This bash code Troubleshooting kubeadm | kubeadm init flags such 
as --component-extra-args allow you to pass custom arguments to a control-plane component like the kube-apiserver. However, this mechanism is limited due to the underlying type", "language": "bash", "type": "shell-command_explanation", "context": "Troubleshooting kubeadm | kubeadm init flags such as --component-extra-args allow you to pass custom arguments to a control-plane component like the kube-apiserver. However, this mechanism is limited due to the underlying type", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6800"} {"prompt": "Explain this text code:\n\n--apiserver-extra-args \"enable-admission-plugins=LimitRanger,NamespaceExists\"", "completion": "This text code Troubleshooting kubeadm | Alternatively, you can try separating the key=value pairs like so: --apiserver-extra-args \"enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists\" but this will result in the ke", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | Alternatively, you can try separating the key=value pairs like so: --apiserver-extra-args \"enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists\" but this will result in the ke", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6801"} {"prompt": "Explain this text code:\n\n--apiserver-extra-args \"enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists\"", "completion": "This text code Troubleshooting kubeadm | A known workaround is to use the kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | A known workaround is to use the kubeadm configuration file.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": 
"k8s_00236_code_explain_6802"} {"prompt": "Explain this text code:\n\nserver.go:610] Failed to retrieve node IP: host IP unknown; known addresses: []\nproxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP", "completion": "This text code Troubleshooting kubeadm | A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane nodes regardless of their conditions, keeping it off of other nodes until their initial guarding condition", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane nodes regardless of their conditions, keeping it off of other nodes until their initial guarding condition", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6803"} {"prompt": "Complete this bash code:\n\nkubectl -n kube-system patch ds kube-proxy -p='{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },", "completion": " {\n \"effect\": \"NoSchedule\",\n \"key\": \"node-role.kubernetes.io/control-plane\"\n }\n ]\n }\n }\n }\n}'", "language": "bash", "type": "kubectl-command", "context": "Troubleshooting kubeadm | The tracking issue for this problem is here.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6804"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system patch ds kube-proxy -p='{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"effect\": \"NoSchedule\",\n \"key\": \"node-role.kubernetes.io/control-plane\"\n }\n ]\n }\n }\n }\n}'", "completion": "This bash code Troubleshooting kubeadm | The tracking issue for this 
problem is here.", "language": "bash", "type": "kubectl-command_explanation", "context": "Troubleshooting kubeadm | The tracking issue for this problem is here.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6805"} {"prompt": "Explain this text code:\n\n/usr/libexec/kubernetes/kubelet-plugins/volume/exec/", "completion": "This text code Troubleshooting kubeadm | To workaround this issue, you can configure the flex-volume directory using the kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | To workaround this issue, you can configure the flex-volume directory using the kubeadm configuration file.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6806"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nnodeRegistration:\n kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "completion": "---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:\n - name: \"flex-volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting kubeadm | On joining Nodes:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6807"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nnodeRegistration:\n kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:\n - name: 
\"flex-volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "completion": "This yaml code Troubleshooting kubeadm | On joining Nodes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting kubeadm | On joining Nodes:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6808"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\nnodeRegistration:", "completion": " kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting kubeadm | Alternatively, you can modify /etc/fstab to make the /usr mount writeable, but please be advised that this is modifying a design principle of the Linux distribution.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6809"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\nnodeRegistration:\n kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "completion": "This yaml code Troubleshooting kubeadm | Alternatively, you can modify /etc/fstab to make the /usr mount writeable, but please be advised that this is modifying a design principle of the Linux distribution.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting kubeadm | Alternatively, you can modify /etc/fstab to make the /usr mount writeable, but please be advised that this is modifying a design principle of the Linux distribution.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6810"} {"prompt": "Explain this 
text code:\n\nx509: certificate signed by unknown authority\nx509: certificate is valid for IP-foo not IP-bar", "completion": "This text code Troubleshooting kubeadm | See Enabling signed kubelet serving certificates to understand how to configure the kubelets in a kubeadm cluster to have properly signed serving certificates.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | See Enabling signed kubelet serving certificates to understand how to configure the kubelets in a kubeadm cluster to have properly signed serving certificates.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6811"} {"prompt": "Complete this json code:\n\n[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition\n[upgrade/etcd] Waiting for previous etcd to become available\nI0907 10:10:09.109104 3704 etcd.go:588] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.6:2379/ https://172.17.0.4:2379/ https://172.17.0.3:2379/]) are available 1/10\n[upgrade/etcd] Etcd was rolled back and is now available\nstatic Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition\ncouldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. 
Errors faced", "completion": "k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.rollbackOldManifests\n\tcmd/kubeadm/app/phases/upgrade/staticpods.go:525\nk8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.upgradeComponent\n\tcmd/kubeadm/app/phases/upgrade/staticpods.go:254\nk8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade\n\tcmd/kubeadm/app/phases/upgrade/staticpods.go:338\n...", "language": "json", "type": "api-response", "context": "Troubleshooting kubeadm | The reason for this failure is that the affected versions generate an etcd manifest file with unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison, and kubeadm wil", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6812"} {"prompt": "Explain this json code:\n\n[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition\n[upgrade/etcd] Waiting for previous etcd to become available\nI0907 10:10:09.109104 3704 etcd.go:588] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.6:2379/ https://172.17.0.4:2379/ https://172.17.0.3:2379/]) are available 1/10\n[upgrade/etcd] Etcd was rolled back and is now available\nstatic Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition\ncouldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. 
Errors faced\nk8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.rollbackOldManifests\n\tcmd/kubeadm/app/phases/upgrade/staticpods.go:525\nk8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.upgradeComponent\n\tcmd/kubeadm/app/phases/upgrade/staticpods.go:254\nk8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade\n\tcmd/kubeadm/app/phases/upgrade/staticpods.go:338\n...", "completion": "This json code Troubleshooting kubeadm | The reason for this failure is that the affected versions generate an etcd manifest file with unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison, and kubeadm wil", "language": "json", "type": "api-response_explanation", "context": "Troubleshooting kubeadm | The reason for this failure is that the affected versions generate an etcd manifest file with unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison, and kubeadm wil", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6813"} {"prompt": "Explain this shell code:\n\nkubeadm upgrade {apply|node} [version] --etcd-upgrade=false", "completion": "This shell code Troubleshooting kubeadm | This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.", "language": "shell", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6814"} {"prompt": "Complete this patch code:\n\ndiff --git a/etc/kubernetes/manifests/etcd_defaults.yaml b/etc/kubernetes/manifests/etcd_origin.yaml\nindex d807ccbe0aa..46b35f00e15 100644\n--- a/etc/kubernetes/manifests/etcd_defaults.yaml\n+++ b/etc/kubernetes/manifests/etcd_origin.yaml\n@@ -43,7 +43,6 @@ spec:\n 
scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 1\n timeoutSeconds: 15\n name: etcd\n resources:\n@@ -59,26 +58,18 @@ spec:\n scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 1\n timeoutSeconds: 15\n- terminationMessagePath: /dev/termination-log", "completion": "- terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/lib/etcd\n name: etcd-data\n - mountPath: /etc/kubernetes/pki/etcd\n name: etcd-certs\n- dnsPolicy: ClusterFirst\n- enableServiceLinks: true\n hostNetwork: true\n priority: 2000001000\n priorityClassName: system-node-critical\n- restartPolicy: Always\n- schedulerName: default-scheduler\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n- terminationGracePeriodSeconds: 30\n volumes:\n - hostPath:\n path: /etc/kubernetes/pki/etcd", "language": "patch", "type": "code-example", "context": "Troubleshooting kubeadm | More information can be found in the tracking issue for this bug.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6815"} {"prompt": "Explain this patch code:\n\ndiff --git a/etc/kubernetes/manifests/etcd_defaults.yaml b/etc/kubernetes/manifests/etcd_origin.yaml\nindex d807ccbe0aa..46b35f00e15 100644\n--- a/etc/kubernetes/manifests/etcd_defaults.yaml\n+++ b/etc/kubernetes/manifests/etcd_origin.yaml\n@@ -43,7 +43,6 @@ spec:\n scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 1\n timeoutSeconds: 15\n name: etcd\n resources:\n@@ -59,26 +58,18 @@ spec:\n scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 1\n timeoutSeconds: 15\n- terminationMessagePath: /dev/termination-log\n- terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/lib/etcd\n name: etcd-data\n - mountPath: /etc/kubernetes/pki/etcd\n name: etcd-certs\n- dnsPolicy: ClusterFirst\n- enableServiceLinks: true\n hostNetwork: true\n priority: 
2000001000\n priorityClassName: system-node-critical\n- restartPolicy: Always\n- schedulerName: default-scheduler\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n- terminationGracePeriodSeconds: 30\n volumes:\n - hostPath:\n path: /etc/kubernetes/pki/etcd", "completion": "This patch code Troubleshooting kubeadm | More information can be found in the tracking issue for this bug.", "language": "patch", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | More information can be found in the tracking issue for this bug.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6816"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: kubeadm:get-nodes\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n---", "completion": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: kubeadm:get-nodes\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kubeadm:get-nodes\nsubjects:\n - apiGroup: rbac.authorization.k8s.io\n kind: Group\n name: system:bootstrappers:kubeadm:default-node-token", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting kubeadm | If you see the following warnings while running kubeadm init", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6817"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: kubeadm:get-nodes\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: kubeadm:get-nodes\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kubeadm:get-nodes\nsubjects:\n - apiGroup: 
rbac.authorization.k8s.io\n kind: Group\n name: system:bootstrappers:kubeadm:default-node-token", "completion": "This yaml code Troubleshooting kubeadm | If you see the following warnings while running kubeadm init", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting kubeadm | If you see the following warnings while running kubeadm init", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6818"} {"prompt": "Explain this json code:\n\n[preflight] WARNING: ebtables not found in system path\n[preflight] WARNING: ethtool not found in system path", "completion": "This json code Troubleshooting kubeadm | Then you may be missing ebtables, ethtool or a similar executable on your node. You can install them with the following commands:", "language": "json", "type": "api-response_explanation", "context": "Troubleshooting kubeadm | Then you may be missing ebtables, ethtool or a similar executable on your node. You can install them with the following commands:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6819"} {"prompt": "Explain this json code:\n\n[apiclient] Created API client, waiting for the control plane to become ready", "completion": "This json code Troubleshooting kubeadm | This may be caused by a number of problems. The most common are:", "language": "json", "type": "api-response_explanation", "context": "Troubleshooting kubeadm | This may be caused by a number of problems. 
The most common are:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6820"} {"prompt": "Complete this json code:\n\n[preflight] Running pre-flight checks\n[reset] Stopping the kubelet service", "completion": "[reset] Unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] Removing kubernetes-managed containers\n(block)", "language": "json", "type": "api-response", "context": "Troubleshooting kubeadm | A possible solution is to restart the container runtime and then re-run kubeadm reset. You can also use crictl to debug the state of the container runtime. See Debugging Kubernetes nodes with crictl.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6821"} {"prompt": "Explain this json code:\n\n[preflight] Running pre-flight checks\n[reset] Stopping the kubelet service\n[reset] Unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] Removing kubernetes-managed containers\n(block)", "completion": "This json code Troubleshooting kubeadm | A possible solution is to restart the container runtime and then re-run kubeadm reset. You can also use crictl to debug the state of the container runtime. See Debugging Kubernetes nodes with crictl.", "language": "json", "type": "api-response_explanation", "context": "Troubleshooting kubeadm | A possible solution is to restart the container runtime and then re-run kubeadm reset. You can also use crictl to debug the state of the container runtime. 
See Debugging Kubernetes nodes with crictl.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6822"} {"prompt": "Complete this text code:\n\nmv $HOME/.kube $HOME/.kube.bak\nmkdir $HOME/.kube", "completion": "sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "language": "text", "type": "code-example", "context": "Troubleshooting kubeadm | By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the /var/lib/kubelet/pki/kubelet-client-current.pem symlink specified in /etc/kubernetes/kubelet.conf.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6823"} {"prompt": "Explain this text code:\n\nmv $HOME/.kube $HOME/.kube.bak\nmkdir $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "completion": "This text code Troubleshooting kubeadm | By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the /var/lib/kubelet/pki/kubelet-client-current.pem symlink specified in /etc/kubernetes/kubelet.conf.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the /var/lib/kubelet/pki/kubelet-client-current.pem symlink specified in /etc/kubernetes/kubelet.conf.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6824"} {"prompt": "Explain this text code:\n\nclient-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem\nclient-key: /var/lib/kubelet/pki/kubelet-client-current.pem", "completion": "This text code Troubleshooting kubeadm | Restart the kubelet.", "language": 
"text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | Restart the kubelet.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6825"} {"prompt": "Explain this text code:\n\nError from server (NotFound): the server could not find the requested resource", "completion": "This text code Troubleshooting kubeadm | If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6826"} {"prompt": "Explain this text code:\n\nError from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host", "completion": "This text code Troubleshooting kubeadm | This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6827"} {"prompt": "Explain this text code:\n\ncurl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address", "completion": "This text code Troubleshooting kubeadm | The workaround is to tell kubelet which IP to use using --node-ip. 
When using DigitalOcean, it can be the public one (assigned to eth0) or the private one (assigned to eth1) should you want to use the", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | The workaround is to tell kubelet which IP to use using --node-ip. When using DigitalOcean, it can be the public one (assigned to eth0) or the private one (assigned to eth1) should you want to use the", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6828"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get deployment coredns -o yaml | \\\n sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \\\n kubectl apply -f -", "completion": "This bash code Troubleshooting kubeadm | Another cause for CoreDNS to have CrashLoopBackOff is when a CoreDNS Pod deployed in Kubernetes detects a loop. A number of workarounds are available to avoid Kubernetes trying to restart the CoreDNS", "language": "bash", "type": "kubectl-command_explanation", "context": "Troubleshooting kubeadm | Another cause for CoreDNS to have CrashLoopBackOff is when a CoreDNS Pod deployed in Kubernetes detects a loop. A number of workarounds are available to avoid Kubernetes trying to restart the CoreDNS", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6829"} {"prompt": "Explain this text code:\n\nsudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo\nyum install docker-ce-18.06.1.ce-3.el7.x86_64", "completion": "This text code Troubleshooting kubeadm | kubeadm init flags such as --component-extra-args allow you to pass custom arguments to a control-plane component like the kube-apiserver. 
However, this mechanism is limited due to the underlying type", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | kubeadm init flags such as --component-extra-args allow you to pass custom arguments to a control-plane component like the kube-apiserver. However, this mechanism is limited due to the underlying type", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6830"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nnodeRegistration:\n kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "completion": "---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:\n - name: \"flex-volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting kubeadm | On joining Nodes:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6831"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nnodeRegistration:\n kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:\n - name: \"flex-volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "completion": "This yaml code Troubleshooting kubeadm | On joining Nodes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting kubeadm | On joining Nodes:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": 
"k8s_00236_code_explain_6832"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\nnodeRegistration:", "completion": " kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting kubeadm | Alternatively, you can modify /etc/fstab to make the /usr mount writeable, but please be advised that this is modifying a design principle of the Linux distribution.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6833"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\nnodeRegistration:\n kubeletExtraArgs:\n - name: \"volume-plugin-dir\"\n value: \"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/\"", "completion": "This yaml code Troubleshooting kubeadm | Alternatively, you can modify /etc/fstab to make the /usr mount writeable, but please be advised that this is modifying a design principle of the Linux distribution.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting kubeadm | Alternatively, you can modify /etc/fstab to make the /usr mount writeable, but please be advised that this is modifying a design principle of the Linux distribution.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6834"} {"prompt": "Explain this text code:\n\nkubeadm upgrade {apply|node} [version] --etcd-upgrade=false", "completion": "This text code Troubleshooting kubeadm | This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | This is not recommended in case a new etcd version was introduced by a later 
v1.28 patch version.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6835"} {"prompt": "Complete this text code:\n\ndiff --git a/etc/kubernetes/manifests/etcd_defaults.yaml b/etc/kubernetes/manifests/etcd_origin.yaml\nindex d807ccbe0aa..46b35f00e15 100644\n--- a/etc/kubernetes/manifests/etcd_defaults.yaml\n+++ b/etc/kubernetes/manifests/etcd_origin.yaml\n@@ -43,7 +43,6 @@ spec:\n scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 1\n timeoutSeconds: 15\n name: etcd\n resources:\n@@ -59,26 +58,18 @@ spec:\n scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 1\n timeoutSeconds: 15\n- terminationMessagePath: /dev/termination-log", "completion": "- terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/lib/etcd\n name: etcd-data\n - mountPath: /etc/kubernetes/pki/etcd\n name: etcd-certs\n- dnsPolicy: ClusterFirst\n- enableServiceLinks: true\n hostNetwork: true\n priority: 2000001000\n priorityClassName: system-node-critical\n- restartPolicy: Always\n- schedulerName: default-scheduler\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n- terminationGracePeriodSeconds: 30\n volumes:\n - hostPath:\n path: /etc/kubernetes/pki/etcd", "language": "text", "type": "code-example", "context": "Troubleshooting kubeadm | More information can be found in the tracking issue for this bug.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_completion_6836"} {"prompt": "Explain this text code:\n\ndiff --git a/etc/kubernetes/manifests/etcd_defaults.yaml b/etc/kubernetes/manifests/etcd_origin.yaml\nindex d807ccbe0aa..46b35f00e15 100644\n--- a/etc/kubernetes/manifests/etcd_defaults.yaml\n+++ b/etc/kubernetes/manifests/etcd_origin.yaml\n@@ -43,7 +43,6 @@ spec:\n scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 
1\n timeoutSeconds: 15\n name: etcd\n resources:\n@@ -59,26 +58,18 @@ spec:\n scheme: HTTP\n initialDelaySeconds: 10\n periodSeconds: 10\n- successThreshold: 1\n timeoutSeconds: 15\n- terminationMessagePath: /dev/termination-log\n- terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /var/lib/etcd\n name: etcd-data\n - mountPath: /etc/kubernetes/pki/etcd\n name: etcd-certs\n- dnsPolicy: ClusterFirst\n- enableServiceLinks: true\n hostNetwork: true\n priority: 2000001000\n priorityClassName: system-node-critical\n- restartPolicy: Always\n- schedulerName: default-scheduler\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n- terminationGracePeriodSeconds: 30\n volumes:\n - hostPath:\n path: /etc/kubernetes/pki/etcd", "completion": "This text code Troubleshooting kubeadm | More information can be found in the tracking issue for this bug.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting kubeadm | More information can be found in the tracking issue for this bug.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/", "id": "k8s_00236_code_explain_6837"} {"prompt": "Explain this text code:\n\nTo generate certificates and private keys for your cluster using the `openssl` command line tool,\n see [Certificates (`openssl`)](/docs/tasks/administer-cluster/certificates/#openssl).\n You can also use [`cfssl`](/docs/tasks/administer-cluster/certificates/#cfssl).", "completion": "This text code Manual Rotation of CA Certificates | Annotate any DaemonSets and Deployments to trigger pod replacement in a safer rolling fashion.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | Annotate any DaemonSets and Deployments to trigger pod replacement in a safer rolling fashion.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6838"} {"prompt": "Complete this 
text code:\n\nfor namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do\n for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch deployment -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done", "completion": " for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch daemonset -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\ndone", "language": "text", "type": "code-example", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_completion_6839"} {"prompt": "Explain this text code:\n\nfor namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do\n for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch deployment -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\n for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch daemonset -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\ndone", "completion": "This text code Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the 
kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6840"} {"prompt": "Explain this text code:\n\nTo limit the number of concurrent disruptions that your application experiences,\n see [configure pod disruption budget](/docs/tasks/run-application/configure-pdb/).", "completion": "This text code Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6841"} {"prompt": "Explain this text code:\n\nDepending on how you use StatefulSets you may also need to perform similar rolling replacement.", "completion": "This text code Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6842"} {"prompt": "Complete this text code:\n\nbase64_encoded_ca=\"$(base64 -w0 /etc/kubernetes/pki/ca.crt)\"\n", "completion": "kubectl get cm/cluster-info --namespace kube-public -o yaml | \\\n /bin/sed \"s/\\(certificate-authority-data:\\).*/\\1 ${base64_encoded_ca}/\" | \\\n kubectl apply -f -", "language": "text", "type": "code-example", "context": "Manual Rotation 
of CA Certificates | Verify the cluster functionality.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_completion_6843"} {"prompt": "Explain this text code:\n\nbase64_encoded_ca=\"$(base64 -w0 /etc/kubernetes/pki/ca.crt)\"\n\nkubectl get cm/cluster-info --namespace kube-public -o yaml | \\\n /bin/sed \"s/\\(certificate-authority-data:\\).*/\\1 ${base64_encoded_ca}/\" | \\\n kubectl apply -f -", "completion": "This text code Manual Rotation of CA Certificates | Verify the cluster functionality.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | Verify the cluster functionality.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6844"} {"prompt": "Explain this text code:\n\nTo generate certificates and private keys for your cluster using the `openssl` command line tool,\n see [Certificates (`openssl`)](/docs/tasks/administer-cluster/certificates/#openssl).\n You can also use [`cfssl`](/docs/tasks/administer-cluster/certificates/#cfssl).", "completion": "This text code Manual Rotation of CA Certificates | Annotate any DaemonSets and Deployments to trigger pod replacement in a safer rolling fashion.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | Annotate any DaemonSets and Deployments to trigger pod replacement in a safer rolling fashion.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6845"} {"prompt": "Complete this shell code:\n\nfor namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do\n for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch deployment -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n 
done", "completion": " for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch daemonset -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\ndone", "language": "shell", "type": "code-example", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_completion_6846"} {"prompt": "Explain this shell code:\n\nfor namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do\n for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch deployment -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\n for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch daemonset -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\ndone", "completion": "This shell code Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "language": "shell", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6847"} {"prompt": "Explain this text code:\n\nTo limit the number of concurrent disruptions that your application experiences,\n see [configure pod disruption 
budget](/docs/tasks/run-application/configure-pdb/).", "completion": "This text code Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6848"} {"prompt": "Explain this text code:\n\nDepending on how you use StatefulSets you may also need to perform similar rolling replacement.", "completion": "This text code Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6849"} {"prompt": "Complete this shell code:\n\nbase64_encoded_ca=\"$(base64 -w0 /etc/kubernetes/pki/ca.crt)\"\n", "completion": "kubectl get cm/cluster-info --namespace kube-public -o yaml | \\\n /bin/sed \"s/\\(certificate-authority-data:\\).*/\\1 ${base64_encoded_ca}/\" | \\\n kubectl apply -f -", "language": "shell", "type": "code-example", "context": "Manual Rotation of CA Certificates | Verify the cluster functionality.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_completion_6850"} {"prompt": "Explain this shell code:\n\nbase64_encoded_ca=\"$(base64 -w0 /etc/kubernetes/pki/ca.crt)\"\n\nkubectl get 
cm/cluster-info --namespace kube-public -o yaml | \\\n /bin/sed \"s/\\(certificate-authority-data:\\).*/\\1 ${base64_encoded_ca}/\" | \\\n kubectl apply -f -", "completion": "This shell code Manual Rotation of CA Certificates | Verify the cluster functionality.", "language": "shell", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | Verify the cluster functionality.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6851"} {"prompt": "Complete this text code:\n\nfor namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do\n for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch deployment -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done", "completion": " for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch daemonset -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\ndone", "language": "text", "type": "code-example", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_completion_6852"} {"prompt": "Explain this text code:\n\nfor namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do\n for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n kubectl patch deployment -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\n for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do\n 
kubectl patch daemonset -n ${namespace} ${name} -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ca-rotation\": \"1\"}}}}}';\n done\ndone", "completion": "This text code Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with new CA.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6853"} {"prompt": "Complete this text code:\n\nbase64_encoded_ca=\"$(base64 -w0 /etc/kubernetes/pki/ca.crt)\"\n", "completion": "kubectl get cm/cluster-info --namespace kube-public -o yaml | \\\n /bin/sed \"s/\\(certificate-authority-data:\\).*/\\1 ${base64_encoded_ca}/\" | \\\n kubectl apply -f -", "language": "text", "type": "code-example", "context": "Manual Rotation of CA Certificates | Verify the cluster functionality.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_completion_6854"} {"prompt": "Explain this text code:\n\nbase64_encoded_ca=\"$(base64 -w0 /etc/kubernetes/pki/ca.crt)\"\n\nkubectl get cm/cluster-info --namespace kube-public -o yaml | \\\n /bin/sed \"s/\\(certificate-authority-data:\\).*/\\1 ${base64_encoded_ca}/\" | \\\n kubectl apply -f -", "completion": "This text code Manual Rotation of CA Certificates | Verify the cluster functionality.", "language": "text", "type": "code-example_explanation", "context": "Manual Rotation of CA Certificates | Verify the cluster functionality.", "source": "https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/", "id": "k8s_00237_code_explain_6855"} {"prompt": "Complete this text code:\n\n# Mark the nginx deployment as paused\n # Any 
current state of the deployment will continue its function; new updates", "completion": " # to the deployment will not have an effect as long as the deployment is paused\n kubectl rollout pause deployment/nginx", "language": "text", "type": "code-example", "context": "kubectl rollout pause | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/", "id": "k8s_00238_code_completion_6856"} {"prompt": "Explain this text code:\n\n# Mark the nginx deployment as paused\n # Any current state of the deployment will continue its function; new updates\n # to the deployment will not have an effect as long as the deployment is paused\n kubectl rollout pause deployment/nginx", "completion": "This text code kubectl rollout pause | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout pause | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/", "id": "k8s_00238_code_explain_6857"} {"prompt": "Complete this text code:\n\n# Mark the nginx deployment as paused\n # Any current state of the deployment will continue its function; new updates", "completion": " # to the deployment will not have an effect as long as the deployment is paused\n kubectl rollout pause deployment/nginx", "language": "text", "type": "code-example", "context": "kubectl rollout pause | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/", "id": "k8s_00238_code_completion_6858"} {"prompt": "Explain this text code:\n\n# Mark the nginx deployment as paused\n # Any current state of the deployment will continue its function; new updates\n # to the deployment will not have an effect as long as the deployment is paused\n kubectl rollout pause deployment/nginx", "completion": "This text code kubectl rollout pause | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout pause | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_pause/", "id": "k8s_00238_code_explain_6859"} {"prompt": "Complete this text code:\n\n# Create a application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n", "completion": "cat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6860"} {"prompt": "Explain this text code:\n\n# Create a application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated 
ConfigMap can be examined with the following command:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6861"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |", "completion": " FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-8mbdf7882g", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .env file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6862"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-8mbdf7882g", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .env file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. 
Here is an example of generating a ConfigMap with a data item from a .env file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6863"} {"prompt": "Complete this text code:\n\n# Create a .env file\ncat <<EOF >.env\nFOO=Bar\nEOF\n", "completion": "cat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n envs:\n - .env\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6864"} {"prompt": "Explain this text code:\n\n# Create a .env file\ncat <<EOF >.env\nFOO=Bar\nEOF\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n envs:\n - .env\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6865"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar", "completion": "kind: ConfigMap\nmetadata:\n name: example-configmap-1-42cfbf598f", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. 
Here is an example of gen", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6866"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n name: example-configmap-1-42cfbf598f", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. Here is an example of gen", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. Here is an example of gen", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6867"} {"prompt": "Complete this text code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-2", "completion": " literals:\n - FOO=Bar\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be checked by the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6868"} {"prompt": "Explain this text code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-2\n literals:\n - FOO=Bar\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be checked by the following command:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can 
be checked by the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6869"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar", "completion": "kind: ConfigMap\nmetadata:\n name: example-configmap-2-g2hdhfc6tk", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. Kustomize will automatically replace this name with the generated name.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6870"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n name: example-configmap-2-g2hdhfc6tk", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. Kustomize will automatically replace this name with the generated name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. 
Kustomize will automatically replace this name with the generated name.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6871"} {"prompt": "Complete this yaml code:\n\n# Create an application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n\ncat <<EOF >deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app", "completion": " spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: config\n mountPath: /config\n volumes:\n - name: config\n configMap:\n name: example-configmap-1\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6872"} {"prompt": "Explain this yaml code:\n\n# Create an application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n\ncat <<EOF >deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: config\n mountPath: /config\n volumes:\n - name: config\n configMap:\n name: example-configmap-1\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "language": "yaml", 
"type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6873"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-g4hk9g2ff8\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: my-app\n name: my-app\nspec:\n selector:", "completion": " matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - image: my-app\n name: app\n volumeMounts:\n - mountPath: /config\n name: config\n volumes:\n - configMap:\n name: example-configmap-1-g4hk9g2ff8\n name: config", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. 
Here is an example of generating a Secret with a da", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6874"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-g4hk9g2ff8\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: my-app\n name: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - image: my-app\n name: app\n volumeMounts:\n - mountPath: /config\n name: config\n volumes:\n - configMap:\n name: example-configmap-1-g4hk9g2ff8\n name: config", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. Here is an example of generating a Secret with a da", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. 
Here is an example of generating a Secret with a da", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6875"} {"prompt": "Complete this text code:\n\n# Create a password.txt file\ncat <<EOF >./password.txt\nusername=admin\npassword=secret\nEOF\n", "completion": "cat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6876"} {"prompt": "Explain this text code:\n\n# Create a password.txt file\ncat <<EOF >./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6877"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg==", "completion": "kind: Secret\nmetadata:\n name: example-secret-1-t2kt65hgtb\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. 
Here is an example of generating a Secret with a data item from a key-value pair:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6878"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg==\nkind: Secret\nmetadata:\n name: example-secret-1-t2kt65hgtb\ntype: Opaque", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. Here is an example of generating a Secret with a data item from a key-value pair:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. Here is an example of generating a Secret with a data item from a key-value pair:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6879"} {"prompt": "Complete this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-2", "completion": " literals:\n - username=admin\n - password=secret\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6880"} {"prompt": "Explain this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-2\n literals:\n - username=admin\n - password=secret\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of 
Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6881"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n password: c2VjcmV0\n username: YWRtaW4=", "completion": "kind: Secret\nmetadata:\n name: example-secret-2-t52t6g96d8\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6882"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n password: c2VjcmV0\n username: YWRtaW4=\nkind: Secret\nmetadata:\n name: example-secret-2-t52t6g96d8\ntype: Opaque", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6883"} {"prompt": "Complete this yaml code:\n\n# Create a password.txt file\ncat <<EOF >./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <<EOF >deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app", "completion": " spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: password\n mountPath: /secrets\n volumes:\n - 
name: password\n secret:\n secretName: example-secret-1\nEOF\n\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. To disable the behavior of appending a", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6884"} {"prompt": "Explain this yaml code:\n\n# Create a password.txt file\ncat <./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: password\n mountPath: /secrets\n volumes:\n - name: password\n secret:\n secretName: example-secret-1\nEOF\n\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. To disable the behavior of appending a", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. 
To disable the behavior of appending a", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6885"} {"prompt": "Complete this text code:\n\ncat <./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-3\n literals:\n - FOO=Bar\ngeneratorOptions:", "completion": " disableNameSuffixHash: true\n labels:\n type: generated\n annotations:\n note: generated\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Runkubectl kustomize ./ to view the generated ConfigMap:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6886"} {"prompt": "Explain this text code:\n\ncat <./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-3\n literals:\n - FOO=Bar\ngeneratorOptions:\n disableNameSuffixHash: true\n labels:\n type: generated\n annotations:\n note: generated\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | Runkubectl kustomize ./ to view the generated ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Runkubectl kustomize ./ to view the generated ConfigMap:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6887"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:", "completion": " annotations:\n note: generated\n labels:\n type: generated\n name: example-configmap-3", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. 
Some use cases for setting cross-cutting fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6888"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n annotations:\n note: generated\n labels:\n type: generated\n name: example-configmap-3", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. Some use cases for setting cross-cutting fields:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. Some use cases for setting cross-cutting fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6889"} {"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml\ncat <./deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:", "completion": " containers:\n - name: nginx\n image: nginx\nEOF\n\ncat <./kustomization.yaml\nnamespace: my-namespace\nnamePrefix: dev-\nnameSuffix: \"-001\"\nlabels:\n - pairs:\n app: bingo\n includeSelectors: true \ncommonAnnotations:\n oncallPager: 800-555-1212\nresources:\n- deployment.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6890"} {"prompt": "Explain this yaml code:\n\n# Create a 
deployment.yaml\ncat <./deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\nEOF\n\ncat <./kustomization.yaml\nnamespace: my-namespace\nnamePrefix: dev-\nnameSuffix: \"-001\"\nlabels:\n - pairs:\n app: bingo\n includeSelectors: true \ncommonAnnotations:\n oncallPager: 800-555-1212\nresources:\n- deployment.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6891"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n name: dev-nginx-deployment-001\n namespace: my-namespace\nspec:\n selector:", "completion": " matchLabels:\n app: bingo\n template:\n metadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n spec:\n containers:\n - image: nginx\n name: nginx", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. 
Kustomize offers composing resources from different files and applying patches or other custo", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6892"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n name: dev-nginx-deployment-001\n namespace: my-namespace\nspec:\n selector:\n matchLabels:\n app: bingo\n template:\n metadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n spec:\n containers:\n - image: nginx\n name: nginx", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. Kustomize offers composing resources from different files and applying patches or other custo", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. 
Kustomize offers composing resources from different files and applying patches or other custo", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6893"}
{"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF", "completion": "\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\n# Create a kustomization.yaml composing them\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6894"}
{"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\n# Create a kustomization.yaml composing them\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6895"}
{"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a patch increase_replicas.yaml\ncat <<EOF > increase_replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: my-nginx\nspec:\n replicas: 3\nEOF\n\n# Create another patch set_memory.yaml\ncat <<EOF > set_memory.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n template:\n spec:\n containers:\n - name: my-nginx\n resources:\n limits:\n memory: 512Mi\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\npatches:\n - path: increase_replicas.yaml\n - path: set_memory.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6896"}
{"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a patch increase_replicas.yaml\ncat <<EOF > increase_replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\nEOF\n\n# Create another patch set_memory.yaml\ncat <<EOF > set_memory.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n template:\n spec:\n containers:\n - name: my-nginx\n resources:\n limits:\n memory: 512Mi\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\npatches:\n - path: increase_replicas.yaml\n - path: set_memory.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6897"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:", "completion": " labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80\n resources:\n limits:\n memory: 512Mi", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. 
To find the correct Res", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6898"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80\n resources:\n limits:\n memory: 512Mi", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. To find the correct Res", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. 
To find the correct Res", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6899"}
{"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "EOF\n\n# Create a json patch\ncat <<EOF > patch.yaml\n- op: replace\n path: /spec/replicas\n value: 3\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n\npatches:\n- target:\n group: apps\n version: v1\n kind: Deployment\n name: my-nginx\n path: patch.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6900"}
{"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a json patch\ncat <<EOF > patch.yaml\n- op: replace\n path: /spec/replicas\n value: 3\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n\npatches:\n- target:\n group: apps\n version: v1\n kind: Deployment\n name: my-nginx\n path: patch.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "language": 
"yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6901"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. For example, you can change the image", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6902"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. 
For example, you can change the image", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. For example, you can change the image", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6903"}
{"prompt": "Complete this yaml code:\n\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:", "completion": " containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nimages:\n- name: nginx\n newName: my.image.registry/nginx\n newTag: \"1.4.0\"\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6904"}
{"prompt": "Explain this yaml code:\n\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nimages:\n- name: nginx\n newName: my.image.registry/nginx\n newTag: \"1.4.0\"\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "language": "yaml", 
"type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6905"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: my.image.registry/nginx:1.4.0\n name: my-nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6906"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: my.image.registry/nginx:1.4.0\n name: my-nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. 
For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6907"}
{"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file (quoting the here doc delimiter)\ncat <<'EOF' > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n command: [\"start\", \"--host\", \"MY_SERVICE_NAME_PLACEHOLDER\"]\nEOF\n\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:", "completion": " run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nnameSuffix: \"-001\"\n\nresources:\n- deployment.yaml\n- service.yaml\n\nreplacements:\n- source:\n kind: Service\n name: my-nginx\n fieldPath: metadata.name\n targets:\n - select:\n kind: Deployment\n name: my-nginx\n fieldPaths:\n - spec.template.spec.containers.0.command.2\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6908"}
{"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file (quoting the here doc delimiter)\ncat <<'EOF' > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n command: [\"start\", \"--host\", \"MY_SERVICE_NAME_PLACEHOLDER\"]\nEOF\n\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nnameSuffix: \"-001\"\n\nresources:\n- deployment.yaml\n- service.yaml\n\nreplacements:\n- source:\n kind: Service\n name: my-nginx\n fieldPath: metadata.name\n targets:\n - select:\n kind: Deployment\n name: my-nginx\n fieldPaths:\n - spec.template.spec.containers.0.command.2\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6909"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: dev-my-nginx-001\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:", "completion": " metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - command:\n - start\n - --host\n - dev-my-nginx-001\n image: nginx\n name: my-nginx", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. 
A base could be either a local direct", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6910"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: dev-my-nginx-001\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - command:\n - start\n - --host\n - dev-my-nginx-001\n image: nginx\n name: my-nginx", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. A base could be either a local direct", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. 
A base could be either a local direct", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6911"}
{"prompt": "Complete this yaml code:\n\n# Create a directory to hold the base\nmkdir base\n# Create a base/deployment.yaml\ncat <<EOF > base/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\nEOF", "completion": "\n# Create a base/service.yaml file\ncat <<EOF > base/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n# Create a base/kustomization.yaml\ncat <<EOF > base/kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. Here are two overlays using the same base.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6912"}
{"prompt": "Explain this yaml code:\n\n# Create a directory to hold the base\nmkdir base\n# Create a base/deployment.yaml\ncat <<EOF > base/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\nEOF\n\n# Create a base/service.yaml file\ncat <<EOF > base/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n# Create a base/kustomization.yaml\ncat <<EOF > base/kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. Here are two overlays using the same base.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. Here are two overlays using the same base.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6913"}
{"prompt": "Complete this text code:\n\nmkdir dev\ncat <<EOF > dev/kustomization.yaml\nresources:\n- ../base\nnamePrefix: dev-\nEOF", "completion": "\nmkdir prod\ncat <<EOF > prod/kustomization.yaml\nresources:\n- ../base\nnamePrefix: prod-\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6914"} {"prompt": "Explain this text code:\n\nmkdir dev\ncat <<EOF > dev/kustomization.yaml\nresources:\n- ../base\nnamePrefix: dev-\nEOF\n\nmkdir prod\ncat <<EOF > prod/kustomization.yaml\nresources:\n- ../base\nnamePrefix: prod-\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6915"}
{"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:", "completion": " containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nlabels:\n - pairs:\n app: my-nginx\n includeSelectors: true \nresources:\n- deployment.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6916"}
{"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nlabels:\n - pairs:\n app: my-nginx\n includeSelectors: true \nresources:\n- deployment.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6917"}
{"prompt": "Explain this text code:\n\n> kubectl apply -k ./\ndeployment.apps/dev-my-nginx created", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | Run one of the following commands to view the Deployment object dev-my-nginx:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run one of the following commands to view the Deployment object dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6918"} {"prompt": "Explain this text code:\n\n> kubectl delete -k ./\ndeployment.apps \"dev-my-nginx\" deleted", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize |", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6919"}
{"prompt": "Complete this shell code:\n\n# Create a application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n", "completion": "cat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6920"} {"prompt": "Explain this shell code:\n\n# Create a application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - 
application.properties\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6921"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |", "completion": " FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-8mbdf7882g", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .env file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6922"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-8mbdf7882g", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .env file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. 
Here is an example of generating a ConfigMap with a data item from a .env file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6923"} {"prompt": "Complete this shell code:\n\n# Create a .env file\ncat <.env\nFOO=Bar\nEOF\n", "completion": "cat <./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n envs:\n - .env\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6924"} {"prompt": "Explain this shell code:\n\n# Create a .env file\ncat <.env\nFOO=Bar\nEOF\n\ncat <./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n envs:\n - .env\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6925"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar", "completion": "kind: ConfigMap\nmetadata:\n name: example-configmap-1-42cfbf598f", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. 
Here is an example of gen", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6926"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n name: example-configmap-1-42cfbf598f", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. Here is an example of gen", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. Here is an example of gen", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6927"} {"prompt": "Complete this shell code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-2", "completion": " literals:\n - FOO=Bar\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be checked by the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6928"} {"prompt": "Explain this shell code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-2\n literals:\n - FOO=Bar\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be checked by the following command:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management 
can be checked by the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6929"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar", "completion": "kind: ConfigMap\nmetadata:\n name: example-configmap-2-g2hdhfc6tk", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. Kustomize will automatically replace this name with the generated name.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6930"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n name: example-configmap-2-g2hdhfc6tk", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. Kustomize will automatically replace this name with the generated name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. 
Kustomize will automatically replace this name with the generated name.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6931"} {"prompt": "Complete this yaml code:\n\n# Create an application.properties file\ncat <application.properties\nFOO=Bar\nEOF\n\ncat <deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app", "completion": " spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: config\n mountPath: /config\n volumes:\n - name: config\n configMap:\n name: example-configmap-1\nEOF\n\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6932"} {"prompt": "Explain this yaml code:\n\n# Create an application.properties file\ncat <application.properties\nFOO=Bar\nEOF\n\ncat <deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: config\n mountPath: /config\n volumes:\n - name: config\n configMap:\n name: example-configmap-1\nEOF\n\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "language": "yaml", 
"type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6933"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-g4hk9g2ff8\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: my-app\n name: my-app\nspec:\n selector:", "completion": " matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - image: my-app\n name: app\n volumeMounts:\n - mountPath: /config\n name: config\n volumes:\n - configMap:\n name: example-configmap-1-g4hk9g2ff8\n name: config", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. 
Here is an example of generating a Secret with a da", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6934"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-g4hk9g2ff8\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: my-app\n name: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - image: my-app\n name: app\n volumeMounts:\n - mountPath: /config\n name: config\n volumes:\n - configMap:\n name: example-configmap-1-g4hk9g2ff8\n name: config", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. Here is an example of generating a Secret with a da", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. 
Here is an example of generating a Secret with a da", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6935"} {"prompt": "Complete this shell code:\n\n# Create a password.txt file\ncat <./password.txt\nusername=admin\npassword=secret\nEOF\n", "completion": "cat <./kustomization.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6936"} {"prompt": "Explain this shell code:\n\n# Create a password.txt file\ncat <./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <./kustomization.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6937"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg==", "completion": "kind: Secret\nmetadata:\n name: example-secret-1-t2kt65hgtb\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. 
Here is an example of generating a Secret with a data item from a key-value pair:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6938"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg==\nkind: Secret\nmetadata:\n name: example-secret-1-t2kt65hgtb\ntype: Opaque", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. Here is an example of generating a Secret with a data item from a key-value pair:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. Here is an example of generating a Secret with a data item from a key-value pair:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6939"} {"prompt": "Complete this shell code:\n\ncat <./kustomization.yaml\nsecretGenerator:\n- name: example-secret-2", "completion": " literals:\n - username=admin\n - password=secret\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6940"} {"prompt": "Explain this shell code:\n\ncat <./kustomization.yaml\nsecretGenerator:\n- name: example-secret-2\n literals:\n - username=admin\n - password=secret\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management 
of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6941"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n password: c2VjcmV0\n username: YWRtaW4=", "completion": "kind: Secret\nmetadata:\n name: example-secret-2-t52t6g96d8\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6942"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n password: c2VjcmV0\n username: YWRtaW4=\nkind: Secret\nmetadata:\n name: example-secret-2-t52t6g96d8\ntype: Opaque", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6943"} {"prompt": "Complete this shell code:\n\n# Create a password.txt file\ncat <./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app", "completion": " spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: password\n mountPath: /secrets\n 
volumes:\n - name: password\n secret:\n secretName: example-secret-1\nEOF\n\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. To disable the behavior of appending a", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6944"} {"prompt": "Explain this shell code:\n\n# Create a password.txt file\ncat <./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: password\n mountPath: /secrets\n volumes:\n - name: password\n secret:\n secretName: example-secret-1\nEOF\n\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. To disable the behavior of appending a", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. 
To disable the behavior of appending a", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6945"} {"prompt": "Complete this shell code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-3\n literals:\n - FOO=Bar\ngeneratorOptions:", "completion": " disableNameSuffixHash: true\n labels:\n type: generated\n annotations:\n note: generated\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the generated ConfigMap:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6946"} {"prompt": "Explain this shell code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-3\n literals:\n - FOO=Bar\ngeneratorOptions:\n disableNameSuffixHash: true\n labels:\n type: generated\n annotations:\n note: generated\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the generated ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the generated ConfigMap:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6947"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:", "completion": " annotations:\n note: generated\n labels:\n type: generated\n name: example-configmap-3", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. 
Some use cases for setting cross-cutting fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6948"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n annotations:\n note: generated\n labels:\n type: generated\n name: example-configmap-3", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. Some use cases for setting cross-cutting fields:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. Some use cases for setting cross-cutting fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6949"} {"prompt": "Complete this shell code:\n\n# Create a deployment.yaml\ncat <./deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:", "completion": " containers:\n - name: nginx\n image: nginx\nEOF\n\ncat <./kustomization.yaml\nnamespace: my-namespace\nnamePrefix: dev-\nnameSuffix: \"-001\"\nlabels:\n - pairs:\n app: bingo\n includeSelectors: true \ncommonAnnotations:\n oncallPager: 800-555-1212\nresources:\n- deployment.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6950"} {"prompt": "Explain this shell code:\n\n# Create a 
deployment.yaml\ncat <./deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\nEOF\n\ncat <./kustomization.yaml\nnamespace: my-namespace\nnamePrefix: dev-\nnameSuffix: \"-001\"\nlabels:\n - pairs:\n app: bingo\n includeSelectors: true \ncommonAnnotations:\n oncallPager: 800-555-1212\nresources:\n- deployment.yaml\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6951"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n name: dev-nginx-deployment-001\n namespace: my-namespace\nspec:\n selector:", "completion": " matchLabels:\n app: bingo\n template:\n metadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n spec:\n containers:\n - image: nginx\n name: nginx", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. 
Kustomize offers composing resources from different files and applying patches or other custo", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6952"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n name: dev-nginx-deployment-001\n namespace: my-namespace\nspec:\n selector:\n matchLabels:\n app: bingo\n template:\n metadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n spec:\n containers:\n - image: nginx\n name: nginx", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. Kustomize offers composing resources from different files and applying patches or other custo", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. 
Kustomize offers composing resources from different files and applying patches or other custo", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6953"} {"prompt": "Complete this shell code:\n\n# Create a deployment.yaml file\ncat < deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF", "completion": "\n# Create a service.yaml file\ncat < service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\n# Create a kustomization.yaml composing them\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6954"} {"prompt": "Explain this shell code:\n\n# Create a deployment.yaml file\ncat < deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a service.yaml file\ncat < service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\n# Create a kustomization.yaml composing them\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\n- 
service.yaml\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6955"} {"prompt": "Complete this shell code:\n\n# Create a deployment.yaml file\ncat < deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a patch increase_replicas.yaml\ncat < increase_replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: my-nginx\nspec:\n replicas: 3\nEOF\n\n# Create another patch set_memory.yaml\ncat < set_memory.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n template:\n spec:\n containers:\n - name: my-nginx\n resources:\n limits:\n memory: 512Mi\nEOF\n\ncat <./kustomization.yaml\nresources:\n- deployment.yaml\npatches:\n - path: increase_replicas.yaml\n - path: set_memory.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6956"} {"prompt": "Explain this shell code:\n\n# Create a deployment.yaml file\ncat < deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n 
replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a patch increase_replicas.yaml\ncat <<EOF > increase_replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\nEOF\n\n# Create another patch set_memory.yaml\ncat <<EOF > set_memory.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n template:\n spec:\n containers:\n - name: my-nginx\n resources:\n limits:\n memory: 512Mi\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\npatches:\n - path: increase_replicas.yaml\n - path: set_memory.yaml\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6957"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:", "completion": " labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80\n resources:\n limits:\n memory: 512Mi", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. 
To find the correct Res", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6958"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80\n resources:\n limits:\n memory: 512Mi", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. To find the correct Res", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. 
To find the correct Res", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6959"} {"prompt": "Complete this shell code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "EOF\n\n# Create a json patch\ncat <<EOF > patch.yaml\n- op: replace\n path: /spec/replicas\n value: 3\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n\npatches:\n- target:\n group: apps\n version: v1\n kind: Deployment\n name: my-nginx\n path: patch.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6960"} {"prompt": "Explain this shell code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a json patch\ncat <<EOF > patch.yaml\n- op: replace\n path: /spec/replicas\n value: 3\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n\npatches:\n- target:\n group: apps\n version: v1\n kind: Deployment\n name: my-nginx\n path: patch.yaml\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "language": "shell", 
"type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6961"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. For example, you can change the image", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6962"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. For example, you can change the image", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. 
For example, you can change the image", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6963"} {"prompt": "Complete this shell code:\n\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:", "completion": " containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nimages:\n- name: nginx\n newName: my.image.registry/nginx\n newTag: \"1.4.0\"\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6964"} {"prompt": "Explain this shell code:\n\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nimages:\n- name: nginx\n newName: my.image.registry/nginx\n newTag: \"1.4.0\"\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6965"} {"prompt": "Complete this yaml 
code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: my.image.registry/nginx:1.4.0\n name: my-nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6966"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: my.image.registry/nginx:1.4.0\n name: my-nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. 
For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6967"} {"prompt": "Complete this shell code:\n\n# Create a deployment.yaml file (quoting the here doc delimiter)\ncat <<'EOF' > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n command: [\"start\", \"--host\", \"MY_SERVICE_NAME_PLACEHOLDER\"]\nEOF\n\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:", "completion": " run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nnameSuffix: \"-001\"\n\nresources:\n- deployment.yaml\n- service.yaml\n\nreplacements:\n- source:\n kind: Service\n name: my-nginx\n fieldPath: metadata.name\n targets:\n - select:\n kind: Deployment\n name: my-nginx\n fieldPaths:\n - spec.template.spec.containers.0.command.2\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6968"} {"prompt": "Explain this shell code:\n\n# Create a deployment.yaml file (quoting the here doc delimiter)\ncat <<'EOF' > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n command: [\"start\", \"--host\", 
\"MY_SERVICE_NAME_PLACEHOLDER\"]\nEOF\n\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nnameSuffix: \"-001\"\n\nresources:\n- deployment.yaml\n- service.yaml\n\nreplacements:\n- source:\n kind: Service\n name: my-nginx\n fieldPath: metadata.name\n targets:\n - select:\n kind: Deployment\n name: my-nginx\n fieldPaths:\n - spec.template.spec.containers.0.command.2\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6969"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: dev-my-nginx-001\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:", "completion": " metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - command:\n - start\n - --host\n - dev-my-nginx-001\n image: nginx\n name: my-nginx", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. 
A base could be either a local direct", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6970"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: dev-my-nginx-001\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - command:\n - start\n - --host\n - dev-my-nginx-001\n image: nginx\n name: my-nginx", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. A base could be either a local direct", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. 
A base could be either a local direct", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6971"} {"prompt": "Complete this shell code:\n\n# Create a directory to hold the base\nmkdir base\n# Create a base/deployment.yaml\ncat <<EOF > base/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\nEOF", "completion": "\n# Create a base/service.yaml file\ncat <<EOF > base/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n# Create a base/kustomization.yaml\ncat <<EOF > base/kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. 
Here are two overlays using the same base.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6972"} {"prompt": "Explain this shell code:\n\n# Create a directory to hold the base\nmkdir base\n# Create a base/deployment.yaml\ncat <<EOF > base/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\nEOF\n\n# Create a base/service.yaml file\ncat <<EOF > base/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n# Create a base/kustomization.yaml\ncat <<EOF > base/kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. Here are two overlays using the same base.", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. 
Here are two overlays using the same base.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6973"} {"prompt": "Complete this shell code:\n\nmkdir dev\ncat <<EOF > dev/kustomization.yaml\nresources:\n- ../base\nnamePrefix: dev-\nEOF", "completion": "\nmkdir prod\ncat <<EOF > prod/kustomization.yaml\nresources:\n- ../base\nnamePrefix: prod-\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6974"} {"prompt": "Explain this shell code:\n\nmkdir dev\ncat <<EOF > dev/kustomization.yaml\nresources:\n- ../base\nnamePrefix: dev-\nEOF\n\nmkdir prod\ncat <<EOF > prod/kustomization.yaml\nresources:\n- ../base\nnamePrefix: prod-\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. 
Note that -k should point to a kustomization directory, such as", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6975"} {"prompt": "Complete this shell code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:", "completion": " containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nlabels:\n - pairs:\n app: my-nginx\n includeSelectors: true \nresources:\n- deployment.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6976"} {"prompt": "Explain this shell code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nlabels:\n - pairs:\n app: my-nginx\n includeSelectors: true \nresources:\n- deployment.yaml\nEOF", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", 
"source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6977"} {"prompt": "Explain this shell code:\n\n> kubectl apply -k ./\ndeployment.apps/dev-my-nginx created", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize | Run one of the following commands to view the Deployment object dev-my-nginx:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run one of the following commands to view the Deployment object dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6978"} {"prompt": "Explain this shell code:\n\n> kubectl delete -k ./\ndeployment.apps \"dev-my-nginx\" deleted", "completion": "This shell code Declarative Management of Kubernetes Objects Using Kustomize |", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6979"} {"prompt": "Complete this text code:\n\n# Create a application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n", "completion": "cat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6980"} {"prompt": "Explain this text code:\n\n# Create a application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - 
application.properties\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6981"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |", "completion": " FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-8mbdf7882g", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .env file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6982"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-8mbdf7882g", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .env file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. 
Here is an example of generating a ConfigMap with a data item from a .env file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6983"} {"prompt": "Complete this text code:\n\n# Create a .env file\ncat <<EOF >.env\nFOO=Bar\nEOF\n", "completion": "cat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n envs:\n - .env\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6984"} {"prompt": "Explain this text code:\n\n# Create a .env file\ncat <<EOF >.env\nFOO=Bar\nEOF\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n envs:\n - .env\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be examined with the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6985"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar", "completion": "kind: ConfigMap\nmetadata:\n name: example-configmap-1-42cfbf598f", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. 
Here is an example of gen", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6986"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n name: example-configmap-1-42cfbf598f", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. Here is an example of gen", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. Here is an example of gen", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6987"} {"prompt": "Complete this text code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-2", "completion": " literals:\n - FOO=Bar\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be checked by the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6988"} {"prompt": "Explain this text code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-2\n literals:\n - FOO=Bar\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can be checked by the following command:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMap can 
be checked by the following command:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6989"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar", "completion": "kind: ConfigMap\nmetadata:\n name: example-configmap-2-g2hdhfc6tk", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. Kustomize will automatically replace this name with the generated name.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6990"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n name: example-configmap-2-g2hdhfc6tk", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. Kustomize will automatically replace this name with the generated name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. 
Kustomize will automatically replace this name with the generated name.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6991"} {"prompt": "Complete this yaml code:\n\n# Create an application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n\ncat <<EOF >deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app", "completion": " spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: config\n mountPath: /config\n volumes:\n - name: config\n configMap:\n name: example-configmap-1\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6992"} {"prompt": "Explain this yaml code:\n\n# Create an application.properties file\ncat <<EOF >application.properties\nFOO=Bar\nEOF\n\ncat <<EOF >deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: config\n mountPath: /config\n volumes:\n - name: config\n configMap:\n name: example-configmap-1\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nconfigMapGenerator:\n- name: example-configmap-1\n files:\n - application.properties\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "language": "yaml", 
"type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Generate the ConfigMap and Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6993"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-g4hk9g2ff8\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: my-app\n name: my-app\nspec:\n selector:", "completion": " matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - image: my-app\n name: app\n volumeMounts:\n - mountPath: /config\n name: config\n volumes:\n - configMap:\n name: example-configmap-1-g4hk9g2ff8\n name: config", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. 
Here is an example of generating a Secret with a da", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6994"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n application.properties: |\n FOO=Bar \nkind: ConfigMap\nmetadata:\n name: example-configmap-1-g4hk9g2ff8\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: my-app\n name: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - image: my-app\n name: app\n volumeMounts:\n - mountPath: /config\n name: config\n volumes:\n - configMap:\n name: example-configmap-1-g4hk9g2ff8\n name: config", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. Here is an example of generating a Secret with a da", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. 
Here is an example of generating a Secret with a da", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6995"} {"prompt": "Complete this text code:\n\n# Create a password.txt file\ncat <<EOF >./password.txt\nusername=admin\npassword=secret\nEOF\n", "completion": "cat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6996"} {"prompt": "Explain this text code:\n\n# Create a password.txt file\ncat <<EOF >./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6997"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg==", "completion": "kind: Secret\nmetadata:\n name: example-secret-1-t2kt65hgtb\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. 
Here is an example of generating a Secret with a data item from a key-value pair:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_6998"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg==\nkind: Secret\nmetadata:\n name: example-secret-1-t2kt65hgtb\ntype: Opaque", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. Here is an example of generating a Secret with a data item from a key-value pair:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | To generate a Secret from a literal key-value pair, add an entry to literals list in secretGenerator. Here is an example of generating a Secret with a data item from a key-value pair:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_6999"} {"prompt": "Complete this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-2", "completion": " literals:\n - username=admin\n - password=secret\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7000"} {"prompt": "Explain this text code:\n\ncat <<EOF >./kustomization.yaml\nsecretGenerator:\n- name: example-secret-2\n literals:\n - username=admin\n - password=secret\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of 
Kubernetes Objects Using Kustomize | The generated Secret is as follows:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7001"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n password: c2VjcmV0\n username: YWRtaW4=", "completion": "kind: Secret\nmetadata:\n name: example-secret-2-t52t6g96d8\ntype: Opaque", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7002"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n password: c2VjcmV0\n username: YWRtaW4=\nkind: Secret\nmetadata:\n name: example-secret-2-t52t6g96d8\ntype: Opaque", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7003"} {"prompt": "Complete this yaml code:\n\n# Create a password.txt file\ncat <<EOF >./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <<EOF >deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app", "completion": " spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: password\n mountPath: /secrets\n volumes:\n - 
name: password\n secret:\n secretName: example-secret-1\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. To disable the behavior of appending a", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7004"} {"prompt": "Explain this yaml code:\n\n# Create a password.txt file\ncat <<EOF >./password.txt\nusername=admin\npassword=secret\nEOF\n\ncat <<EOF >deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-app\n labels:\n app: my-app\nspec:\n selector:\n matchLabels:\n app: my-app\n template:\n metadata:\n labels:\n app: my-app\n spec:\n containers:\n - name: app\n image: my-app\n volumeMounts:\n - name: password\n mountPath: /secrets\n volumes:\n - name: password\n secret:\n secretName: example-secret-1\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nsecretGenerator:\n- name: example-secret-1\n files:\n - password.txt\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. 
To disable the behavior of appending a", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7005"} {"prompt": "Complete this text code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-3\n literals:\n - FOO=Bar\ngeneratorOptions:", "completion": " disableNameSuffixHash: true\n labels:\n type: generated\n annotations:\n note: generated\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the generated ConfigMap:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7006"} {"prompt": "Explain this text code:\n\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: example-configmap-3\n literals:\n - FOO=Bar\ngeneratorOptions:\n disableNameSuffixHash: true\n labels:\n type: generated\n annotations:\n note: generated\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the generated ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the generated ConfigMap:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7007"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:", "completion": " annotations:\n note: generated\n labels:\n type: generated\n name: example-configmap-3", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. 
Some use cases for setting cross-cutting fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7008"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n FOO: Bar\nkind: ConfigMap\nmetadata:\n annotations:\n note: generated\n labels:\n type: generated\n name: example-configmap-3", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. Some use cases for setting cross-cutting fields:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is quite common to set cross-cutting fields for all Kubernetes resources in a project. Some use cases for setting cross-cutting fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7009"} {"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml\ncat <<EOF >./deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:", "completion": " containers:\n - name: nginx\n image: nginx\nEOF\n\ncat <<EOF >./kustomization.yaml\nnamespace: my-namespace\nnamePrefix: dev-\nnameSuffix: \"-001\"\nlabels:\n - pairs:\n app: bingo\n includeSelectors: true \ncommonAnnotations:\n oncallPager: 800-555-1212\nresources:\n- deployment.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7010"} {"prompt": "Explain this yaml code:\n\n# Create a 
deployment.yaml\ncat <<EOF >./deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\nEOF\n\ncat <<EOF >./kustomization.yaml\nnamespace: my-namespace\nnamePrefix: dev-\nnameSuffix: \"-001\"\nlabels:\n - pairs:\n app: bingo\n includeSelectors: true \ncommonAnnotations:\n oncallPager: 800-555-1212\nresources:\n- deployment.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view those fields are all set in the Deployment Resource:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7011"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n name: dev-nginx-deployment-001\n namespace: my-namespace\nspec:\n selector:", "completion": " matchLabels:\n app: bingo\n template:\n metadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n spec:\n containers:\n - image: nginx\n name: nginx", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. 
Kustomize offers composing resources from different files and applying patches or other custo", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7012"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n name: dev-nginx-deployment-001\n namespace: my-namespace\nspec:\n selector:\n matchLabels:\n app: bingo\n template:\n metadata:\n annotations:\n oncallPager: 800-555-1212\n labels:\n app: bingo\n spec:\n containers:\n - image: nginx\n name: nginx", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. Kustomize offers composing resources from different files and applying patches or other custo", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | It is common to compose a set of resources in a project and manage them inside the same file or directory. 
Kustomize offers composing resources from different files and applying patches or other custo", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7013"} {"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF", "completion": "\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\n# Create a kustomization.yaml composing them\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7014"} {"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\n# Create a kustomization.yaml composing them\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n- 
service.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7015"} {"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a patch increase_replicas.yaml\ncat <<EOF > increase_replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: my-nginx\nspec:\n replicas: 3\nEOF\n\n# Create another patch set_memory.yaml\ncat <<EOF > set_memory.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n template:\n spec:\n containers:\n - name: my-nginx\n resources:\n limits:\n memory: 512Mi\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\npatches:\n - path: increase_replicas.yaml\n - path: set_memory.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7016"} {"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: 
my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a patch increase_replicas.yaml\ncat <<EOF > increase_replicas.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\nEOF\n\n# Create another patch set_memory.yaml\ncat <<EOF > set_memory.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n template:\n spec:\n containers:\n - name: my-nginx\n resources:\n limits:\n memory: 512Mi\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\npatches:\n - path: increase_replicas.yaml\n - path: set_memory.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to view the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7017"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:", "completion": " labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80\n resources:\n limits:\n memory: 512Mi", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. 
To find the correct Res", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7018"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80\n resources:\n limits:\n memory: 512Mi", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. To find the correct Res", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying JSON patch through Json6902. 
To find the correct Res", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7019"} {"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "EOF\n\n# Create a json patch\ncat <<EOF > patch.yaml\n- op: replace\n path: /spec/replicas\n value: 3\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n\npatches:\n- target:\n group: apps\n version: v1\n kind: Deployment\n name: my-nginx\n path: patch.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7020"} {"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a json patch\ncat <<EOF > patch.yaml\n- op: replace\n path: /spec/replicas\n value: 3\nEOF\n\n# Create a kustomization.yaml\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\n\npatches:\n- target:\n group: apps\n version: v1\n kind: Deployment\n name: my-nginx\n path: patch.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "language": 
"yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see the replicas field is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7021"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. For example, you can change the image", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7022"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: nginx\n name: my-nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. 
For example, you can change the image", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. For example, you can change the image", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7023"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:", "completion": " containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nimages:\n- name: nginx\n newName: my.image.registry/nginx\n newTag: \"1.4.0\"\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7024"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\ncat <<EOF >./kustomization.yaml\nresources:\n- deployment.yaml\nimages:\n- name: nginx\n newName: my.image.registry/nginx\n newTag: \"1.4.0\"\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "language": "yaml", 
"type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the image being used is updated:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7025"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx", "completion": " template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: my.image.registry/nginx:1.4.0\n name: my-nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7026"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - image: my.image.registry/nginx:1.4.0\n name: my-nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Sometimes, the application running in a Pod may need to use configuration values from other objects. 
For example, a Pod from a Deployment object need to read the corresponding Service name from Env or", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7027"} {"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file (quoting the here doc delimiter)\ncat <<'EOF' > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n command: [\"start\", \"--host\", \"MY_SERVICE_NAME_PLACEHOLDER\"]\nEOF\n\n# Create a service.yaml file\ncat <<EOF > service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:", "completion": " run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\ncat <<EOF >./kustomization.yaml\nnamePrefix: dev-\nnameSuffix: \"-001\"\n\nresources:\n- deployment.yaml\n- service.yaml\n\nreplacements:\n- source:\n kind: Service\n name: my-nginx\n fieldPath: metadata.name\n targets:\n - select:\n kind: Deployment\n name: my-nginx\n fieldPaths:\n - spec.template.spec.containers.0.command.2\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7028"} {"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file (quoting the here doc delimiter)\ncat <<'EOF' > deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n command: [\"start\", \"--host\", 
\"MY_SERVICE_NAME_PLACEHOLDER\"]\nEOF\n\n# Create a service.yaml file\ncat < service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n\ncat <./kustomization.yaml\nnamePrefix: dev-\nnameSuffix: \"-001\"\n\nresources:\n- deployment.yaml\n- service.yaml\n\nreplacements:\n- source:\n kind: Service\n name: my-nginx\n fieldPath: metadata.name\n targets:\n - select:\n kind: Deployment\n name: my-nginx\n fieldPaths:\n - spec.template.spec.containers.0.command.2\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7029"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: dev-my-nginx-001\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:", "completion": " metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - command:\n - start\n - --host\n - dev-my-nginx-001\n image: nginx\n name: my-nginx", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. 
A base could be either a local direct", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7030"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: dev-my-nginx-001\nspec:\n replicas: 2\n selector:\n matchLabels:\n run: my-nginx\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - command:\n - start\n - --host\n - dev-my-nginx-001\n image: nginx\n name: my-nginx", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. A base could be either a local direct", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. 
A base could be either a local direct", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7031"} {"prompt": "Complete this yaml code:\n\n# Create a directory to hold the base\nmkdir base\n# Create a base/deployment.yaml\ncat < base/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\nEOF", "completion": "\n# Create a base/service.yaml file\ncat < base/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n# Create a base/kustomization.yaml\ncat < base/kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. 
Here are two overlays using the same base.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7032"} {"prompt": "Explain this yaml code:\n\n# Create a directory to hold the base\nmkdir base\n# Create a base/deployment.yaml\ncat < base/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\nEOF\n\n# Create a base/service.yaml file\ncat < base/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx\n labels:\n run: my-nginx\nspec:\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\nEOF\n# Create a base/kustomization.yaml\ncat < base/kustomization.yaml\nresources:\n- deployment.yaml\n- service.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. Here are two overlays using the same base.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. 
Here are two overlays using the same base.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7033"} {"prompt": "Complete this text code:\n\nmkdir dev\ncat < dev/kustomization.yaml\nresources:\n- ../base\nnamePrefix: dev-\nEOF", "completion": "\nmkdir prod\ncat < prod/kustomization.yaml\nresources:\n- ../base\nnamePrefix: prod-\nEOF", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7034"} {"prompt": "Explain this text code:\n\nmkdir dev\ncat < dev/kustomization.yaml\nresources:\n- ../base\nnamePrefix: dev-\nEOF\n\nmkdir prod\ncat < prod/kustomization.yaml\nresources:\n- ../base\nnamePrefix: prod-\nEOF", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. 
Note that -k should point to a kustomization directory, such as", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7035"} {"prompt": "Complete this yaml code:\n\n# Create a deployment.yaml file\ncat < deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:", "completion": " containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a kustomization.yaml\ncat <./kustomization.yaml\nnamePrefix: dev-\nlabels:\n - pairs:\n app: my-nginx\n includeSelectors: true \nresources:\n- deployment.yaml\nEOF", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_completion_7036"} {"prompt": "Explain this yaml code:\n\n# Create a deployment.yaml file\ncat < deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 2\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\nEOF\n\n# Create a kustomization.yaml\ncat <./kustomization.yaml\nnamePrefix: dev-\nlabels:\n - pairs:\n app: my-nginx\n includeSelectors: true \nresources:\n- deployment.yaml\nEOF", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object dev-my-nginx:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run the following command to apply the Deployment object 
dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7037"} {"prompt": "Explain this text code:\n\n> kubectl apply -k ./\ndeployment.apps/dev-my-nginx created", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize | Run one of the following commands to view the Deployment object dev-my-nginx:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize | Run one of the following commands to view the Deployment object dev-my-nginx:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7038"} {"prompt": "Explain this text code:\n\n> kubectl delete -k ./\ndeployment.apps \"dev-my-nginx\" deleted", "completion": "This text code Declarative Management of Kubernetes Objects Using Kustomize |", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Kustomize |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/", "id": "k8s_00239_code_explain_7039"} {"prompt": "Explain this text code:\n\n02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,\"system:bootstrappers\"", "completion": "This text code TLS bootstrapping | Add the --token-auth-file=FILENAME flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file. See docs here for further details.", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | Add the --token-auth-file=FILENAME flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file. 
See docs here for further details.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7040"} {"prompt": "Complete this yaml code:\n\n# enable bootstrapping nodes to create CSR\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: create-csrs-for-bootstrapping\nsubjects:", "completion": "- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:node-bootstrapper\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7041"} {"prompt": "Explain this yaml code:\n\n# enable bootstrapping nodes to create CSR\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: create-csrs-for-bootstrapping\nsubjects:\n- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:node-bootstrapper\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7042"} {"prompt": "Explain this text code:\n\n--cluster-signing-cert-file=\"/etc/path/to/kubernetes/ca/ca.crt\" --cluster-signing-key-file=\"/etc/path/to/kubernetes/ca/ca.key\"", "completion": "This text code TLS bootstrapping | For example:", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7043"} {"prompt": "Explain this text code:\n\n--cluster-signing-cert-file=\"/var/lib/kubernetes/ca.pem\" --cluster-signing-key-file=\"/var/lib/kubernetes/ca-key.pem\"", "completion": "This text code TLS bootstrapping | The validity duration of signed certificates can be configured with flag:", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | The validity duration of signed certificates can be configured with flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7044"} {"prompt": "Complete this yaml code:\n\n# Approve all CSRs for the group \"system:bootstrappers\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-csrs-for-group\nsubjects:", "completion": "- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:nodeclient\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7045"} {"prompt": "Explain this yaml code:\n\n# Approve all CSRs for the group \"system:bootstrappers\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-csrs-for-group\nsubjects:\n- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:nodeclient\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7046"} {"prompt": "Complete this yaml code:\n\n# Approve renewal CSRs for the group \"system:nodes\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-renewals-for-nodes\nsubjects:", "completion": "- kind: Group\n name: system:nodes\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. 
The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7047"} {"prompt": "Explain this yaml code:\n\n# Approve renewal CSRs for the group \"system:nodes\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-renewals-for-nodes\nsubjects:\n- kind: Group\n name: system:nodes\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. 
The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7048"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/lib/kubernetes/ca.pem\n server: https://my.server.example.com:6443\n name: bootstrap\ncontexts:\n- context:", "completion": " cluster: bootstrap\n user: kubelet-bootstrap\n name: bootstrap\ncurrent-context: bootstrap\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n user:\n token: 07401b.f395accd246ae52d", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | The important elements to note are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7049"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/lib/kubernetes/ca.pem\n server: https://my.server.example.com:6443\n name: bootstrap\ncontexts:\n- context:\n cluster: bootstrap\n user: kubelet-bootstrap\n name: bootstrap\ncurrent-context: bootstrap\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n user:\n token: 07401b.f395accd246ae52d", "completion": "This yaml code TLS bootstrapping | The important elements to note are:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | The important elements to note are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7050"} {"prompt": "Complete this bash code:\n\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://my.server.example.com:6443' --certificate-authority=/var/lib/kubernetes/ca.pem\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig 
set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d", "completion": "kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap", "language": "bash", "type": "kubectl-command", "context": "TLS bootstrapping | To indicate to the kubelet to use the bootstrap kubeconfig, use the following kubelet flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7051"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://my.server.example.com:6443' --certificate-authority=/var/lib/kubernetes/ca.pem\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap", "completion": "This bash code TLS bootstrapping | To indicate to the kubelet to use the bootstrap kubeconfig, use the following kubelet flag:", "language": "bash", "type": "kubectl-command_explanation", "context": "TLS bootstrapping | To indicate to the kubelet to use the bootstrap kubeconfig, use the following kubelet flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7052"} {"prompt": "Explain this text code:\n\n--bootstrap-kubeconfig=\"/var/lib/kubelet/bootstrap-kubeconfig\" --kubeconfig=\"/var/lib/kubelet/kubeconfig\"", "completion": "This text code TLS bootstrapping | When starting the kubelet, if the file specified via --kubeconfig does not exist, the bootstrap kubeconfig specified via 
--bootstrap-kubeconfig is used to request a client certificate from the API ser", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | When starting the kubelet, if the file specified via --kubeconfig does not exist, the bootstrap kubeconfig specified via --bootstrap-kubeconfig is used to request a client certificate from the API ser", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7053"} {"prompt": "Explain this console code:\n\n02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,\"system:bootstrappers\"", "completion": "This console code TLS bootstrapping | Add the --token-auth-file=FILENAME flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file. See docs here for further details.", "language": "console", "type": "code-example_explanation", "context": "TLS bootstrapping | Add the --token-auth-file=FILENAME flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file. 
See docs here for further details.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7054"} {"prompt": "Complete this yaml code:\n\n# enable bootstrapping nodes to create CSR\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: create-csrs-for-bootstrapping\nsubjects:", "completion": "- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:node-bootstrapper\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7055"} {"prompt": "Explain this yaml code:\n\n# enable bootstrapping nodes to create CSR\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: create-csrs-for-bootstrapping\nsubjects:\n- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:node-bootstrapper\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7056"} {"prompt": "Explain this shell code:\n\n--cluster-signing-cert-file=\"/etc/path/to/kubernetes/ca/ca.crt\" --cluster-signing-key-file=\"/etc/path/to/kubernetes/ca/ca.key\"", "completion": "This shell code TLS bootstrapping | For example:", "language": "shell", "type": "code-example_explanation", "context": "TLS bootstrapping | For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7057"} {"prompt": "Explain this shell code:\n\n--cluster-signing-cert-file=\"/var/lib/kubernetes/ca.pem\" --cluster-signing-key-file=\"/var/lib/kubernetes/ca-key.pem\"", "completion": "This shell code TLS bootstrapping | The validity duration of signed certificates can be configured with flag:", "language": "shell", "type": "code-example_explanation", "context": "TLS bootstrapping | The validity duration of signed certificates can be configured with flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7058"} {"prompt": "Explain this text code:\n\nsystem:certificates.k8s.io:certificatesigningrequests:nodeclient", "completion": "This text code TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7059"} {"prompt": "Complete this yaml 
code:\n\n# Approve all CSRs for the group \"system:bootstrappers\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-csrs-for-group\nsubjects:", "completion": "- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:nodeclient\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7060"} {"prompt": "Explain this yaml code:\n\n# Approve all CSRs for the group \"system:bootstrappers\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-csrs-for-group\nsubjects:\n- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:nodeclient\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7061"} {"prompt": 
"Explain this text code:\n\nsystem:certificates.k8s.io:certificatesigningrequests:selfnodeclient", "completion": "This text code TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7062"} {"prompt": "Complete this yaml code:\n\n# Approve renewal CSRs for the group \"system:nodes\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-renewals-for-nodes\nsubjects:", "completion": "- kind: Group\n name: system:nodes\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. 
The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7063"} {"prompt": "Explain this yaml code:\n\n# Approve renewal CSRs for the group \"system:nodes\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-renewals-for-nodes\nsubjects:\n- kind: Group\n name: system:nodes\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. 
The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7064"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/lib/kubernetes/ca.pem\n server: https://my.server.example.com:6443\n name: bootstrap\ncontexts:\n- context:", "completion": " cluster: bootstrap\n user: kubelet-bootstrap\n name: bootstrap\ncurrent-context: bootstrap\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n user:\n token: 07401b.f395accd246ae52d", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | The important elements to note are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7065"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/lib/kubernetes/ca.pem\n server: https://my.server.example.com:6443\n name: bootstrap\ncontexts:\n- context:\n cluster: bootstrap\n user: kubelet-bootstrap\n name: bootstrap\ncurrent-context: bootstrap\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n user:\n token: 07401b.f395accd246ae52d", "completion": "This yaml code TLS bootstrapping | The important elements to note are:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | The important elements to note are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7066"} {"prompt": "Complete this bash code:\n\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://my.server.example.com:6443' --certificate-authority=/var/lib/kubernetes/ca.pem\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig 
set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d", "completion": "kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap", "language": "bash", "type": "kubectl-command", "context": "TLS bootstrapping | To indicate to the kubelet to use the bootstrap kubeconfig, use the following kubelet flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7067"} {"prompt": "Explain this bash code:\n\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://my.server.example.com:6443' --certificate-authority=/var/lib/kubernetes/ca.pem\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap\nkubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap", "completion": "This bash code TLS bootstrapping | To indicate to the kubelet to use the bootstrap kubeconfig, use the following kubelet flag:", "language": "bash", "type": "kubectl-command_explanation", "context": "TLS bootstrapping | To indicate to the kubelet to use the bootstrap kubeconfig, use the following kubelet flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7068"} {"prompt": "Explain this text code:\n\n--bootstrap-kubeconfig=\"/var/lib/kubelet/bootstrap-kubeconfig\" --kubeconfig=\"/var/lib/kubelet/kubeconfig\"", "completion": "This text code TLS bootstrapping | When starting the kubelet, if the file specified via --kubeconfig does not exist, the bootstrap kubeconfig specified via 
--bootstrap-kubeconfig is used to request a client certificate from the API ser", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | When starting the kubelet, if the file specified via --kubeconfig does not exist, the bootstrap kubeconfig specified via --bootstrap-kubeconfig is used to request a client certificate from the API ser", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7069"} {"prompt": "Explain this text code:\n\n02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,\"system:bootstrappers\"", "completion": "This text code TLS bootstrapping | Add the --token-auth-file=FILENAME flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file. See docs here for further details.", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | Add the --token-auth-file=FILENAME flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file. 
See docs here for further details.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7070"} {"prompt": "Complete this yaml code:\n\n# enable bootstrapping nodes to create CSR\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: create-csrs-for-bootstrapping\nsubjects:", "completion": "- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:node-bootstrapper\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7071"} {"prompt": "Explain this yaml code:\n\n# enable bootstrapping nodes to create CSR\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: create-csrs-for-bootstrapping\nsubjects:\n- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:node-bootstrapper\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7072"} {"prompt": "Explain this text code:\n\n--cluster-signing-cert-file=\"/etc/path/to/kubernetes/ca/ca.crt\" --cluster-signing-key-file=\"/etc/path/to/kubernetes/ca/ca.key\"", "completion": "This text code TLS bootstrapping | For example:", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | For example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7073"} {"prompt": "Explain this text code:\n\n--cluster-signing-cert-file=\"/var/lib/kubernetes/ca.pem\" --cluster-signing-key-file=\"/var/lib/kubernetes/ca-key.pem\"", "completion": "This text code TLS bootstrapping | The validity duration of signed certificates can be configured with flag:", "language": "text", "type": "code-example_explanation", "context": "TLS bootstrapping | The validity duration of signed certificates can be configured with flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7074"} {"prompt": "Complete this yaml code:\n\n# Approve all CSRs for the group \"system:bootstrappers\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-csrs-for-group\nsubjects:", "completion": "- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:nodeclient\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7075"} {"prompt": "Explain this yaml code:\n\n# Approve all CSRs for the group \"system:bootstrappers\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-csrs-for-group\nsubjects:\n- kind: Group\n name: system:bootstrappers\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:nodeclient\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | To enable the kubelet to renew its own client certificate, create a ClusterRoleBinding that binds the group in which the fully functioning node is a member system:nodes to the ClusterRole that grants", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7076"} {"prompt": "Complete this yaml code:\n\n# Approve renewal CSRs for the group \"system:nodes\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-renewals-for-nodes\nsubjects:", "completion": "- kind: Group\n name: system:nodes\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. 
The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7077"} {"prompt": "Explain this yaml code:\n\n# Approve renewal CSRs for the group \"system:nodes\"\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: auto-approve-renewals-for-nodes\nsubjects:\n- kind: Group\n name: system:nodes\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | The csrapproving controller that ships as part of kube-controller-manager and is enabled by default. 
The controller uses the SubjectAccessReview API to determine if a given user is authorized to reque", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7078"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/lib/kubernetes/ca.pem\n server: https://my.server.example.com:6443\n name: bootstrap\ncontexts:\n- context:", "completion": " cluster: bootstrap\n user: kubelet-bootstrap\n name: bootstrap\ncurrent-context: bootstrap\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n user:\n token: 07401b.f395accd246ae52d", "language": "yaml", "type": "kubernetes-manifest", "context": "TLS bootstrapping | The important elements to note are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_completion_7079"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority: /var/lib/kubernetes/ca.pem\n server: https://my.server.example.com:6443\n name: bootstrap\ncontexts:\n- context:\n cluster: bootstrap\n user: kubelet-bootstrap\n name: bootstrap\ncurrent-context: bootstrap\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n user:\n token: 07401b.f395accd246ae52d", "completion": "This yaml code TLS bootstrapping | The important elements to note are:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "TLS bootstrapping | The important elements to note are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/", "id": "k8s_00240_code_explain_7080"} {"prompt": "Explain this python code:\n\nimport \"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1\"", "completion": "This python code CustomResourceDefinition | CustomResourceDefinition represents a resource that should be exposed on the API server. 
Its name MUST be in the format <.spec.name>.<.spec.group>.", "language": "python", "type": "code-example_explanation", "context": "CustomResourceDefinition | CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/", "id": "k8s_00242_code_explain_7081"} {"prompt": "Explain this text code:\n\nsudo apt-get update\n# apt-transport-https may be a dummy package; if so, you can skip that package\nsudo apt-get install -y apt-transport-https ca-certificates curl gpg", "completion": "This text code Installing kubeadm | Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:", "language": "text", "type": "code-example_explanation", "context": "Installing kubeadm | Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/", "id": "k8s_00244_code_explain_7082"} {"prompt": "Explain this text code:\n\n# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.\n# sudo mkdir -p -m 755 /etc/apt/keyrings\ncurl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg", "completion": "This text code Installing kubeadm | Add the appropriate Kubernetes apt repository. 
Please note that this repository has packages only for Kubernetes 1.34; for other Kubernetes minor versions, you need to change the Kubernetes minor ver", "language": "text", "type": "code-example_explanation", "context": "Installing kubeadm | Add the appropriate Kubernetes apt repository. Please note that this repository has packages only for Kubernetes 1.34; for other Kubernetes minor versions, you need to change the Kubernetes minor ver", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/", "id": "k8s_00244_code_explain_7083"} {"prompt": "Explain this text code:\n\n# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list\necho 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list", "completion": "This text code Installing kubeadm | Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:", "language": "text", "type": "code-example_explanation", "context": "Installing kubeadm | Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/", "id": "k8s_00244_code_explain_7084"} {"prompt": "Explain this text code:\n\nsudo apt-get update\nsudo apt-get install -y kubelet kubeadm kubectl\nsudo apt-mark hold kubelet kubeadm kubectl", "completion": "This text code Installing kubeadm | (Optional) Enable the kubelet service before running kubeadm:", "language": "text", "type": "code-example_explanation", "context": "Installing kubeadm | (Optional) Enable the kubelet service before running kubeadm:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/", "id": "k8s_00244_code_explain_7085"} {"prompt": "Explain this text code:\n\n# Set SELinux in permissive mode (effectively disabling 
it)\nsudo setenforce 0\nsudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config", "completion": "This text code Installing kubeadm | Add the Kubernetes yum repository. The exclude parameter in the repository definition ensures that the packages related to Kubernetes are not upgraded upon running yum update as there's a special proc", "language": "text", "type": "code-example_explanation", "context": "Installing kubeadm | Add the Kubernetes yum repository. The exclude parameter in the repository definition ensures that the packages related to Kubernetes are not upgraded upon running yum update as there's a special proc", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/", "id": "k8s_00244_code_explain_7086"} {"prompt": "Complete this text code:\n\n# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo\ncat <\n - identity: {} # this fallback allows reading unencrypted secrets;\n # for example, during initial migration", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | To create a new encryption key (that does not use KMS), see Generate the encryption key.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7132"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - aescbc:\n keys:\n - name: key1\n # See the following text for more details about the secret value\n secret: \n - identity: {} # this fallback allows reading unencrypted secrets;\n # for example, during initial migration", "completion": "This yaml code Encrypting Confidential Data at Rest | To create a new encryption key (that does not use KMS), see Generate the encryption key.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | To create a new encryption key (that does not use KMS), see Generate the encryption key.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7133"} {"prompt": "Complete this yaml code:\n\n---\n#\n# This is a fragment of a manifest for a static Pod.\n# Check whether this is correct for your cluster and for your API server.\n#\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.20.30.40:443\n creationTimestamp: null\n labels:\n app.kubernetes.io/component: kube-apiserver\n tier: control-plane\n name: kube-apiserver\n namespace: kube-system\nspec:", "completion": " containers:\n - command:\n - kube-apiserver\n ...\n - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml # add this line\n volumeMounts:\n ...\n - name: enc # add this line\n mountPath: /etc/kubernetes/enc # add this line\n readOnly: true # add this line\n ...\n volumes:\n ...\n - name: enc # add this line\n hostPath: # add this line\n path: /etc/kubernetes/enc # add this line\n type: DirectoryOrCreate # add this line\n ...", "language": "yaml", "type": "pod-definition", "context": "Encrypting Confidential Data at Rest | Restart your API server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7134"} {"prompt": "Explain this yaml code:\n\n---\n#\n# This is a fragment of a manifest for a static Pod.\n# Check whether this is correct for your cluster and for your API server.\n#\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.20.30.40:443\n creationTimestamp: null\n labels:\n app.kubernetes.io/component: kube-apiserver\n tier: control-plane\n name: kube-apiserver\n namespace: kube-system\nspec:\n containers:\n - command:\n - kube-apiserver\n ...\n - 
--encryption-provider-config=/etc/kubernetes/enc/enc.yaml # add this line\n volumeMounts:\n ...\n - name: enc # add this line\n mountPath: /etc/kubernetes/enc # add this line\n readOnly: true # add this line\n ...\n volumes:\n ...\n - name: enc # add this line\n hostPath: # add this line\n path: /etc/kubernetes/enc # add this line\n type: DirectoryOrCreate # add this line\n ...", "completion": "This yaml code Encrypting Confidential Data at Rest | Restart your API server.", "language": "yaml", "type": "pod-definition_explanation", "context": "Encrypting Confidential Data at Rest | Restart your API server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7135"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic secret1 -n default --from-literal=mykey=mydata", "completion": "This bash code Encrypting Confidential Data at Rest | Using the etcdctl command line tool, read that Secret out of etcd:", "language": "bash", "type": "kubectl-command_explanation", "context": "Encrypting Confidential Data at Rest | Using the etcdctl command line tool, read that Secret out of etcd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7136"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C", "completion": "This text code Encrypting Confidential Data at Rest | where [...] must be the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | where [...] 
must be the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7137"} {"prompt": "Complete this text code:\n\nETCDCTL_API=3 etcdctl \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\", "completion": " --cert=/etc/kubernetes/pki/etcd/server.crt \\\n --key=/etc/kubernetes/pki/etcd/server.key \\\n get /registry/secrets/default/secret1 | hexdump -C", "language": "text", "type": "code-example", "context": "Encrypting Confidential Data at Rest | The output is similar to this (abbreviated):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7138"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\\n --cert=/etc/kubernetes/pki/etcd/server.crt \\\n --key=/etc/kubernetes/pki/etcd/server.key \\\n get /registry/secrets/default/secret1 | hexdump -C", "completion": "This text code Encrypting Confidential Data at Rest | The output is similar to this (abbreviated):", "language": "text", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | The output is similar to this (abbreviated):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7139"} {"prompt": "Complete this text code:\n\n00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|\n00000010 73 2f 64 65 66 61 75 6c 74 2f 73 65 63 72 65 74 |s/default/secret|\n00000020 31 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |1.k8s:enc:aescbc|\n00000030 3a 76 31 3a 6b 65 79 31 3a c7 6c e7 d3 09 bc 06 |:v1:key1:.l.....|\n00000040 25 51 91 e4 e0 6c e5 b1 4d 7a 8b 3d b9 c2 7c 6e |%Q...l..Mz.=..|n|", "completion": "00000050 b4 79 df 05 28 ae 0d 8e 5f 35 13 2c c0 18 99 3e |.y..(..._5.,...>|\n[...]\n00000110 23 3a 0d fc 28 ca 48 2d 6b 2d 46 cc 72 0b 70 4c |#:..(.H-k-F.r.pL|\n00000120 a5 fc 35 43 12 4e 
60 ef bf 6f fe cf df 0b ad 1f |..5C.N`..o......|\n00000130 82 c4 88 53 02 da 3e 66 ff 0a |...S..>f..|\n0000013a", "language": "text", "type": "code-example", "context": "Encrypting Confidential Data at Rest | Verify the stored Secret is prefixed with k8s:enc:aescbc:v1: which indicates the aescbc provider has encrypted the resulting data. Confirm that the key name shown in etcd matches the key name specifie", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7140"} {"prompt": "Explain this text code:\n\n00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|\n00000010 73 2f 64 65 66 61 75 6c 74 2f 73 65 63 72 65 74 |s/default/secret|\n00000020 31 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |1.k8s:enc:aescbc|\n00000030 3a 76 31 3a 6b 65 79 31 3a c7 6c e7 d3 09 bc 06 |:v1:key1:.l.....|\n00000040 25 51 91 e4 e0 6c e5 b1 4d 7a 8b 3d b9 c2 7c 6e |%Q...l..Mz.=..|n|\n00000050 b4 79 df 05 28 ae 0d 8e 5f 35 13 2c c0 18 99 3e |.y..(..._5.,...>|\n[...]\n00000110 23 3a 0d fc 28 ca 48 2d 6b 2d 46 cc 72 0b 70 4c |#:..(.H-k-F.r.pL|\n00000120 a5 fc 35 43 12 4e 60 ef bf 6f fe cf df 0b ad 1f |..5C.N`..o......|\n00000130 82 c4 88 53 02 da 3e 66 ff 0a |...S..>f..|\n0000013a", "completion": "This text code Encrypting Confidential Data at Rest | Verify the stored Secret is prefixed with k8s:enc:aescbc:v1: which indicates the aescbc provider has encrypted the resulting data. Confirm that the key name shown in etcd matches the key name specifie", "language": "text", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | Verify the stored Secret is prefixed with k8s:enc:aescbc:v1: which indicates the aescbc provider has encrypted the resulting data. 
Confirm that the key name shown in etcd matches the key name specifie", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7141"} {"prompt": "Explain this text code:\n\n# Run this as an administrator that can read and write all Secrets\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This text code Encrypting Confidential Data at Rest | The command above reads all Secrets and then updates them with the same data, in order to apply server side encryption.", "language": "text", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | The command above reads all Secrets and then updates them with the same data, in order to apply server side encryption.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7142"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets", "completion": " providers:\n - aescbc:\n keys:\n - name: key1\n secret: \n - identity: {} # REMOVE THIS LINE", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | …and then restart each API server in turn. This change prevents the API server from accessing a plain-text Secret, even by accident.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7143"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - aescbc:\n keys:\n - name: key1\n secret: \n - identity: {} # REMOVE THIS LINE", "completion": "This yaml code Encrypting Confidential Data at Rest | …and then restart each API server in turn. 
This change prevents the API server from accessing a plain-text Secret, even by accident.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | …and then restart each API server in turn. This change prevents the API server from accessing a plain-text Secret, even by accident.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7144"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n # list any other resources here that you previously were", "completion": " # encrypting at rest\n providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: key1\n secret: # keep this in place\n # make sure it comes after \"identity\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | Then run the following command to force decryption of all Secrets:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7145"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n # list any other resources here that you previously were\n # encrypting at rest\n providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: key1\n secret: # keep this in place\n # make sure it comes after \"identity\"", "completion": "This yaml code Encrypting Confidential Data at Rest | Then run the following command to force decryption of all Secrets:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | Then run the following command to force decryption of all Secrets:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7146"} {"prompt": 
"Explain this bash code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This bash code Encrypting Confidential Data at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "language": "bash", "type": "kubectl-command_explanation", "context": "Encrypting Confidential Data at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7147"} {"prompt": "Complete this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example # a custom resource API\n providers:\n # This configuration does not provide data confidentiality. 
The first\n # configured provider is specifying the \"identity\" mechanism, which\n # stores resources as plain text.\n #\n - identity: {} # plain text, in other words NO encryption\n - aesgcm:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==\n - name: key2\n secret: dGhpcyBpcyBwYXNzd29yZA==\n - aescbc:\n keys:", "completion": " - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==\n - name: key2\n secret: dGhpcyBpcyBwYXNzd29yZA==\n - secretbox:\n keys:\n - name: key1\n secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=\n - resources:\n - events\n providers:\n - identity: {} # do not encrypt Events even though *.* is specified below\n - resources:\n - '*.apps' # wildcard match requires Kubernetes 1.27 or later\n providers:\n - aescbc:\n keys:\n - name: key2\n secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ/Cg==\n - resources:\n - '*.*' # wildcard match requires Kubernetes 1.27 or later\n providers:\n - aescbc:\n keys:\n - name: key3\n secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | Each resources array item is a separate config and contains a complete configuration. The resources.resources field is an array of Kubernetes resource names (resource or resource.group) that should be", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7148"} {"prompt": "Explain this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example # a custom resource API\n providers:\n # This configuration does not provide data confidentiality. 
The first\n # configured provider is specifying the \"identity\" mechanism, which\n # stores resources as plain text.\n #\n - identity: {} # plain text, in other words NO encryption\n - aesgcm:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==\n - name: key2\n secret: dGhpcyBpcyBwYXNzd29yZA==\n - aescbc:\n keys:\n - name: key1\n secret: c2VjcmV0IGlzIHNlY3VyZQ==\n - name: key2\n secret: dGhpcyBpcyBwYXNzd29yZA==\n - secretbox:\n keys:\n - name: key1\n secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=\n - resources:\n - events\n providers:\n - identity: {} # do not encrypt Events even though *.* is specified below\n - resources:\n - '*.apps' # wildcard match requires Kubernetes 1.27 or later\n providers:\n - aescbc:\n keys:\n - name: key2\n secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ/Cg==\n - resources:\n - '*.*' # wildcard match requires Kubernetes 1.27 or later\n providers:\n - aescbc:\n keys:\n - name: key3\n secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==", "completion": "This yaml code Encrypting Confidential Data at Rest | Each resources array item is a separate config and contains a complete configuration. The resources.resources field is an array of Kubernetes resource names (resource or resource.group) that should be", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | Each resources array item is a separate config and contains a complete configuration. The resources.resources field is an array of Kubernetes resource names (resource or resource.group) that should be", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7149"} {"prompt": "Complete this yaml code:\n\n...\n - resources:\n - configmaps. 
# specifically from the core API group,\n # because of trailing \".\"", "completion": " - events\n providers:\n - identity: {}\n # and then other entries in resources", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | Ensure that the exemption is listed before the wildcard '*.*' item in the resources array to give it precedence.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7150"} {"prompt": "Explain this yaml code:\n\n...\n - resources:\n - configmaps. # specifically from the core API group,\n # because of trailing \".\"\n - events\n providers:\n - identity: {}\n # and then other entries in resources", "completion": "This yaml code Encrypting Confidential Data at Rest | Ensure that the exemption is listed before the wildcard '*.*' item in the resources array to give it precedence.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | Ensure that the exemption is listed before the wildcard '*.*' item in the resources array to give it precedence.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7151"} {"prompt": "Explain this powershell code:\n\n# Do not run this in a session where you have set a random number\n# generator seed.\n[Convert]::ToBase64String((1..32|%{[byte](Get-Random -Max 256)}))", "completion": "This powershell code Encrypting Confidential Data at Rest | Using a secure mechanism for file transfer, make a copy of that encryption key available to every other control plane host.", "language": "powershell", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | Using a secure mechanism for file transfer, make a copy of that encryption key available to every other control plane host.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": 
"k8s_00245_code_explain_7152"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example", "completion": " providers:\n - aescbc:\n keys:\n - name: key1\n # See the following text for more details about the secret value\n secret: \n - identity: {} # this fallback allows reading unencrypted secrets;\n # for example, during initial migration", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | To create a new encryption key (that does not use KMS), see Generate the encryption key.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7153"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n - configmaps\n - pandas.awesome.bears.example\n providers:\n - aescbc:\n keys:\n - name: key1\n # See the following text for more details about the secret value\n secret: \n - identity: {} # this fallback allows reading unencrypted secrets;\n # for example, during initial migration", "completion": "This yaml code Encrypting Confidential Data at Rest | To create a new encryption key (that does not use KMS), see Generate the encryption key.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | To create a new encryption key (that does not use KMS), see Generate the encryption key.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7154"} {"prompt": "Complete this yaml code:\n\n---\n#\n# This is a fragment of a manifest for a static Pod.\n# Check whether this is correct for your cluster and for your API server.\n#\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n 
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.20.30.40:443\n creationTimestamp: null\n labels:\n app.kubernetes.io/component: kube-apiserver\n tier: control-plane\n name: kube-apiserver\n namespace: kube-system\nspec:", "completion": " containers:\n - command:\n - kube-apiserver\n ...\n - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml # add this line\n volumeMounts:\n ...\n - name: enc # add this line\n mountPath: /etc/kubernetes/enc # add this line\n readOnly: true # add this line\n ...\n volumes:\n ...\n - name: enc # add this line\n hostPath: # add this line\n path: /etc/kubernetes/enc # add this line\n type: DirectoryOrCreate # add this line\n ...", "language": "yaml", "type": "pod-definition", "context": "Encrypting Confidential Data at Rest | Restart your API server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7155"} {"prompt": "Explain this yaml code:\n\n---\n#\n# This is a fragment of a manifest for a static Pod.\n# Check whether this is correct for your cluster and for your API server.\n#\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.20.30.40:443\n creationTimestamp: null\n labels:\n app.kubernetes.io/component: kube-apiserver\n tier: control-plane\n name: kube-apiserver\n namespace: kube-system\nspec:\n containers:\n - command:\n - kube-apiserver\n ...\n - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml # add this line\n volumeMounts:\n ...\n - name: enc # add this line\n mountPath: /etc/kubernetes/enc # add this line\n readOnly: true # add this line\n ...\n volumes:\n ...\n - name: enc # add this line\n hostPath: # add this line\n path: /etc/kubernetes/enc # add this line\n type: DirectoryOrCreate # add this line\n ...", "completion": "This yaml code Encrypting Confidential Data at Rest | Restart your API server.", "language": "yaml", "type": 
"pod-definition_explanation", "context": "Encrypting Confidential Data at Rest | Restart your API server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7156"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic secret1 -n default --from-literal=mykey=mydata", "completion": "This shell code Encrypting Confidential Data at Rest | Using the etcdctl command line tool, read that Secret out of etcd:", "language": "shell", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | Using the etcdctl command line tool, read that Secret out of etcd:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7157"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C", "completion": "This text code Encrypting Confidential Data at Rest | where [...] must be the additional arguments for connecting to the etcd server.", "language": "text", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | where [...] 
must be the additional arguments for connecting to the etcd server.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7158"} {"prompt": "Complete this shell code:\n\nETCDCTL_API=3 etcdctl \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\", "completion": " --cert=/etc/kubernetes/pki/etcd/server.crt \\\n --key=/etc/kubernetes/pki/etcd/server.key \\\n get /registry/secrets/default/secret1 | hexdump -C", "language": "shell", "type": "code-example", "context": "Encrypting Confidential Data at Rest | The output is similar to this (abbreviated):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7159"} {"prompt": "Explain this shell code:\n\nETCDCTL_API=3 etcdctl \\\n --cacert=/etc/kubernetes/pki/etcd/ca.crt \\\n --cert=/etc/kubernetes/pki/etcd/server.crt \\\n --key=/etc/kubernetes/pki/etcd/server.key \\\n get /registry/secrets/default/secret1 | hexdump -C", "completion": "This shell code Encrypting Confidential Data at Rest | The output is similar to this (abbreviated):", "language": "shell", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | The output is similar to this (abbreviated):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7160"} {"prompt": "Complete this hexdump code:\n\n00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|\n00000010 73 2f 64 65 66 61 75 6c 74 2f 73 65 63 72 65 74 |s/default/secret|\n00000020 31 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |1.k8s:enc:aescbc|\n00000030 3a 76 31 3a 6b 65 79 31 3a c7 6c e7 d3 09 bc 06 |:v1:key1:.l.....|\n00000040 25 51 91 e4 e0 6c e5 b1 4d 7a 8b 3d b9 c2 7c 6e |%Q...l..Mz.=..|n|", "completion": "00000050 b4 79 df 05 28 ae 0d 8e 5f 35 13 2c c0 18 99 3e |.y..(..._5.,...>|\n[...]\n00000110 23 3a 0d fc 28 ca 48 2d 6b 2d 46 cc 72 0b 70 4c |#:..(.H-k-F.r.pL|\n00000120 a5 fc 35 
43 12 4e 60 ef bf 6f fe cf df 0b ad 1f |..5C.N`..o......|\n00000130 82 c4 88 53 02 da 3e 66 ff 0a |...S..>f..|\n0000013a", "language": "hexdump", "type": "code-example", "context": "Encrypting Confidential Data at Rest | Verify the stored Secret is prefixed with k8s:enc:aescbc:v1: which indicates the aescbc provider has encrypted the resulting data. Confirm that the key name shown in etcd matches the key name specifie", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7161"} {"prompt": "Explain this hexdump code:\n\n00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|\n00000010 73 2f 64 65 66 61 75 6c 74 2f 73 65 63 72 65 74 |s/default/secret|\n00000020 31 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |1.k8s:enc:aescbc|\n00000030 3a 76 31 3a 6b 65 79 31 3a c7 6c e7 d3 09 bc 06 |:v1:key1:.l.....|\n00000040 25 51 91 e4 e0 6c e5 b1 4d 7a 8b 3d b9 c2 7c 6e |%Q...l..Mz.=..|n|\n00000050 b4 79 df 05 28 ae 0d 8e 5f 35 13 2c c0 18 99 3e |.y..(..._5.,...>|\n[...]\n00000110 23 3a 0d fc 28 ca 48 2d 6b 2d 46 cc 72 0b 70 4c |#:..(.H-k-F.r.pL|\n00000120 a5 fc 35 43 12 4e 60 ef bf 6f fe cf df 0b ad 1f |..5C.N`..o......|\n00000130 82 c4 88 53 02 da 3e 66 ff 0a |...S..>f..|\n0000013a", "completion": "This hexdump code Encrypting Confidential Data at Rest | Verify the stored Secret is prefixed with k8s:enc:aescbc:v1: which indicates the aescbc provider has encrypted the resulting data. Confirm that the key name shown in etcd matches the key name specifie", "language": "hexdump", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | Verify the stored Secret is prefixed with k8s:enc:aescbc:v1: which indicates the aescbc provider has encrypted the resulting data. 
Confirm that the key name shown in etcd matches the key name specifie", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7162"} {"prompt": "Explain this shell code:\n\n# Run this as an administrator that can read and write all Secrets\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This shell code Encrypting Confidential Data at Rest | The command above reads all Secrets and then updates them with the same data, in order to apply server side encryption.", "language": "shell", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | The command above reads all Secrets and then updates them with the same data, in order to apply server side encryption.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7163"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets", "completion": " providers:\n - aescbc:\n keys:\n - name: key1\n secret: \n - identity: {} # REMOVE THIS LINE", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | …and then restart each API server in turn. This change prevents the API server from accessing a plain-text Secret, even by accident.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7164"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n providers:\n - aescbc:\n keys:\n - name: key1\n secret: \n - identity: {} # REMOVE THIS LINE", "completion": "This yaml code Encrypting Confidential Data at Rest | …and then restart each API server in turn. 
This change prevents the API server from accessing a plain-text Secret, even by accident.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | …and then restart each API server in turn. This change prevents the API server from accessing a plain-text Secret, even by accident.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7165"} {"prompt": "Explain this bash code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This bash code Encrypting Confidential Data at Rest | This example shows how to stop encrypting the Secret API at rest. If you are encrypting other API kinds, adjust the steps to match.", "language": "bash", "type": "kubectl-command_explanation", "context": "Encrypting Confidential Data at Rest | This example shows how to stop encrypting the Secret API at rest. If you are encrypting other API kinds, adjust the steps to match.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7166"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n # list any other resources here that you previously were", "completion": " # encrypting at rest\n providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: key1\n secret: # keep this in place\n # make sure it comes after \"identity\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Encrypting Confidential Data at Rest | Then run the following command to force decryption of all Secrets:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_completion_7167"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apiserver.config.k8s.io/v1\nkind: EncryptionConfiguration\nresources:\n - resources:\n - secrets\n # list any other 
resources here that you previously were\n # encrypting at rest\n providers:\n - identity: {} # add this line\n - aescbc:\n keys:\n - name: key1\n secret: # keep this in place\n # make sure it comes after \"identity\"", "completion": "This yaml code Encrypting Confidential Data at Rest | Then run the following command to force decryption of all Secrets:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Encrypting Confidential Data at Rest | Then run the following command to force decryption of all Secrets:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7168"} {"prompt": "Explain this shell code:\n\nkubectl get secrets --all-namespaces -o json | kubectl replace -f -", "completion": "This shell code Encrypting Confidential Data at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "language": "shell", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest | Once you have replaced all existing encrypted resources with backing data that don't use encryption, you can remove the encryption settings from the kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7169"} {"prompt": "Explain this text code:\n\napiserver_encryption_config_controller_automatic_reload_last_timestamp_seconds", "completion": "This text code Encrypting Confidential Data at Rest |", "language": "text", "type": "code-example_explanation", "context": "Encrypting Confidential Data at Rest |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/", "id": "k8s_00245_code_explain_7170"} {"prompt": "Explain this text code:\n\nPSP_NAME=\"original\" # Set the name of the PSP you're checking for\nkubectl get pods --all-namespaces -o jsonpath=\"{range .items[?(@.metadata.annotations.kubernetes\\.io\\/psp=='$PSP_NAME')]}{.metadata.namespace} {.metadata.name}{'\\n'}{end}\"", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. 
Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7191"} {"prompt": "Explain this bash code:\n\nkubectl get pods -n $NAMESPACE -o jsonpath=\"{.items[*].metadata.annotations.kubernetes\\.io\\/psp}\" | tr \" \" \"\\n\" | sort -u", "completion": "This bash code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Once you have selected a Pod Security level for the namespace (or if you're trying several), it's a good idea to test it out first (you can skip this step if using the Privileged level). Pod Security", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Once you have selected a Pod Security level for the namespace (or if you're trying several), it's a good idea to test it out first (you can skip this step if using the Privileged level). 
Pod Security", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7192"} {"prompt": "Explain this text code:\n\n# $LEVEL is the level to dry-run, either \"baseline\" or \"restricted\".\nkubectl label --dry-run=server --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | This command will return a warning for any existing pods that are not valid under the proposed level.", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | This command will return a warning for any existing pods that are not valid under the proposed level.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7193"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/audit=$LEVEL", "completion": "This bash code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | If either of these approaches yield unexpected violations, you will need to either update the violating workloads to meet the policy requirements, or relax the namespace Pod Security level.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | If either of these approaches yield unexpected violations, you will need to either update the violating workloads to meet the policy requirements, or relax the namespace Pod Security level.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7194"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL", "completion": "This bash code 
Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Finally, you can effectively bypass PodSecurityPolicy at the namespace level by binding the fully privileged PSP to all service accounts in the namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Finally, you can effectively bypass PodSecurityPolicy at the namespace level by binding the fully privileged PSP to all service accounts in the namespace.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7195"} {"prompt": "Complete this text code:\n\n# The following cluster-scoped commands are only needed once.\nkubectl apply -f privileged-psp.yaml\nkubectl create clusterrole privileged-psp --verb use --resource podsecuritypolicies.policy --resource-name privileged", "completion": "\n# Per-namespace disable\nkubectl create -n $NAMESPACE rolebinding disable-psp --clusterrole privileged-psp --group system:serviceaccounts:$NAMESPACE", "language": "text", "type": "code-example", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_completion_7196"} {"prompt": "Explain this text code:\n\n# The following cluster-scoped commands are only needed once.\nkubectl apply -f privileged-psp.yaml\nkubectl create clusterrole privileged-psp --verb use --resource podsecuritypolicies.policy --resource-name privileged\n\n# Per-namespace disable\nkubectl create -n $NAMESPACE rolebinding disable-psp --clusterrole privileged-psp --group system:serviceaccounts:$NAMESPACE", "completion": 
"This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7197"} {"prompt": "Explain this text code:\n\n# Undo PodSecurityPolicy disablement.\nkubectl delete -n $NAMESPACE rolebinding disable-psp", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Now that existing namespaces have been updated to enforce Pod Security Admission, you should ensure that your processes and/or policies for creating new namespaces are updated to ensure that an approp", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Now that existing namespaces have been updated to enforce Pod Security Admission, you should ensure that your processes and/or policies for creating new namespaces are updated to ensure that an approp", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7198"} {"prompt": "Explain this text code:\n\nI0218 00:59:44.903329 13 plugins.go:158] Loaded 16 mutating admission controller(s) successfully in the following order: 
NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,ExtendedResourceToleration,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.\nI0218 00:59:44.903350 13 plugins.go:161] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DenyServiceExternalIPs,ValidatingAdmissionWebhook,ResourceQuota.", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | You should see PodSecurity (in the validating admission controllers), and neither list should contain PodSecurityPolicy.", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | You should see PodSecurity (in the validating admission controllers), and neither list should contain PodSecurityPolicy.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7199"} {"prompt": "Explain this text code:\n\n.metadata.annotations['seccomp.security.alpha.kubernetes.io/defaultProfileName']", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | There are several fields in PodSecurityPolicy that are not covered by the Pod Security Standards. If you must enforce these options, you will need to supplement Pod Security Admission with an admissio", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | There are several fields in PodSecurityPolicy that are not covered by the Pod Security Standards. 
If you must enforce these options, you will need to supplement Pod Security Admission with an admissio", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7200"} {"prompt": "Explain this text code:\n\n.metadata.annotations['apparmor.security.beta.kubernetes.io/defaultProfileName']", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | There are several fields in PodSecurityPolicy that are not covered by the Pod Security Standards. If you must enforce these options, you will need to supplement Pod Security Admission with an admissio", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | There are several fields in PodSecurityPolicy that are not covered by the Pod Security Standards. If you must enforce these options, you will need to supplement Pod Security Admission with an admissio", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7201"} {"prompt": "Explain this sh code:\n\nPSP_NAME=\"original\" # Set the name of the PSP you're checking for\nkubectl get pods --all-namespaces -o jsonpath=\"{range .items[?(@.metadata.annotations.kubernetes\\.io\\/psp=='$PSP_NAME')]}{.metadata.namespace} {.metadata.name}{'\\n'}{end}\"", "completion": "This sh code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "language": "sh", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. 
Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7202"} {"prompt": "Explain this text code:\n\n.metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7203"} {"prompt": "Explain this sh code:\n\nkubectl get pods -n $NAMESPACE -o jsonpath=\"{.items[*].metadata.annotations.kubernetes\\.io\\/psp}\" | tr \" \" \"\\n\" | sort -u", "completion": "This sh code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Once you have selected a Pod Security level for the namespace (or if you're trying several), it's a good idea to test it out first (you can skip this step if using the Privileged level). Pod Security", "language": "sh", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Once you have selected a Pod Security level for the namespace (or if you're trying several), it's a good idea to test it out first (you can skip this step if using the Privileged level). 
Pod Security", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7204"} {"prompt": "Explain this sh code:\n\n# $LEVEL is the level to dry-run, either \"baseline\" or \"restricted\".\nkubectl label --dry-run=server --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL", "completion": "This sh code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | This command will return a warning for any existing pods that are not valid under the proposed level.", "language": "sh", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | This command will return a warning for any existing pods that are not valid under the proposed level.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7205"} {"prompt": "Explain this sh code:\n\nkubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/audit=$LEVEL", "completion": "This sh code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | If either of these approaches yield unexpected violations, you will need to either update the violating workloads to meet the policy requirements, or relax the namespace Pod Security level.", "language": "sh", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | If either of these approaches yield unexpected violations, you will need to either update the violating workloads to meet the policy requirements, or relax the namespace Pod Security level.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7206"} {"prompt": "Explain this sh code:\n\nkubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL", "completion": "This sh code Migrate from 
PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Finally, you can effectively bypass PodSecurityPolicy at the namespace level by binding the fully privileged PSP to all service accounts in the namespace.", "language": "sh", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Finally, you can effectively bypass PodSecurityPolicy at the namespace level by binding the fully privileged PSP to all service accounts in the namespace.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7207"} {"prompt": "Complete this sh code:\n\n# The following cluster-scoped commands are only needed once.\nkubectl apply -f privileged-psp.yaml\nkubectl create clusterrole privileged-psp --verb use --resource podsecuritypolicies.policy --resource-name privileged", "completion": "\n# Per-namespace disable\nkubectl create -n $NAMESPACE rolebinding disable-psp --clusterrole privileged-psp --group system:serviceaccounts:$NAMESPACE", "language": "sh", "type": "code-example", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_completion_7208"} {"prompt": "Explain this sh code:\n\n# The following cluster-scoped commands are only needed once.\nkubectl apply -f privileged-psp.yaml\nkubectl create clusterrole privileged-psp --verb use --resource podsecuritypolicies.policy --resource-name privileged\n\n# Per-namespace disable\nkubectl create -n $NAMESPACE rolebinding disable-psp --clusterrole privileged-psp --group system:serviceaccounts:$NAMESPACE", "completion": "This sh code Migrate from 
PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "language": "sh", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7209"} {"prompt": "Explain this sh code:\n\n# Undo PodSecurityPolicy disablement.\nkubectl delete -n $NAMESPACE rolebinding disable-psp", "completion": "This sh code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Now that existing namespaces have been updated to enforce Pod Security Admission, you should ensure that your processes and/or policies for creating new namespaces are updated to ensure that an approp", "language": "sh", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Now that existing namespaces have been updated to enforce Pod Security Admission, you should ensure that your processes and/or policies for creating new namespaces are updated to ensure that an approp", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7210"} {"prompt": "Explain this text code:\n\nI0218 00:59:44.903329 13 plugins.go:158] Loaded 16 mutating admission controller(s) successfully in the following order: 
NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,ExtendedResourceToleration,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.\nI0218 00:59:44.903350 13 plugins.go:161] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DenyServiceExternalIPs,ValidatingAdmissionWebhook,ResourceQuota.", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | You should see PodSecurity (in the validating admission controllers), and neither list should contain PodSecurityPolicy.", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | You should see PodSecurity (in the validating admission controllers), and neither list should contain PodSecurityPolicy.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7211"} {"prompt": "Explain this text code:\n\nPSP_NAME=\"original\" # Set the name of the PSP you're checking for\nkubectl get pods --all-namespaces -o jsonpath=\"{range .items[?(@.metadata.annotations.kubernetes\\.io\\/psp=='$PSP_NAME')]}{.metadata.namespace} {.metadata.name}{'\\n'}{end}\"", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. 
Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | The following steps will need to be performed on every namespace in the cluster. Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7212"} {"prompt": "Explain this bash code:\n\nkubectl get pods -n $NAMESPACE -o jsonpath=\"{.items[*].metadata.annotations.kubernetes\\.io\\/psp}\" | tr \" \" \"\\n\" | sort -u", "completion": "This bash code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Once you have selected a Pod Security level for the namespace (or if you're trying several), it's a good idea to test it out first (you can skip this step if using the Privileged level). Pod Security", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Once you have selected a Pod Security level for the namespace (or if you're trying several), it's a good idea to test it out first (you can skip this step if using the Privileged level). 
Pod Security", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7213"} {"prompt": "Explain this text code:\n\n# $LEVEL is the level to dry-run, either \"baseline\" or \"restricted\".\nkubectl label --dry-run=server --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | This command will return a warning for any existing pods that are not valid under the proposed level.", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | This command will return a warning for any existing pods that are not valid under the proposed level.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7214"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/audit=$LEVEL", "completion": "This bash code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | If either of these approaches yield unexpected violations, you will need to either update the violating workloads to meet the policy requirements, or relax the namespace Pod Security level.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | If either of these approaches yield unexpected violations, you will need to either update the violating workloads to meet the policy requirements, or relax the namespace Pod Security level.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7215"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL", "completion": "This bash code 
Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Finally, you can effectively bypass PodSecurityPolicy at the namespace level by binding the fully privileged PSP to all service accounts in the namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Finally, you can effectively bypass PodSecurityPolicy at the namespace level by binding the fully privileged PSP to all service accounts in the namespace.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7216"} {"prompt": "Complete this text code:\n\n# The following cluster-scoped commands are only needed once.\nkubectl apply -f privileged-psp.yaml\nkubectl create clusterrole privileged-psp --verb use --resource podsecuritypolicies.policy --resource-name privileged", "completion": "\n# Per-namespace disable\nkubectl create -n $NAMESPACE rolebinding disable-psp --clusterrole privileged-psp --group system:serviceaccounts:$NAMESPACE", "language": "text", "type": "code-example", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_completion_7217"} {"prompt": "Explain this text code:\n\n# The following cluster-scoped commands are only needed once.\nkubectl apply -f privileged-psp.yaml\nkubectl create clusterrole privileged-psp --verb use --resource podsecuritypolicies.policy --resource-name privileged\n\n# Per-namespace disable\nkubectl create -n $NAMESPACE rolebinding disable-psp --clusterrole privileged-psp --group system:serviceaccounts:$NAMESPACE", "completion": 
"This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Since the privileged PSP is non-mutating, and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7218"} {"prompt": "Explain this text code:\n\n# Undo PodSecurityPolicy disablement.\nkubectl delete -n $NAMESPACE rolebinding disable-psp", "completion": "This text code Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Now that existing namespaces have been updated to enforce Pod Security Admission, you should ensure that your processes and/or policies for creating new namespaces are updated to ensure that an approp", "language": "text", "type": "code-example_explanation", "context": "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller | Now that existing namespaces have been updated to enforce Pod Security Admission, you should ensure that your processes and/or policies for creating new namespaces are updated to ensure that an approp", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/", "id": "k8s_00246_code_explain_7219"} {"prompt": "Complete this text code:\n\n# Diff resources included in pod.json\n kubectl diff -f pod.json", "completion": " \n # Diff file read from stdin\n cat service.yaml | kubectl diff -f -", "language": "text", "type": "code-example", "context": "kubectl diff | Number of objects to process 
in parallel when diffing against the live version. Larger number = faster, but more memory, I/O and CPU over that shorter period of time.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_diff/", "id": "k8s_00248_code_completion_7220"} {"prompt": "Explain this text code:\n\n# Diff resources included in pod.json\n kubectl diff -f pod.json\n \n # Diff file read from stdin\n cat service.yaml | kubectl diff -f -", "completion": "This text code kubectl diff | Number of objects to process in parallel when diffing against the live version. Larger number = faster, but more memory, I/O and CPU over that shorter period of time.", "language": "text", "type": "code-example_explanation", "context": "kubectl diff | Number of objects to process in parallel when diffing against the live version. Larger number = faster, but more memory, I/O and CPU over that shorter period of time.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_diff/", "id": "k8s_00248_code_explain_7221"} {"prompt": "Complete this text code:\n\n# Diff resources included in pod.json\n kubectl diff -f pod.json", "completion": " \n # Diff file read from stdin\n cat service.yaml | kubectl diff -f -", "language": "text", "type": "code-example", "context": "kubectl diff | Number of objects to process in parallel when diffing against the live version. Larger number = faster, but more memory, I/O and CPU over that shorter period of time.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_diff/", "id": "k8s_00248_code_completion_7222"} {"prompt": "Explain this text code:\n\n# Diff resources included in pod.json\n kubectl diff -f pod.json\n \n # Diff file read from stdin\n cat service.yaml | kubectl diff -f -", "completion": "This text code kubectl diff | Number of objects to process in parallel when diffing against the live version. 
Larger number = faster, but more memory, I/O and CPU over that shorter period of time.", "language": "text", "type": "code-example_explanation", "context": "kubectl diff | Number of objects to process in parallel when diffing against the live version. Larger number = faster, but more memory, I/O and CPU over that shorter period of time.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_diff/", "id": "k8s_00248_code_explain_7223"} {"prompt": "Complete this text code:\n\n---\nlayout: blog\ntitle: \"Your Title Here\"\ndraft: true # will be changed to date: YYYY-MM-DD before publication\nslug: lowercase-text-for-link-goes-here-no-spaces # optional", "completion": "author: >\n Author-1 (Affiliation),\n Author-2 (Affiliation),\n Author-3 (Affiliation) \n---", "language": "text", "type": "code-example", "context": "Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. The title you set in the front matter becomes the first-level heading for that page.", "source": "https://kubernetes.io/docs/contribute/blog/article-submission/", "id": "k8s_00254_code_completion_7224"} {"prompt": "Explain this text code:\n\n---\nlayout: blog\ntitle: \"Your Title Here\"\ndraft: true # will be changed to date: YYYY-MM-DD before publication\nslug: lowercase-text-for-link-goes-here-no-spaces # optional\nauthor: >\n Author-1 (Affiliation),\n Author-2 (Affiliation),\n Author-3 (Affiliation) \n---", "completion": "This text code Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. The title you set in the front matter becomes the first-level heading for that page.", "language": "text", "type": "code-example_explanation", "context": "Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. 
The title you set in the front matter becomes the first-level heading for that page.", "source": "https://kubernetes.io/docs/contribute/blog/article-submission/", "id": "k8s_00254_code_explain_7225"} {"prompt": "Complete this yaml code:\n\n---\nlayout: blog\ntitle: \"Your Title Here\"\ndraft: true # will be changed to date: YYYY-MM-DD before publication\nslug: lowercase-text-for-link-goes-here-no-spaces # optional", "completion": "author: >\n Author-1 (Affiliation),\n Author-2 (Affiliation),\n Author-3 (Affiliation) \n---", "language": "yaml", "type": "kubernetes-manifest", "context": "Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. The title you set in the front matter becomes the first-level heading for that page.", "source": "https://kubernetes.io/docs/contribute/blog/article-submission/", "id": "k8s_00254_code_completion_7226"} {"prompt": "Explain this yaml code:\n\n---\nlayout: blog\ntitle: \"Your Title Here\"\ndraft: true # will be changed to date: YYYY-MM-DD before publication\nslug: lowercase-text-for-link-goes-here-no-spaces # optional\nauthor: >\n Author-1 (Affiliation),\n Author-2 (Affiliation),\n Author-3 (Affiliation) \n---", "completion": "This yaml code Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. The title you set in the front matter becomes the first-level heading for that page.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. 
The title you set in the front matter becomes the first-level heading for that page.", "source": "https://kubernetes.io/docs/contribute/blog/article-submission/", "id": "k8s_00254_code_explain_7227"} {"prompt": "Complete this text code:\n\n---\nlayout: blog\ntitle: \"Your Title Here\"\ndraft: true # will be changed to date: YYYY-MM-DD before publication\nslug: lowercase-text-for-link-goes-here-no-spaces # optional", "completion": "author: >\n Author-1 (Affiliation),\n Author-2 (Affiliation),\n Author-3 (Affiliation) \n---", "language": "text", "type": "code-example", "context": "Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. The title you set in the front matter becomes the first-level heading for that page.", "source": "https://kubernetes.io/docs/contribute/blog/article-submission/", "id": "k8s_00254_code_completion_7228"} {"prompt": "Explain this text code:\n\n---\nlayout: blog\ntitle: \"Your Title Here\"\ndraft: true # will be changed to date: YYYY-MM-DD before publication\nslug: lowercase-text-for-link-goes-here-no-spaces # optional\nauthor: >\n Author-1 (Affiliation),\n Author-2 (Affiliation),\n Author-3 (Affiliation) \n---", "completion": "This text code Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. The title you set in the front matter becomes the first-level heading for that page.", "language": "text", "type": "code-example_explanation", "context": "Submitting articles to Kubernetes blogs | Make sure to use second-level Markdown headings (## not #) as the topmost heading level in the article. 
The title you set in the front matter becomes the first-level heading for that page.", "source": "https://kubernetes.io/docs/contribute/blog/article-submission/", "id": "k8s_00254_code_explain_7229"} {"prompt": "Complete this text code:\n\nDockercfg secrets are used to authenticate against Docker registries.\n ", "completion": " When using the Docker command line to push images, you can authenticate to a given registry by running:\n '$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL'.", "language": "text", "type": "code-example", "context": "kubectl create secret docker-registry | That produces a ~/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. The email address is optional.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_completion_7230"} {"prompt": "Explain this text code:\n\nDockercfg secrets are used to authenticate against Docker registries.\n \n When using the Docker command line to push images, you can authenticate to a given registry by running:\n '$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL'.", "completion": "This text code kubectl create secret docker-registry | That produces a ~/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. The email address is optional.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret docker-registry | That produces a ~/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. 
The email address is optional.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7231"} {"prompt": "Explain this text code:\n\nWhen creating applications, you may have a Docker registry that requires authentication. In order for the\n nodes to pull images on your behalf, they must have the credentials. You can provide this information\n by creating a dockercfg secret and attaching it to your service account.", "completion": "This text code kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7232"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry NAME --docker-username=user --docker-password=password --docker-email=email [--docker-server=string] [--from-file=[key=]source] [--dry-run=server|client|none]", "completion": "This bash code kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7233"} {"prompt": "Complete this text code:\n\n# If you do not already have a .dockercfg file, create a dockercfg secret directly\n kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": " \n # Create a new secret named my-secret from ~/.docker/config.json\n kubectl create secret docker-registry my-secret --from-file=path/to/.docker/config.json", "language": "text", "type": "code-example", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_completion_7234"} {"prompt": "Explain this text code:\n\n# If you do not already have a .dockercfg file, create a dockercfg secret directly\n kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL\n \n # Create a new secret named my-secret from ~/.docker/config.json\n kubectl create secret docker-registry my-secret --from-file=path/to/.docker/config.json", "completion": "This text code kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7235"} {"prompt": "Complete this text code:\n\nDockercfg secrets are used to authenticate against Docker registries.\n ", "completion": " When using the Docker command line to push images, you can authenticate to a given registry by running:\n '$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL'.", "language": "text", "type": "code-example", "context": "kubectl create secret docker-registry | That produces a ~/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. The email address is optional.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_completion_7236"} {"prompt": "Explain this text code:\n\nDockercfg secrets are used to authenticate against Docker registries.\n \n When using the Docker command line to push images, you can authenticate to a given registry by running:\n '$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL'.", "completion": "This text code kubectl create secret docker-registry | That produces a ~/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. The email address is optional.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret docker-registry | That produces a ~/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. 
The email address is optional.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7237"} {"prompt": "Explain this text code:\n\nWhen creating applications, you may have a Docker registry that requires authentication. In order for the\n nodes to pull images on your behalf, they must have the credentials. You can provide this information\n by creating a dockercfg secret and attaching it to your service account.", "completion": "This text code kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7238"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry NAME --docker-username=user --docker-password=password --docker-email=email [--docker-server=string] [--from-file=[key=]source] [--dry-run=server|client|none]", "completion": "This bash code kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7239"} {"prompt": "Complete this text code:\n\n# If you do not already have a .dockercfg file, create a dockercfg secret directly\n kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": " \n # Create a new secret named my-secret from ~/.docker/config.json\n kubectl create secret docker-registry my-secret --from-file=path/to/.docker/config.json", "language": "text", "type": "code-example", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_completion_7240"} {"prompt": "Explain this text code:\n\n# If you do not already have a .dockercfg file, create a dockercfg secret directly\n kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL\n \n # Create a new secret named my-secret from ~/.docker/config.json\n kubectl create secret docker-registry my-secret --from-file=path/to/.docker/config.json", "completion": "This text code kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret docker-registry | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/", "id": "k8s_00258_code_explain_7241"} {"prompt": "Explain this bash code:\n\nkubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]", "completion": "This bash code kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_expose/", "id": "k8s_00261_code_explain_7242"} {"prompt": "Complete this text code:\n\n# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rc nginx --port=80 --target-port=8000\n \n # Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000\n kubectl expose -f nginx-controller.yaml --port=80 --target-port=8000\n \n # Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"\n kubectl expose pod valid-pod --port=444 --name=frontend\n \n # Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "completion": " kubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https\n \n # Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.\n kubectl expose rc streamer --port=4100 
--protocol=UDP --name=video-stream\n \n # Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rs nginx --port=80 --target-port=8000\n \n # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000\n kubectl expose deployment nginx --port=80 --target-port=8000", "language": "text", "type": "code-example", "context": "kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_expose/", "id": "k8s_00261_code_completion_7243"} {"prompt": "Explain this text code:\n\n# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rc nginx --port=80 --target-port=8000\n \n # Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000\n kubectl expose -f nginx-controller.yaml --port=80 --target-port=8000\n \n # Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"\n kubectl expose pod valid-pod --port=444 --name=frontend\n \n # Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"\n kubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https\n \n # Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.\n kubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream\n \n # Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rs nginx --port=80 --target-port=8000\n \n # Create a service for 
an nginx deployment, which serves on port 80 and connects to the containers on port 8000\n kubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This text code kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_expose/", "id": "k8s_00261_code_explain_7244"} {"prompt": "Explain this bash code:\n\nkubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]", "completion": "This bash code kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_expose/", "id": "k8s_00261_code_explain_7245"} {"prompt": "Complete this text code:\n\n# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rc nginx --port=80 --target-port=8000\n \n # Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000\n kubectl expose -f nginx-controller.yaml --port=80 --target-port=8000\n \n # Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"\n kubectl expose pod valid-pod --port=444 --name=frontend\n \n # Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "completion": " kubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https\n \n # Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.\n kubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream\n \n # Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rs nginx --port=80 --target-port=8000\n \n # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000\n kubectl expose deployment nginx --port=80 --target-port=8000", "language": "text", "type": "code-example", "context": "kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_expose/", "id": "k8s_00261_code_completion_7246"} {"prompt": "Explain this text code:\n\n# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rc nginx --port=80 --target-port=8000\n \n # Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000\n kubectl expose -f nginx-controller.yaml --port=80 --target-port=8000\n \n # Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"\n kubectl expose pod valid-pod --port=444 --name=frontend\n \n # Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"\n kubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https\n \n # Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.\n kubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream\n \n # Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000\n kubectl expose rs nginx --port=80 --target-port=8000\n \n # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000\n kubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This text code kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl expose | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_expose/", "id": "k8s_00261_code_explain_7247"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7248"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. 
Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7249"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7250"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7251"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7252"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7253"} {"prompt": "Complete this yaml code:\n\napiVersion: 
gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7254"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7255"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this case, the GRPCRoute will match any traffic 
for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7256"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7257"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. 
Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7258"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7259"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7260"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7261"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7262"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: 
example-gateway\n hostnames:\n - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7263"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7264"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7265"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7266"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. 
Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7267"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7268"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. 
Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7269"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7270"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7271"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7272"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7273"} {"prompt": "Complete this yaml code:\n\napiVersion: 
gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7274"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7275"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "Gateway API | In this case, the GRPCRoute will match any traffic 
for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_completion_7276"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/services-networking/gateway/", "id": "k8s_00262_code_explain_7277"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"NodePort\" --port 8080", "completion": "This bash code Using a Service to Expose Your App | We have now a running Service called kubernetes-bootcamp. Here we see that the Service received a unique cluster-IP, an internal port and an external-IP (the IP of the Node).", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a Service to Expose Your App | We have now a running Service called kubernetes-bootcamp. 
Here we see that the Service received a unique cluster-IP, an internal port and an external-IP (the IP of the Node).", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7278"} {"prompt": "Explain this text code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho \"NODE_PORT=$NODE_PORT\"", "completion": "This text code Using a Service to Expose Your App | Now we can test that the app is exposed outside of the cluster using curl, the IP address of the Node and the externally exposed port:", "language": "text", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | Now we can test that the app is exposed outside of the cluster using curl, the IP address of the Node and the externally exposed port:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7279"} {"prompt": "Explain this text code:\n\nhttp://127.0.0.1:51082\n! 
Because you are using a Docker driver on darwin, the terminal needs to be open to run it.", "completion": "This text code Using a Service to Expose Your App | Then use the given URL to access the app:", "language": "text", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | Then use the given URL to access the app:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7280"} {"prompt": "Explain this text code:\n\nexport POD_NAME=\"$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\"\necho \"Name of the Pod: $POD_NAME\"", "completion": "This text code Using a Service to Expose Your App | To apply a new label we use the label subcommand followed by the object type, object name and the new label:", "language": "text", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | To apply a new label we use the label subcommand followed by the object type, object name and the new label:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7281"} {"prompt": "Explain this bash code:\n\nkubectl exec -ti $POD_NAME -- curl http://localhost:8080", "completion": "This bash code Using a Service to Expose Your App | We see here that the application is up. This is because the Deployment is managing the application. To shut down the application, you would need to delete the Deployment as well.", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a Service to Expose Your App | We see here that the application is up. This is because the Deployment is managing the application. 
To shut down the application, you would need to delete the Deployment as well.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7282"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"NodePort\" --port 8080", "completion": "This shell code Using a Service to Expose Your App | We have now a running Service called kubernetes-bootcamp. Here we see that the Service received a unique cluster-IP, an internal port and an external-IP (the IP of the Node).", "language": "shell", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | We have now a running Service called kubernetes-bootcamp. Here we see that the Service received a unique cluster-IP, an internal port and an external-IP (the IP of the Node).", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7283"} {"prompt": "Explain this shell code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho \"NODE_PORT=$NODE_PORT\"", "completion": "This shell code Using a Service to Expose Your App | Now we can test that the app is exposed outside of the cluster using curl, the IP address of the Node and the externally exposed port:", "language": "shell", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | Now we can test that the app is exposed outside of the cluster using curl, the IP address of the Node and the externally exposed port:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7284"} {"prompt": "Explain this text code:\n\nhttp://127.0.0.1:51082\n! 
Because you are using a Docker driver on darwin, the terminal needs to be open to run it.", "completion": "This text code Using a Service to Expose Your App | Then use the given URL to access the app:", "language": "text", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | Then use the given URL to access the app:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7285"} {"prompt": "Explain this shell code:\n\nexport POD_NAME=\"$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\"\necho \"Name of the Pod: $POD_NAME\"", "completion": "This shell code Using a Service to Expose Your App | To apply a new label we use the label subcommand followed by the object type, object name and the new label:", "language": "shell", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | To apply a new label we use the label subcommand followed by the object type, object name and the new label:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7286"} {"prompt": "Explain this shell code:\n\nkubectl exec -ti $POD_NAME -- curl http://localhost:8080", "completion": "This shell code Using a Service to Expose Your App | We see here that the application is up. This is because the Deployment is managing the application. To shut down the application, you would need to delete the Deployment as well.", "language": "shell", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | We see here that the application is up. This is because the Deployment is managing the application. 
To shut down the application, you would need to delete the Deployment as well.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7287"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"NodePort\" --port 8080", "completion": "This bash code Using a Service to Expose Your App | We have now a running Service called kubernetes-bootcamp. Here we see that the Service received a unique cluster-IP, an internal port and an external-IP (the IP of the Node).", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a Service to Expose Your App | We have now a running Service called kubernetes-bootcamp. Here we see that the Service received a unique cluster-IP, an internal port and an external-IP (the IP of the Node).", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7288"} {"prompt": "Explain this text code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho \"NODE_PORT=$NODE_PORT\"", "completion": "This text code Using a Service to Expose Your App | Now we can test that the app is exposed outside of the cluster using curl, the IP address of the Node and the externally exposed port:", "language": "text", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | Now we can test that the app is exposed outside of the cluster using curl, the IP address of the Node and the externally exposed port:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7289"} {"prompt": "Explain this text code:\n\nexport POD_NAME=\"$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\"\necho \"Name of the Pod: $POD_NAME\"", "completion": "This text code Using a Service to Expose 
Your App | To apply a new label we use the label subcommand followed by the object type, object name and the new label:", "language": "text", "type": "code-example_explanation", "context": "Using a Service to Expose Your App | To apply a new label we use the label subcommand followed by the object type, object name and the new label:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7290"} {"prompt": "Explain this bash code:\n\nkubectl exec -ti $POD_NAME -- curl http://localhost:8080", "completion": "This bash code Using a Service to Expose Your App | We see here that the application is up. This is because the Deployment is managing the application. To shut down the application, you would need to delete the Deployment as well.", "language": "bash", "type": "kubectl-command_explanation", "context": "Using a Service to Expose Your App | We see here that the application is up. This is because the Deployment is managing the application. To shut down the application, you would need to delete the Deployment as well.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/", "id": "k8s_00265_code_explain_7291"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:\n requests:", "completion": " memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7292"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7293"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-memory-request-limit.yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7294"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7295"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --output=yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7296"} {"prompt": "Complete this text code:\n\n...\nspec: \n 
containers: \n ...\n resources:", "completion": " requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "language": "text", "type": "code-example", "context": "Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7297"} {"prompt": "Explain this text code:\n\n...\nspec: \n containers: \n ...\n resources:\n requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "completion": "This text code Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7298"} {"prompt": "Explain this bash code:\n\nkubectl top pod memory-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. 
This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7299"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\nmemory-demo 162856960", "completion": "This text code Assign Pod-level CPU and memory resources | To specify a CPU request for a Pod, include the resources.requests.cpu field in the Pod spec manifest. To specify a CPU limit, include resources.limits.cpu.", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | To specify a CPU request for a Pod, include the resources.requests.cpu field in the Pod spec manifest. To specify a CPU limit, include resources.limits.cpu.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7300"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. 
The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7301"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7302"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-cpu-request-limit.yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7303"} {"prompt": "Explain this bash code:\n\nkubectl get pod cpu-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": 
"Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7304"} {"prompt": "Explain this bash code:\n\nkubectl get pod cpu-demo --output=yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7305"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...\n resources:", "completion": " limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7306"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This text code Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7307"} {"prompt": "Explain this bash code:\n\nkubectl top pod cpu-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory 
resources | This example output shows that the Pod is using 974 milliCPU, which is slightly less than the limit of 1 CPU specified in the Pod configuration.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | This example output shows that the Pod is using 974 milliCPU, which is slightly less than the limit of 1 CPU specified in the Pod configuration.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7308"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\ncpu-demo 974m ", "completion": "This text code Assign Pod-level CPU and memory resources | Recall that by setting -cpus \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. The container's CPU use is being throttled, because", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Recall that by setting -cpus \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. 
The container's CPU use is being throttled, because", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7309"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7310"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Assign Pod-level CPU and memory resources | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7311"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-resources.yaml --namespace=pod-resources-example", "completion": "This bash code 
Assign Pod-level CPU and memory resources | Verify that the Pod Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7312"} {"prompt": "Explain this bash code:\n\nkubectl get pod pod-resources-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7313"} {"prompt": "Explain this bash code:\n\nkubectl get pod pod-resources-demo --output=yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that one container in the Pod has a memory request of 50 MiB and a CPU request of 0.5 cores, with a memory limit of 100 MiB and a CPU limit of 0.5 cores. The Pod itself has a memory r", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that one container in the Pod has a memory request of 50 MiB and a CPU request of 0.5 cores, with a memory limit of 100 MiB and a CPU limit of 0.5 cores. 
The Pod itself has a memory r", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7314"} {"prompt": "Complete this text code:\n\n...\ncontainers:\n name: pod-resources-demo-ctr-1\n resources:\n requests:\n cpu: 500m\n memory: 50Mi\n limits:\n cpu: 500m\n memory: 100Mi", "completion": " ...\n name: pod-resources-demo-ctr-2\n resources: {} \nresources:\n limits:\n cpu: 1\n memory: 200Mi\n requests:\n cpu: 1\n memory: 100Mi\n...", "language": "text", "type": "code-example", "context": "Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal to 1 CPU and 100Mi of memory. Additionally, both containers together won't b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7315"} {"prompt": "Explain this text code:\n\n...\ncontainers:\n name: pod-resources-demo-ctr-1\n resources:\n requests:\n cpu: 500m\n memory: 50Mi\n limits:\n cpu: 500m\n memory: 100Mi\n ...\n name: pod-resources-demo-ctr-2\n resources: {} \nresources:\n limits:\n cpu: 1\n memory: 200Mi\n requests:\n cpu: 1\n memory: 100Mi\n...", "completion": "This text code Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal to 1 CPU and 100Mi of memory. Additionally, both containers together won't b", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal to 1 CPU and 100Mi of memory. 
Additionally, both containers together won't b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7316"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:\n requests:", "completion": " memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7317"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7318"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-memory-request-limit.yaml --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7319"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7320"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo --output=yaml --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | The output shows that the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7321"} {"prompt": "Complete this yaml code:\n\n...\nspec: \n 
containers: \n ...\n resources:", "completion": " requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7322"} {"prompt": "Explain this yaml code:\n\n...\nspec: \n containers: \n ...\n resources:\n requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "completion": "This yaml code Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7323"} {"prompt": "Explain this shell code:\n\nkubectl top pod memory-demo --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. 
This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7324"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\nmemory-demo 162856960", "completion": "This text code Assign Pod-level CPU and memory resources | To specify a CPU request for a Pod, include the resources.requests.cpu field in the Pod spec manifest. To specify a CPU limit, include resources.limits.cpu.", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | To specify a CPU request for a Pod, include the resources.requests.cpu field in the Pod spec manifest. To specify a CPU limit, include resources.limits.cpu.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7325"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. 
The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7326"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7327"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-cpu-request-limit.yaml --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7328"} {"prompt": "Explain this shell code:\n\nkubectl get pod cpu-demo --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": 
"Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7329"} {"prompt": "Explain this shell code:\n\nkubectl get pod cpu-demo --output=yaml --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | The output shows that the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7330"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n ...\n resources:", "completion": " limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "language": "yaml", "type": "kubernetes-manifest", "context": "Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7331"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This yaml code Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7332"} {"prompt": "Explain this shell code:\n\nkubectl top pod cpu-demo --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU 
and memory resources | This example output shows that the Pod is using 974 milliCPU, which is slightly less than the limit of 1 CPU specified in the Pod configuration.", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | This example output shows that the Pod is using 974 milliCPU, which is slightly less than the limit of 1 CPU specified in the Pod configuration.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7333"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\ncpu-demo 974m ", "completion": "This text code Assign Pod-level CPU and memory resources | Recall that by setting -cpus \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. The container's CPU use is being throttled, because", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Recall that by setting -cpus \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. 
The container's CPU use is being throttled, because", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7334"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7335"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Assign Pod-level CPU and memory resources | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7336"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-resources.yaml --namespace=pod-resources-example", "completion": "This shell 
code Assign Pod-level CPU and memory resources | Verify that the Pod Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7337"} {"prompt": "Explain this shell code:\n\nkubectl get pod pod-resources-demo --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7338"} {"prompt": "Explain this shell code:\n\nkubectl get pod pod-resources-demo --output=yaml --namespace=pod-resources-example", "completion": "This shell code Assign Pod-level CPU and memory resources | The output shows that one container in the Pod has a memory request of 50 MiB and a CPU request of 0.5 cores, with a memory limit of 100 MiB and a CPU limit of 0.5 cores. The Pod itself has a memory r", "language": "shell", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that one container in the Pod has a memory request of 50 MiB and a CPU request of 0.5 cores, with a memory limit of 100 MiB and a CPU limit of 0.5 cores. 
The Pod itself has a memory r", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7339"} {"prompt": "Complete this yaml code:\n\n...\ncontainers:\n name: pod-resources-demo-ctr-1\n resources:\n requests:\n cpu: 500m\n memory: 50Mi\n limits:\n cpu: 500m\n memory: 100Mi", "completion": " ...\n name: pod-resources-demo-ctr-2\n resources: {} \nresources:\n limits:\n cpu: 1\n memory: 200Mi\n requests:\n cpu: 1\n memory: 100Mi\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal to 1 CPU and 100Mi of memory. Additionally, both containers together won't b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7340"} {"prompt": "Explain this yaml code:\n\n...\ncontainers:\n name: pod-resources-demo-ctr-1\n resources:\n requests:\n cpu: 500m\n memory: 50Mi\n limits:\n cpu: 500m\n memory: 100Mi\n ...\n name: pod-resources-demo-ctr-2\n resources: {} \nresources:\n limits:\n cpu: 1\n memory: 200Mi\n requests:\n cpu: 1\n memory: 100Mi\n...", "completion": "This yaml code Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal to 1 CPU and 100Mi of memory. Additionally, both containers together won't b", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal to 1 CPU and 100Mi of memory. 
Additionally, both containers together won't b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7341"} {"prompt": "Complete this yaml code:\n\npods/resource/pod-level-memory-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:", "completion": " requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7342"} {"prompt": "Explain this yaml code:\n\npods/resource/pod-level-memory-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7343"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:\n requests:", "completion": " memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7344"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: pod-resources-example\nspec:\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n containers:\n - name: memory-demo-ctr\n image: nginx\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section in the manifest provides arguments for the container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7345"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-memory-request-limit.yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7346"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7347"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --output=yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7348"} {"prompt": "Complete this text code:\n\n...\nspec: \n 
containers: \n ...\n resources:", "completion": " requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "language": "text", "type": "code-example", "context": "Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7349"} {"prompt": "Explain this text code:\n\n...\nspec: \n containers: \n ...\n resources:\n requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "completion": "This text code Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7350"} {"prompt": "Explain this bash code:\n\nkubectl top pod memory-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. 
This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7351"} {"prompt": "Complete this yaml code:\n\npods/resource/pod-level-cpu-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7352"} {"prompt": "Explain this yaml code:\n\npods/resource/pod-level-cpu-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. 
The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7353"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7354"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | The args section of the configuration file provides arguments for the container when it starts. 
The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7355"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-cpu-request-limit.yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7356"} {"prompt": "Explain this bash code:\n\nkubectl get pod cpu-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7357"} {"prompt": "Explain this bash code:\n\nkubectl get pod cpu-demo --output=yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7358"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...\n resources:", "completion": " 
limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7359"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This text code Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7360"} {"prompt": "Explain this bash code:\n\nkubectl top pod cpu-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | This example output shows that the Pod is using 974 milliCPU, which is slightly less than the limit of 1 CPU specified in the Pod configuration.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | This example output shows that the Pod is using 974 milliCPU, which is slightly less than the limit of 1 CPU specified in the Pod configuration.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7361"} {"prompt": "Complete this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"", "completion": " containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n 
cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7362"} {"prompt": "Explain this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Assign Pod-level CPU and memory resources | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7363"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7364"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code Assign Pod-level CPU and memory resources | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pod-level CPU and memory resources | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7365"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/pod-level-resources.yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | Verify that the Pod Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | Verify that the Pod Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7366"} {"prompt": "Explain this bash code:\n\nkubectl get pod-resources-demo --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | View detailed information about the Pod:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7367"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --output=yaml --namespace=pod-resources-example", "completion": "This bash code Assign Pod-level CPU and memory resources | The output shows that one container in the Pod has a memory request of 50 MiB and a CPU request of 0.5 cores, with a memory limit of 100 MiB and a CPU limit of 0.5 cores. The Pod itself has a memory r", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pod-level CPU and memory resources | The output shows that one container in the Pod has a memory request of 50 MiB and a CPU request of 0.5 cores, with a memory limit of 100 MiB and a CPU limit of 0.5 cores. The Pod itself has a memory r", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7368"} {"prompt": "Complete this text code:\n\n...\ncontainers:\n name: pod-resources-demo-ctr-1\n resources:\n requests:\n cpu: 500m\n memory: 50Mi\n limits:\n cpu: 500m\n memory: 100Mi", "completion": " ...\n name: pod-resources-demo-ctr-2\n resources: {} \nresources:\n limits:\n cpu: 1\n memory: 200Mi\n requests:\n cpu: 1\n memory: 100Mi\n...", "language": "text", "type": "code-example", "context": "Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal 1 core or CPU and 100Mi of memory. 
Additionally, both containers together won't b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_completion_7369"} {"prompt": "Explain this text code:\n\n...\ncontainers:\n name: pod-resources-demo-ctr-1\n resources:\n requests:\n cpu: 500m\n memory: 50Mi\n limits:\n cpu: 500m\n memory: 100Mi\n ...\n name: pod-resources-demo-ctr-2\n resources: {} \nresources:\n limits:\n cpu: 1\n memory: 200Mi\n requests:\n cpu: 1\n memory: 100Mi\n...", "completion": "This text code Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal 1 core or CPU and 100Mi of memory. Additionally, both containers together won't b", "language": "text", "type": "code-example_explanation", "context": "Assign Pod-level CPU and memory resources | Since pod-level requests and limits are specified, the request guarantees for both containers in the pod will be equal 1 core or CPU and 100Mi of memory. 
Additionally, both containers together won't b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pod-level-resources/", "id": "k8s_00266_code_explain_7370"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7371"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7372"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/image-volumes.yaml", "completion": "This bash code Use an Image Volume With a Pod | Attach to the container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use an Image Volume With a Pod | Attach to the container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7373"} {"prompt": "Complete this yaml code:\n\napiVersion: 
v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "completion": " volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7374"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7375"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/image-volumes-subpath.yaml", "completion": "This bash code Use an Image Volume With a Pod | Attach to the container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use an Image Volume With a Pod | Attach to the container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7376"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: 
/volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7377"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7378"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/image-volumes.yaml", "completion": "This shell code Use an Image Volume With a Pod | Attach to the container:", "language": "shell", "type": "code-example_explanation", "context": "Use an Image Volume With a Pod | Attach to the container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7379"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "completion": " volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on 
your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7380"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7381"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/image-volumes-subpath.yaml", "completion": "This shell code Use an Image Volume With a Pod | Attach to the container:", "language": "shell", "type": "code-example_explanation", "context": "Use an Image Volume With a Pod | Attach to the container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7382"} {"prompt": "Complete this yaml code:\n\npods/image-volumes.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7383"} {"prompt": "Explain this yaml 
code:\n\npods/image-volumes.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7384"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7385"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": 
"k8s_00267_code_explain_7386"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/image-volumes.yaml", "completion": "This bash code Use an Image Volume With a Pod | Attach to the container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use an Image Volume With a Pod | Attach to the container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7387"} {"prompt": "Complete this yaml code:\n\npods/image-volumes-subpath.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7388"} {"prompt": "Explain this yaml code:\n\npods/image-volumes-subpath.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7389"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n 
containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian", "completion": " volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_completion_7390"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n subPath: dir\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Use an Image Volume With a Pod | Create the pod on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Use an Image Volume With a Pod | Create the pod on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7391"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/image-volumes-subpath.yaml", "completion": "This bash code Use an Image Volume With a Pod | Attach to the container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use an Image Volume With a Pod | Attach to the container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/", "id": "k8s_00267_code_explain_7392"} {"prompt": "Complete this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:", "completion": " app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n 
app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "language": "yaml", "type": "kubernetes-manifest", "context": "Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_completion_7393"} {"prompt": "Explain this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "completion": "This yaml code Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_explain_7394"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "deployment-definition", "context": "Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_completion_7395"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code Recommended Labels | The Service is used to expose the application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_explain_7396"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "service-definition", "context": "Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. 
The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_completion_7397"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "language": "yaml", "type": "service-definition_explanation", "context": "Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_explain_7398"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "deployment-definition", "context": "Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_completion_7399"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n 
app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code Recommended Labels | The Service is used to expose WordPress:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_explain_7400"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_completion_7401"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "service-definition_explanation", "context": "Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_explain_7402"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n 
labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_completion_7403"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_explain_7404"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_completion_7405"} {"prompt": "Explain this 
yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "language": "yaml", "type": "service-definition_explanation", "context": "Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/", "id": "k8s_00270_code_explain_7406"} {"prompt": "Complete this text code:\n\n...\n containers:\n - name: fluentd-cloud-logging\n image: fluent/fluentd-kubernetes-daemonset:v1", "completion": " resources:\n limits:\n cpu: 100m\n memory: 200Mi", "language": "text", "type": "code-example", "context": "Considerations for large clusters | Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters. When running on large clusters, addons often consume more of som", "source": "https://kubernetes.io/docs/setup/best-practices/cluster-large/", "id": "k8s_00272_code_completion_7435"} {"prompt": "Explain this text code:\n\n...\n containers:\n - name: fluentd-cloud-logging\n image: fluent/fluentd-kubernetes-daemonset:v1\n resources:\n limits:\n cpu: 100m\n memory: 200Mi", "completion": "This text code Considerations for large clusters | Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
When running on large clusters, addons often consume more of som", "language": "text", "type": "code-example_explanation", "context": "Considerations for large clusters | Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters. When running on large clusters, addons often consume more of som", "source": "https://kubernetes.io/docs/setup/best-practices/cluster-large/", "id": "k8s_00272_code_explain_7436"} {"prompt": "Complete this yaml code:\n\n...\n containers:\n - name: fluentd-cloud-logging\n image: fluent/fluentd-kubernetes-daemonset:v1", "completion": " resources:\n limits:\n cpu: 100m\n memory: 200Mi", "language": "yaml", "type": "kubernetes-manifest", "context": "Considerations for large clusters | Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters. When running on large clusters, addons often consume more of som", "source": "https://kubernetes.io/docs/setup/best-practices/cluster-large/", "id": "k8s_00272_code_completion_7437"} {"prompt": "Explain this yaml code:\n\n...\n containers:\n - name: fluentd-cloud-logging\n image: fluent/fluentd-kubernetes-daemonset:v1\n resources:\n limits:\n cpu: 100m\n memory: 200Mi", "completion": "This yaml code Considerations for large clusters | Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters. When running on large clusters, addons often consume more of som", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Considerations for large clusters | Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters. 
When running on large clusters, addons often consume more of som", "source": "https://kubernetes.io/docs/setup/best-practices/cluster-large/", "id": "k8s_00272_code_explain_7438"} {"prompt": "Complete this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)", "completion": "# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "language": "text", "type": "code-example", "context": "User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_completion_7441"} {"prompt": "Explain this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)\n# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "completion": "This text code User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "language": "text", "type": "code-example_explanation", "context": "User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7442"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration.
Run", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_completion_7443"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7444"} {"prompt": "Complete this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)", "completion": "# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "language": "text", "type": "code-example", "context": "User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_completion_7445"} {"prompt": "Explain this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)\n# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "completion": "This text code User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "language": "text", "type": "code-example_explanation", "context": 
"User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7446"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_completion_7447"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7448"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.runAsNonRoot", "completion": "This text code User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. 
In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7449"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsNonRoot", "completion": "This text code User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7450"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsUser", "completion": "This text code User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. 
In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7451"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_completion_7452"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. 
Run", "source": "https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/", "id": "k8s_00273_code_explain_7453"} {"prompt": "Explain this text code:\n\nincompatible CNI versions; config is \\\"1.0.0\\\", plugin supports [\\\"0.1.0\\\" \\\"0.2.0\\\" \\\"0.3.0\\\" \\\"0.3.1\\\" \\\"0.4.0\\\"]\"", "completion": "This text code Troubleshooting CNI plugin-related errors | To fix this issue, update your CNI plugins and CNI config files.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting CNI plugin-related errors | To fix this issue, update your CNI plugins and CNI config files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_explain_7454"} {"prompt": "Explain this text code:\n\nERROR[2022-04-26T00:43:24.518165483Z] StopPodSandbox for \"b\" failed\nerror=\"failed to destroy network for sandbox \\\"bbc85f891eaf060c5a879e27bba9b6b06450210161dfdecfbb2732959fb6500a\\\": invalid version \\\"\\\": the version is empty\"", "completion": "This text code Troubleshooting CNI plugin-related errors | This error leaves the pod in the not-ready state with a network namespace still attached. To recover from this problem, edit the CNI config file to add the missing version information. The next attemp", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting CNI plugin-related errors | This error leaves the pod in the not-ready state with a network namespace still attached. To recover from this problem, edit the CNI config file to add the missing version information. 
The next attemp", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_explain_7455"} {"prompt": "Complete this text code:\n\ncat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"containerd-net\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"promiscMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"ranges\": [\n [{\n \"subnet\": \"10.88.0.0/16\"\n }],", "completion": " [{\n \"subnet\": \"2001:db8:4860::/64\"\n }]\n ],\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" },\n { \"dst\": \"::/0\" }\n ]\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}\nEOF", "language": "text", "type": "code-example", "context": "Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_completion_7456"} {"prompt": "Explain this text code:\n\ncat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"containerd-net\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"promiscMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"ranges\": [\n [{\n \"subnet\": \"10.88.0.0/16\"\n }],\n [{\n \"subnet\": \"2001:db8:4860::/64\"\n }]\n ],\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" },\n { \"dst\": \"::/0\" }\n ]\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}\nEOF", "completion": "This text code 
Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_explain_7457"} {"prompt": "Explain this text code:\n\nincompatible CNI versions; config is \\\"1.0.0\\\", plugin supports [\\\"0.1.0\\\" \\\"0.2.0\\\" \\\"0.3.0\\\" \\\"0.3.1\\\" \\\"0.4.0\\\"]\"", "completion": "This text code Troubleshooting CNI plugin-related errors | To fix this issue, update your CNI plugins and CNI config files.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting CNI plugin-related errors | To fix this issue, update your CNI plugins and CNI config files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_explain_7458"} {"prompt": "Explain this text code:\n\nERROR[2022-04-26T00:43:24.518165483Z] StopPodSandbox for \"b\" failed\nerror=\"failed to destroy network for sandbox \\\"bbc85f891eaf060c5a879e27bba9b6b06450210161dfdecfbb2732959fb6500a\\\": invalid version \\\"\\\": the version is empty\"", "completion": "This text code Troubleshooting CNI plugin-related errors | This error leaves the pod in the not-ready state with a network namespace still attached. To recover from this problem, edit the CNI config file to add the missing version information. 
The next attemp", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting CNI plugin-related errors | This error leaves the pod in the not-ready state with a network namespace still attached. To recover from this problem, edit the CNI config file to add the missing version information. The next attemp", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_explain_7459"} {"prompt": "Complete this bash code:\n\ncat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"containerd-net\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"promiscMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"ranges\": [\n [{\n \"subnet\": \"10.88.0.0/16\"\n }],", "completion": " [{\n \"subnet\": \"2001:db8:4860::/64\"\n }]\n ],\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" },\n { \"dst\": \"::/0\" }\n ]\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}\nEOF", "language": "bash", "type": "shell-command", "context": "Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_completion_7460"} {"prompt": "Explain this bash code:\n\ncat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"containerd-net\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"promiscMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"ranges\": [\n [{\n \"subnet\": \"10.88.0.0/16\"\n }],\n 
[{\n \"subnet\": \"2001:db8:4860::/64\"\n }]\n ],\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" },\n { \"dst\": \"::/0\" }\n ]\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}\nEOF", "completion": "This bash code Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "language": "bash", "type": "shell-command_explanation", "context": "Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_explain_7461"} {"prompt": "Complete this text code:\n\ncat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"containerd-net\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"promiscMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"ranges\": [\n [{\n \"subnet\": \"10.88.0.0/16\"\n }],", "completion": " [{\n \"subnet\": \"2001:db8:4860::/64\"\n }]\n ],\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" },\n { \"dst\": \"::/0\" }\n ]\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}\nEOF", "language": "text", "type": "code-example", "context": "Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_completion_7462"} {"prompt": "Explain 
this text code:\n\ncat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"containerd-net\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"promiscMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"ranges\": [\n [{\n \"subnet\": \"10.88.0.0/16\"\n }],\n [{\n \"subnet\": \"2001:db8:4860::/64\"\n }]\n ],\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" },\n { \"dst\": \"::/0\" }\n ]\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}\nEOF", "completion": "This text code Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting CNI plugin-related errors | Update the IP address ranges in the preceding example with ones that are based on your use case and network addressing plan.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/", "id": "k8s_00275_code_explain_7463"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"", "completion": " - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7464"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"\n - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. An init container maps the index to a static value and writes it to a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. 
An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7465"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'", "completion": " image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "language": "yaml", "type": "kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7466"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7467"} {"prompt": 
"Explain this text code:\n\n# This uses the first approach (relying on $JOB_COMPLETION_INDEX)\nkubectl apply -f https://kubernetes.io/examples/application/job/indexed-job.yaml", "completion": "This text code Indexed Job for Parallel Processing with Static Work Assignment | When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many can run at once whereas .spec.completions det", "language": "text", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many can run at once whereas .spec.completions det", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7468"} {"prompt": "Explain this text code:\n\n# The check for condition name is case insensitive\nkubectl wait --for=condition=complete --timeout=300s job/indexed-job", "completion": "This text code Indexed Job for Parallel Processing with Static Work Assignment | Now, describe the Job and check that it was successful.", "language": "text", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now, describe the Job and check that it was successful.", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7469"} {"prompt": "Complete this text code:\n\nName: indexed-job\nNamespace: default\nSelector: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\nLabels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\nAnnotations: \nParallelism: 3\nCompletions: 5\nStart Time: Thu, 11 Mar 2021 15:47:34 +0000\nPods Statuses: 2 Running / 3 Succeeded / 0 Failed\nCompleted Indexes: 0-2\nPod Template:\n Labels: 
controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\n Init Containers:\n input:\n Image: docker.io/library/bash\n Port: \n Host Port: \n Command:\n bash\n -c\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt\n\n Environment: ", "completion": " Mounts:\n /input from input (rw)\n Containers:\n worker:\n Image: docker.io/library/busybox\n Port: \n Host Port: \n Command:\n rev\n /input/data.txt\n Environment: \n Mounts:\n /input from input (rw)\n Volumes:\n input:\n Type: EmptyDir (a temporary directory that shares a pod's lifetime)\n Medium:\n SizeLimit: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-njkjj\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-9kd4h\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-qjwsz\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-fdhq5\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-ncslj", "language": "text", "type": "code-example", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In this example, you run the Job with custom values for each index. 
You can inspect the output of one of the pods:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7470"} {"prompt": "Explain this text code:\n\nName: indexed-job\nNamespace: default\nSelector: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\nLabels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\nAnnotations: \nParallelism: 3\nCompletions: 5\nStart Time: Thu, 11 Mar 2021 15:47:34 +0000\nPods Statuses: 2 Running / 3 Succeeded / 0 Failed\nCompleted Indexes: 0-2\nPod Template:\n Labels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\n Init Containers:\n input:\n Image: docker.io/library/bash\n Port: \n Host Port: \n Command:\n bash\n -c\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt\n\n Environment: \n Mounts:\n /input from input (rw)\n Containers:\n worker:\n Image: docker.io/library/busybox\n Port: \n Host Port: \n Command:\n rev\n /input/data.txt\n Environment: \n Mounts:\n /input from input (rw)\n Volumes:\n input:\n Type: EmptyDir (a temporary directory that shares a pod's lifetime)\n Medium:\n SizeLimit: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-njkjj\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-9kd4h\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-qjwsz\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-fdhq5\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-ncslj", "completion": "This text code Indexed Job for Parallel Processing with Static Work Assignment | In this example, you run the Job with custom values for each index. 
You can inspect the output of one of the pods:", "language": "text", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In this example, you run the Job with custom values for each index. You can inspect the output of one of the pods:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7471"} {"prompt": "Explain this bash code:\n\nkubectl logs indexed-job-fdhq5 # Change this to match the name of a Pod from that Job", "completion": "This bash code Indexed Job for Parallel Processing with Static Work Assignment | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7472"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"", "completion": " - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. 
An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7473"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"\n - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. An init container maps the index to a static value and writes it to a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. 
An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7474"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'", "completion": " image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "language": "yaml", "type": "kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7475"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7476"} {"prompt": 
"Explain this shell code:\n\n# This uses the first approach (relying on $JOB_COMPLETION_INDEX)\nkubectl apply -f https://kubernetes.io/examples/application/job/indexed-job.yaml", "completion": "This shell code Indexed Job for Parallel Processing with Static Work Assignment | When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many can run at once whereas .spec.completions det", "language": "shell", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many can run at once whereas .spec.completions det", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7477"} {"prompt": "Explain this shell code:\n\n# The check for condition name is case insensitive\nkubectl wait --for=condition=complete --timeout=300s job/indexed-job", "completion": "This shell code Indexed Job for Parallel Processing with Static Work Assignment | Now, describe the Job and check that it was successful.", "language": "shell", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now, describe the Job and check that it was successful.", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7478"} {"prompt": "Complete this text code:\n\nName: indexed-job\nNamespace: default\nSelector: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\nLabels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\nAnnotations: \nParallelism: 3\nCompletions: 5\nStart Time: Thu, 11 Mar 2021 15:47:34 +0000\nPods Statuses: 2 Running / 3 Succeeded / 0 Failed\nCompleted Indexes: 0-2\nPod Template:\n Labels: 
controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\n Init Containers:\n input:\n Image: docker.io/library/bash\n Port: \n Host Port: \n Command:\n bash\n -c\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt\n\n Environment: ", "completion": " Mounts:\n /input from input (rw)\n Containers:\n worker:\n Image: docker.io/library/busybox\n Port: \n Host Port: \n Command:\n rev\n /input/data.txt\n Environment: \n Mounts:\n /input from input (rw)\n Volumes:\n input:\n Type: EmptyDir (a temporary directory that shares a pod's lifetime)\n Medium:\n SizeLimit: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-njkjj\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-9kd4h\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-qjwsz\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-fdhq5\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-ncslj", "language": "text", "type": "code-example", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In this example, you run the Job with custom values for each index. 
You can inspect the output of one of the pods:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7479"} {"prompt": "Explain this text code:\n\nName: indexed-job\nNamespace: default\nSelector: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\nLabels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\nAnnotations: \nParallelism: 3\nCompletions: 5\nStart Time: Thu, 11 Mar 2021 15:47:34 +0000\nPods Statuses: 2 Running / 3 Succeeded / 0 Failed\nCompleted Indexes: 0-2\nPod Template:\n Labels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756\n job-name=indexed-job\n Init Containers:\n input:\n Image: docker.io/library/bash\n Port: \n Host Port: \n Command:\n bash\n -c\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt\n\n Environment: \n Mounts:\n /input from input (rw)\n Containers:\n worker:\n Image: docker.io/library/busybox\n Port: \n Host Port: \n Command:\n rev\n /input/data.txt\n Environment: \n Mounts:\n /input from input (rw)\n Volumes:\n input:\n Type: EmptyDir (a temporary directory that shares a pod's lifetime)\n Medium:\n SizeLimit: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-njkjj\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-9kd4h\n Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-qjwsz\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-fdhq5\n Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-ncslj", "completion": "This text code Indexed Job for Parallel Processing with Static Work Assignment | In this example, you run the Job with custom values for each index. 
You can inspect the output of one of the pods:", "language": "text", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In this example, you run the Job with custom values for each index. You can inspect the output of one of the pods:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7480"} {"prompt": "Explain this shell code:\n\nkubectl logs indexed-job-fdhq5 # Change this to match the name of a Pod from that Job", "completion": "This shell code Indexed Job for Parallel Processing with Static Work Assignment | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7481"} {"prompt": "Complete this yaml code:\n\napplication/job/indexed-job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"", "completion": " - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. 
An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7482"} {"prompt": "Explain this yaml code:\n\napplication/job/indexed-job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"\n - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. An init container maps the index to a static value and writes it to a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. 
An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7483"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"", "completion": " - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. 
An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7484"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n initContainers:\n - name: 'input'\n image: 'docker.io/library/bash'\n command:\n - \"bash\"\n - \"-c\"\n - |\n items=(foo bar baz qux xyz)\n echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt \n volumeMounts:\n - mountPath: /input\n name: input\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n emptyDir: {}", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. An init container maps the index to a static value and writes it to a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. 
An init container maps the index to a static value and writes it to a", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7485"} {"prompt": "Complete this yaml code:\n\napplication/job/indexed-job-vol.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'", "completion": " image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "language": "yaml", "type": "kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7486"} {"prompt": "Explain this yaml code:\n\napplication/job/indexed-job-vol.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": 
"https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7487"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'", "completion": " image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "language": "yaml", "type": "kubernetes-manifest", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_completion_7488"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: 'indexed-job'\nspec:\n completions: 5\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: 'worker'\n image: 'docker.io/library/busybox'\n command:\n - \"rev\"\n - \"/input/data.txt\"\n volumeMounts:\n - mountPath: /input\n name: input\n volumes:\n - name: input\n downwardAPI:\n items:\n - path: \"data.txt\"\n fieldRef:\n fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']", "completion": "This yaml code Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now run the Job:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7489"} {"prompt": "Explain this text code:\n\n# This uses the first approach (relying on 
$JOB_COMPLETION_INDEX)\nkubectl apply -f https://kubernetes.io/examples/application/job/indexed-job.yaml", "completion": "This text code Indexed Job for Parallel Processing with Static Work Assignment | When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many can run at once whereas .spec.completions det", "language": "text", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many can run at once whereas .spec.completions det", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7490"} {"prompt": "Explain this text code:\n\n# The check for condition name is case insensitive\nkubectl wait --for=condition=complete --timeout=300s job/indexed-job", "completion": "This text code Indexed Job for Parallel Processing with Static Work Assignment | Now, describe the Job and check that it was successful.", "language": "text", "type": "code-example_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | Now, describe the Job and check that it was successful.", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", "id": "k8s_00276_code_explain_7491"} {"prompt": "Explain this bash code:\n\nkubectl logs indexed-job-fdhq5 # Change this to match the name of a Pod from that Job", "completion": "This bash code Indexed Job for Parallel Processing with Static Work Assignment | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Indexed Job for Parallel Processing with Static Work Assignment | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/", 
"id": "k8s_00276_code_explain_7492"} {"prompt": "Complete this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "language": "text", "type": "code-example", "context": "Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/pod-security-admission/", "id": "k8s_00277_code_completion_7493"} {"prompt": "Explain this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n\n# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "completion": "This text code Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "text", "type": "code-example_explanation", "context": "Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/pod-security-admission/", "id": "k8s_00277_code_explain_7494"} {"prompt": 
"Complete this yaml code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/pod-security-admission/", "id": "k8s_00277_code_completion_7495"} {"prompt": "Explain this yaml code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n\n# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "completion": "This yaml code Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/pod-security-admission/", "id": "k8s_00277_code_explain_7496"} {"prompt": "Explain this text 
code:\n\nsystem:serviceaccount:kube-system:replicaset-controller", "completion": "This text code Pod Security Admission | Updates to the following pod fields are exempt from policy checks, meaning that if a pod update request only changes these fields, it will not be denied even if the pod is in violation of the current", "language": "text", "type": "code-example_explanation", "context": "Pod Security Admission | Updates to the following pod fields are exempt from policy checks, meaning that if a pod update request only changes these fields, it will not be denied even if the pod is in violation of the current", "source": "https://kubernetes.io/docs/concepts/security/pod-security-admission/", "id": "k8s_00277_code_explain_7497"} {"prompt": "Complete this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "language": "text", "type": "code-example", "context": "Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/pod-security-admission/", "id": "k8s_00277_code_completion_7498"} {"prompt": "Explain this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/: \n\n# Optional: per-mode version label that can be used to pin the policy to 
the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/-version: ", "completion": "This text code Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "text", "type": "code-example_explanation", "context": "Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/security/pod-security-admission/", "id": "k8s_00277_code_explain_7499"} {"prompt": "Explain this text code:\n\n# The SSH tunnel continues running in the foreground after you run this\nssh -D 1080 -q -N username@kubernetes-remote-server.example", "completion": "This text code Use a SOCKS5 Proxy to Access the Kubernetes API | The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:", "language": "text", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7500"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LRMEMMW2 # shortened for readability \n server: https://:6443 # the \"Kubernetes API\" server, in other words the IP address of kubernetes-remote-server.example\n proxy-url: socks5://localhost:1080 # the \"SSH SOCKS5 proxy\" in the diagram above\n name: default\ncontexts:\n- context:\n cluster: default", "completion": " user: default\n name: default\ncurrent-context: default\nkind: Config\npreferences: {}\nusers:\n- name: default\n user:\n client-certificate-data: LS0tLS1CR== # shortened 
for readability\n client-key-data: LS0tLS1CRUdJT= # shortened for readability", "language": "yaml", "type": "kubernetes-manifest", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. F", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_completion_7501"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LRMEMMW2 # shortened for readability \n server: https://:6443 # the \"Kubernetes API\" server, in other words the IP address of kubernetes-remote-server.example\n proxy-url: socks5://localhost:1080 # the \"SSH SOCKS5 proxy\" in the diagram above\n name: default\ncontexts:\n- context:\n cluster: default\n user: default\n name: default\ncurrent-context: default\nkind: Config\npreferences: {}\nusers:\n- name: default\n user:\n client-certificate-data: LS0tLS1CR== # shortened for readability\n client-key-data: LS0tLS1CRUdJT= # shortened for readability", "completion": "This yaml code Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. F", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. 
F", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7502"} {"prompt": "Explain this text code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-85cb69466-klwq8 1/1 Running 0 5m46s", "completion": "This text code Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "language": "text", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7503"} {"prompt": "Explain this text code:\n\nHTTPS_PROXY=socks5://localhost:1080 kubectl get pods", "completion": "This text code Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "language": "text", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7504"} {"prompt": "Explain this shell code:\n\n# The SSH tunnel continues running in the foreground after you run this\nssh -D 1080 -q -N username@kubernetes-remote-server.example", "completion": "This shell code Use a SOCKS5 Proxy to Access the Kubernetes API | The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:", "language": "shell", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7505"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LRMEMMW2 # shortened for readability \n server: https://:6443 # the \"Kubernetes API\" server, in other words the IP address of kubernetes-remote-server.example\n proxy-url: socks5://localhost:1080 # the \"SSH SOCKS5 proxy\" in the diagram above\n name: default\ncontexts:\n- context:\n cluster: default", "completion": " user: default\n name: default\ncurrent-context: default\nkind: Config\npreferences: {}\nusers:\n- name: default\n user:\n client-certificate-data: LS0tLS1CR== # shortened for readability\n client-key-data: LS0tLS1CRUdJT= # shortened for readability", "language": "yaml", "type": "kubernetes-manifest", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. 
F", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_completion_7506"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LRMEMMW2 # shortened for readability \n server: https://:6443 # the \"Kubernetes API\" server, in other words the IP address of kubernetes-remote-server.example\n proxy-url: socks5://localhost:1080 # the \"SSH SOCKS5 proxy\" in the diagram above\n name: default\ncontexts:\n- context:\n cluster: default\n user: default\n name: default\ncurrent-context: default\nkind: Config\npreferences: {}\nusers:\n- name: default\n user:\n client-certificate-data: LS0tLS1CR== # shortened for readability\n client-key-data: LS0tLS1CRUdJT= # shortened for readability", "completion": "This yaml code Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. F", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. 
F", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7507"} {"prompt": "Explain this console code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-85cb69466-klwq8 1/1 Running 0 5m46s", "completion": "This console code Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "language": "console", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7508"} {"prompt": "Explain this shell code:\n\nHTTPS_PROXY=socks5://localhost:1080 kubectl get pods", "completion": "This shell code Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "language": "shell", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7509"} {"prompt": "Explain this text code:\n\n# The SSH tunnel continues running in the foreground after you run this\nssh -D 1080 -q -N username@kubernetes-remote-server.example", "completion": "This text code Use a SOCKS5 Proxy to Access the Kubernetes API | The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:", "language": "text", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7510"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LRMEMMW2 # shortened for readability \n server: https://:6443 # the \"Kubernetes API\" server, in other words the IP address of kubernetes-remote-server.example\n proxy-url: socks5://localhost:1080 # the \"SSH SOCKS5 proxy\" in the diagram above\n name: default\ncontexts:\n- context:\n cluster: default", "completion": " user: default\n name: default\ncurrent-context: default\nkind: Config\npreferences: {}\nusers:\n- name: default\n user:\n client-certificate-data: LS0tLS1CR== # shortened for readability\n client-key-data: LS0tLS1CRUdJT= # shortened for readability", "language": "yaml", "type": "kubernetes-manifest", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. 
F", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_completion_7511"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LRMEMMW2 # shortened for readability \n server: https://:6443 # the \"Kubernetes API\" server, in other words the IP address of kubernetes-remote-server.example\n proxy-url: socks5://localhost:1080 # the \"SSH SOCKS5 proxy\" in the diagram above\n name: default\ncontexts:\n- context:\n cluster: default\n user: default\n name: default\ncurrent-context: default\nkind: Config\npreferences: {}\nusers:\n- name: default\n user:\n client-certificate-data: LS0tLS1CR== # shortened for readability\n client-key-data: LS0tLS1CRUdJT= # shortened for readability", "completion": "This yaml code Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. F", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. 
F", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7512"} {"prompt": "Explain this text code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-85cb69466-klwq8 1/1 Running 0 5m46s", "completion": "This text code Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "language": "text", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7513"} {"prompt": "Explain this text code:\n\nHTTPS_PROXY=socks5://localhost:1080 kubectl get pods", "completion": "This text code Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "language": "text", "type": "code-example_explanation", "context": "Use a SOCKS5 Proxy to Access the Kubernetes API | Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/", "id": "k8s_00279_code_explain_7514"} {"prompt": "Explain this text code:\n\nI0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd\nI0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842\npods/mypod", "completion": "This text code Debug Pods | The next thing to check is whether the pod on the apiserver matches the pod you meant to create (e.g. in a yaml file on your local machine). 
For example, run kubectl get pods/mypod -o yaml > mypod-on-", "language": "text", "type": "code-example_explanation", "context": "Debug Pods | The next thing to check is whether the pod on the apiserver matches the pod you meant to create (e.g. in a yaml file on your local machine). For example, run kubectl get pods/mypod -o yaml > mypod-on-", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7515"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l kubernetes.io/service-name=${SERVICE_NAME}", "completion": "This bash code Debug Pods | Make sure that the endpoints in the EndpointSlices match up with the number of pods that you expect to be members of your service. For example, if your Service is for an nginx container with 3 replica", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Pods | Make sure that the endpoints in the EndpointSlices match up with the number of pods that you expect to be members of your service. 
For example, if your Service is for an nginx container with 3 replica", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7516"} {"prompt": "Complete this text code:\n\n...\nspec:", "completion": " - selector:\n name: nginx\n type: frontend", "language": "text", "type": "code-example", "context": "Debug Pods | You can use:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_completion_7517"} {"prompt": "Explain this text code:\n\n...\nspec:\n - selector:\n name: nginx\n type: frontend", "completion": "This text code Debug Pods | You can use:", "language": "text", "type": "code-example_explanation", "context": "Debug Pods | You can use:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7518"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=name=nginx,type=frontend", "completion": "This bash code Debug Pods | to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. Verify that the pod's containerPort matches up with the Service's targetPort", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Pods | to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. 
Verify that the pod's containerPort matches up with the Service's targetPort", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7519"} {"prompt": "Explain this shell code:\n\nI0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd\nI0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842\npods/mypod", "completion": "This shell code Debug Pods | The next thing to check is whether the pod on the apiserver matches the pod you meant to create (e.g. in a yaml file on your local machine). For example, run kubectl get pods/mypod -o yaml > mypod-on-", "language": "shell", "type": "code-example_explanation", "context": "Debug Pods | The next thing to check is whether the pod on the apiserver matches the pod you meant to create (e.g. in a yaml file on your local machine). For example, run kubectl get pods/mypod -o yaml > mypod-on-", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7520"} {"prompt": "Explain this bash code:\n\nkubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml", "completion": "This bash code Debug Pods | Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't create pods, then please refer to the instructions above to debug your pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Pods | Replication controllers are fairly straightforward. They can either create Pods or they can't. 
If they can't create pods, then please refer to the instructions above to debug your pods.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7521"} {"prompt": "Explain this shell code:\n\nkubectl get endpointslices -l kubernetes.io/service-name=${SERVICE_NAME}", "completion": "This shell code Debug Pods | Make sure that the endpoints in the EndpointSlices match up with the number of pods that you expect to be members of your service. For example, if your Service is for an nginx container with 3 replica", "language": "shell", "type": "code-example_explanation", "context": "Debug Pods | Make sure that the endpoints in the EndpointSlices match up with the number of pods that you expect to be members of your service. For example, if your Service is for an nginx container with 3 replica", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7522"} {"prompt": "Complete this yaml code:\n\n...\nspec:", "completion": " - selector:\n name: nginx\n type: frontend", "language": "yaml", "type": "kubernetes-manifest", "context": "Debug Pods | You can use:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_completion_7523"} {"prompt": "Explain this yaml code:\n\n...\nspec:\n - selector:\n name: nginx\n type: frontend", "completion": "This yaml code Debug Pods | You can use:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Debug Pods | You can use:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7524"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=name=nginx,type=frontend", "completion": "This shell code Debug Pods | to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. 
Verify that the pod's containerPort matches up with the Service's targetPort", "language": "shell", "type": "code-example_explanation", "context": "Debug Pods | to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. Verify that the pod's containerPort matches up with the Service's targetPort", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7525"} {"prompt": "Explain this text code:\n\nI0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd\nI0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842\npods/mypod", "completion": "This text code Debug Pods | The next thing to check is whether the pod on the apiserver matches the pod you meant to create (e.g. in a yaml file on your local machine). For example, run kubectl get pods/mypod -o yaml > mypod-on-", "language": "text", "type": "code-example_explanation", "context": "Debug Pods | The next thing to check is whether the pod on the apiserver matches the pod you meant to create (e.g. in a yaml file on your local machine). For example, run kubectl get pods/mypod -o yaml > mypod-on-", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7526"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l kubernetes.io/service-name=${SERVICE_NAME}", "completion": "This bash code Debug Pods | Make sure that the endpoints in the EndpointSlices match up with the number of pods that you expect to be members of your service. For example, if your Service is for an nginx container with 3 replica", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Pods | Make sure that the endpoints in the EndpointSlices match up with the number of pods that you expect to be members of your service. 
For example, if your Service is for an nginx container with 3 replica", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7527"} {"prompt": "Complete this text code:\n\n...\nspec:", "completion": " - selector:\n name: nginx\n type: frontend", "language": "text", "type": "code-example", "context": "Debug Pods | You can use:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_completion_7528"} {"prompt": "Explain this text code:\n\n...\nspec:\n - selector:\n name: nginx\n type: frontend", "completion": "This text code Debug Pods | You can use:", "language": "text", "type": "code-example_explanation", "context": "Debug Pods | You can use:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7529"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=name=nginx,type=frontend", "completion": "This bash code Debug Pods | to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. Verify that the pod's containerPort matches up with the Service's targetPort", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Pods | to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. 
Verify that the pod's containerPort matches up with the Service's targetPort", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/", "id": "k8s_00282_code_explain_7530"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:", "completion": " - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7531"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": 
"https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7532"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7533"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 
label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7534"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity", "completion": " args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "language": "yaml", "type": "kubernetes-manifest", "context": "Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7535"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity\n args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "completion": "This yaml code Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to sati", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7536"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7537"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7538"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:", "completion": " - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7539"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7540"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool", "completion": " podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7541"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "completion": "This yaml code Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7542"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store", "completion": " spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7543"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "completion": "This yaml code Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. 
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7544"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:", "completion": " matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7545"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: 
In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "completion": "This yaml code Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7546"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7547"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "completion": "This yaml code Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7548"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " 
name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7549"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "completion": "This yaml code Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7550"} {"prompt": "Explain this text code:\n\nexample.com.node-restriction.kubernetes.io/fips=true", "completion": "This text code Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. K", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. 
K", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7551"} {"prompt": "Explain this text code:\n\nexample.com.node-restriction.kubernetes.io/pci-dss=true", "completion": "This text code Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. K", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. K", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7552"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:", "completion": " - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7553"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - 
matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7554"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7555"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n 
requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7556"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity", "completion": " args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "language": "yaml", "type": "kubernetes-manifest", "context": "Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7557"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity\n args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "completion": "This yaml code Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7558"} {"prompt": "Explain this text code:\n\npodAffinity.requiredDuringSchedulingIgnoredDuringExecution", "completion": "This text code Assigning Pods to Nodes | Soft Constraints (Scoring):", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | Soft Constraints (Scoring):", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7559"} {"prompt": "Explain this text code:\n\npodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution", "completion": "This text code Assigning Pods to Nodes | Soft Constraints (Scoring):", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | Soft Constraints (Scoring):", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7560"} {"prompt": "Explain this text code:\n\npodAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text code Assigning Pods to Nodes | Ignored Fields:", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | Ignored Fields:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7561"} {"prompt": "Explain this text code:\n\npodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text code Assigning Pods to Nodes | Ignored Fields:", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | Ignored Fields:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7562"} {"prompt": "Explain this text code:\n\npodAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text 
code Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7563"} {"prompt": "Explain this text code:\n\npodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text code Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that", "language": "text", "type": "code-example_explanation", "context": "Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. 
This is determined by verifying that", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7564"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7565"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7566"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:", "completion": " - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7567"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7568"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool", "completion": " podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7569"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "completion": "This yaml code Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7570"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store", "completion": " spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7571"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "completion": "This yaml code Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. 
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7572"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:", "completion": " matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7573"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: 
In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "completion": "This yaml code Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7574"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7575"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "completion": "This yaml code Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7576"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " 
name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7577"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "completion": "This yaml code Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. 
You", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7578"} {"prompt": "Complete this yaml code:\n\npods/pod-with-node-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In", "completion": " values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7579"} {"prompt": "Explain this yaml code:\n\npods/pod-with-node-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", 
"id": "k8s_00283_code_explain_7580"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:", "completion": " - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7581"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7582"} {"prompt": "Complete this yaml code:\n\npods/pod-with-affinity-preferred-weight.yaml\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:", "completion": " - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7583"} {"prompt": "Explain this yaml code:\n\npods/pod-with-affinity-preferred-weight.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": 
"pod-definition_explanation", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7584"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7585"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n 
values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7586"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity", "completion": " args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "language": "yaml", "type": "kubernetes-manifest", "context": "Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to satisfy", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7587"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity\n args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "completion": "This yaml code Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to satisfy", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to satisfy", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7588"} {"prompt": "Complete this yaml code:\n\npods/pod-with-pod-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7589"} {"prompt": "Explain this yaml code:\n\npods/pod-with-pod-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7590"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7591"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7592"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:", "completion": " - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity feature gate", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7593"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity feature gate", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity feature gate", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7594"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool", "completion": " podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7595"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "completion": "This yaml code Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7596"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store", "completion": " spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7597"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "completion": "This yaml code Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. 
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7598"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:", "completion": " matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "language": "yaml", "type": "deployment-definition", "context": "Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7599"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: 
In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "completion": "This yaml code Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7600"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7601"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "completion": "This yaml code Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7602"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " 
name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_completion_7603"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "completion": "This yaml code Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "language": "yaml", "type": "pod-definition_explanation", "context": "Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. 
You", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/", "id": "k8s_00283_code_explain_7604"} {"prompt": "Explain this text code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This text code Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "text", "type": "code-example_explanation", "context": "Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7605"} {"prompt": "Explain this text code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'", "completion": "This text code Init Containers | Wait for some time before starting the app container with a command like", "language": "text", "type": "code-example_explanation", "context": "Init Containers | Wait for some time before starting the app container with a command like", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7606"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_completion_7607"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7608"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 0/1 Init:0/2 0 6m", "completion": "This text code Init Containers | or for more details:", "language": "text", "type": "code-example_explanation", 
"context": "Init Containers | or for more details:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7609"} {"prompt": "Complete this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False", "completion": "[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "language": "text", "type": "code-example", "context": "Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_completion_7610"} {"prompt": "Explain this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From 
SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "completion": "This text code Init Containers | To see logs for the init containers in this Pod, run:", "language": "text", "type": "code-example_explanation", "context": "Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7611"} {"prompt": "Explain this bash code:\n\nkubectl logs myapp-pod -c init-myservice # Inspect the first init container\nkubectl logs myapp-pod -c init-mydb # Inspect the second init container", "completion": "This bash code Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "language": "bash", "type": "kubectl-command_explanation", "context": "Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7612"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", 
"language": "yaml", "type": "service-definition", "context": "Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_completion_7613"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "completion": "This yaml code Init Containers | To create the mydb and myservice services:", "language": "yaml", "type": "service-definition_explanation", "context": "Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7614"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 1/1 Running 0 9m", "completion": "This text code Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "language": "text", "type": "code-example_explanation", "context": "Init Containers | This simple example should provide some inspiration for you to create your own init containers. 
What's next contains a link to a more detailed example.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7615"} {"prompt": "Explain this shell code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This shell code Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "shell", "type": "code-example_explanation", "context": "Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7616"} {"prompt": "Explain this shell code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'", "completion": "This shell code Init Containers | Wait for some time before starting the app container with a command like", "language": "shell", "type": "code-example_explanation", "context": "Init Containers | Wait for some time before starting the app container with a command like", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7617"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_completion_7618"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7619"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 0/1 Init:0/2 0 6m", "completion": "This text code Init Containers | or for more details:", "language": "text", "type": "code-example_explanation", 
"context": "Init Containers | or for more details:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_explain_7620"} {"prompt": "Complete this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False", "completion": "[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "language": "text", "type": "code-example", "context": "Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/init-containers/", "id": "k8s_00284_code_completion_7621"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - 
containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7634"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7635"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code Run a Stateless Application Using a Deployment | Display information about the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Stateless Application Using a Deployment | Display information about the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7636"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=nginx\nReplicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable\nStrategyType: 
RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 1 max unavailable, 1 max surge\nPod Template:\n Labels: app=nginx\n Containers:", "completion": " nginx:\n Image: nginx:1.14.2\n Port: 80/TCP\n Environment: \n Mounts: \n Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: \nNewReplicaSet: nginx-deployment-1771418926 (2/2 replicas created)\nNo events.", "language": "text", "type": "code-example", "context": "Run a Stateless Application Using a Deployment | List the Pods created by the deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7637"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=nginx\nReplicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 1 max unavailable, 1 max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.14.2\n Port: 80/TCP\n Environment: \n Mounts: \n Volumes: \nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: \nNewReplicaSet: nginx-deployment-1771418926 (2/2 replicas created)\nNo events.", "completion": "This text code Run a Stateless Application Using a Deployment | List the Pods created by the deployment:", "language": "text", "type": "code-example_explanation", "context": "Run a Stateless Application Using a Deployment | List the Pods created by the deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7638"} {"prompt": "Explain this text 
code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1771418926-7o5ns 1/1 Running 0 16h\nnginx-deployment-1771418926-r18az 1/1 Running 0 16h", "completion": "This text code Run a Stateless Application Using a Deployment | Display information about a Pod:", "language": "text", "type": "code-example_explanation", "context": "Run a Stateless Application Using a Deployment | Display information about a Pod:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7639"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # Update the version of nginx from 1.14.2 to 1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7640"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # Update the version of nginx from 1.14.2 to 1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Apply the new YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7641"} {"prompt": "Explain 
this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-update.yaml", "completion": "This bash code Run a Stateless Application Using a Deployment | Watch the deployment create pods with new names and delete the old pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Stateless Application Using a Deployment | Watch the deployment create pods with new names and delete the old pods:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7642"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7643"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Apply the new YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7644"} {"prompt": "Explain this 
bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-scale.yaml", "completion": "This bash code Run a Stateless Application Using a Deployment | Verify that the Deployment has four Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Stateless Application Using a Deployment | Verify that the Deployment has four Pods:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7645"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-148880595-4zdqq 1/1 Running 0 25s", "completion": "nginx-deployment-148880595-6zgi1 1/1 Running 0 25s\nnginx-deployment-148880595-fxcez 1/1 Running 0 2m\nnginx-deployment-148880595-rwovn 1/1 Running 0 2m", "language": "text", "type": "code-example", "context": "Run a Stateless Application Using a Deployment | Delete the deployment by name:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7646"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-148880595-4zdqq 1/1 Running 0 25s\nnginx-deployment-148880595-6zgi1 1/1 Running 0 25s\nnginx-deployment-148880595-fxcez 1/1 Running 0 2m\nnginx-deployment-148880595-rwovn 1/1 Running 0 2m", "completion": "This text code Run a Stateless Application Using a Deployment | Delete the deployment by name:", "language": "text", "type": "code-example_explanation", "context": "Run a Stateless Application Using a Deployment | Delete the deployment by name:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7647"} {"prompt": "Complete this yaml 
code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7662"} {"prompt": "Explain this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7663"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application 
Using a Deployment | Create a Deployment based on the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7664"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Create a Deployment based on the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7665"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code Run a Stateless Application Using a Deployment | Display information about the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Stateless Application Using a Deployment | Display information about the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7666"} {"prompt": "Complete this yaml code:\n\napplication/deployment-update.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # Update the version of nginx from 1.14.2 to 1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": 
"deployment-definition", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7667"} {"prompt": "Explain this yaml code:\n\napplication/deployment-update.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # Update the version of nginx from 1.14.2 to 1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Apply the new YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7668"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # Update the version of nginx from 1.14.2 to 1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7669"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # 
Update the version of nginx from 1.14.2 to 1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Apply the new YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7670"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-update.yaml", "completion": "This bash code Run a Stateless Application Using a Deployment | Watch the deployment create pods with new names and delete the old pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Stateless Application Using a Deployment | Watch the deployment create pods with new names and delete the old pods:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7671"} {"prompt": "Complete this yaml code:\n\napplication/deployment-scale.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7672"} {"prompt": "Explain this yaml code:\n\napplication/deployment-scale.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 
2 to 4\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Apply the new YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7673"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_completion_7674"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 4 # Update the replicas from 2 to 4\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1\n ports:\n - containerPort: 80", "completion": "This yaml code Run a Stateless Application Using a Deployment | Apply the new YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Stateless Application Using a Deployment | Apply the new YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7675"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://k8s.io/examples/application/deployment-scale.yaml", "completion": "This bash code Run a Stateless Application Using a Deployment | Verify that the Deployment has four Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Stateless Application Using a Deployment | Verify that the Deployment has four Pods:", "source": "https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/", "id": "k8s_00285_code_explain_7676"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: headless-svc\nspec:\n clusterIP: None # clusterIP must be None to create a headless service\n selector:\n job-name: example-job # must match Job name\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-job\nspec:\n completions: 3\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n subdomain: headless-svc # has to match Service name\n restartPolicy: Never", "completion": " containers:\n - name: example-workload\n image: bash:latest\n command:\n - bash\n - -c\n - |\n for i in 0 1 2\n do\n gotStatus=\"-1\"\n wantStatus=\"0\" \n while [ $gotStatus -ne $wantStatus ]\n do \n ping -c 1 example-job-${i}.headless-svc > /dev/null 2>&1\n gotStatus=$? \n if [ $gotStatus -ne $wantStatus ]; then\n echo \"Failed to ping pod example-job-${i}.headless-svc, retrying in 1 second...\"\n sleep 1\n fi\n done \n echo \"Successfully pinged pod: example-job-${i}.headless-svc\"\n done", "language": "yaml", "type": "service-definition", "context": "Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. 
You should see output similar to the following:", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_completion_7677"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: headless-svc\nspec:\n clusterIP: None # clusterIP must be None to create a headless service\n selector:\n job-name: example-job # must match Job name\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-job\nspec:\n completions: 3\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n subdomain: headless-svc # has to match Service name\n restartPolicy: Never\n containers:\n - name: example-workload\n image: bash:latest\n command:\n - bash\n - -c\n - |\n for i in 0 1 2\n do\n gotStatus=\"-1\"\n wantStatus=\"0\" \n while [ $gotStatus -ne $wantStatus ]\n do \n ping -c 1 example-job-${i}.headless-svc > /dev/null 2>&1\n gotStatus=$? \n if [ $gotStatus -ne $wantStatus ]; then\n echo \"Failed to ping pod example-job-${i}.headless-svc, retrying in 1 second...\"\n sleep 1\n fi\n done \n echo \"Successfully pinged pod: example-job-${i}.headless-svc\"\n done", "completion": "This yaml code Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. You should see output similar to the following:", "language": "yaml", "type": "service-definition_explanation", "context": "Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. 
You should see output similar to the following:", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_explain_7678"} {"prompt": "Complete this text code:\n\nFailed to ping pod example-job-0.headless-svc, retrying in 1 second...\nSuccessfully pinged pod: example-job-0.headless-svc", "completion": "Successfully pinged pod: example-job-1.headless-svc\nSuccessfully pinged pod: example-job-2.headless-svc", "language": "text", "type": "code-example", "context": "Job with Pod-to-Pod Communication |", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_completion_7679"} {"prompt": "Explain this text code:\n\nFailed to ping pod example-job-0.headless-svc, retrying in 1 second...\nSuccessfully pinged pod: example-job-0.headless-svc\nSuccessfully pinged pod: example-job-1.headless-svc\nSuccessfully pinged pod: example-job-2.headless-svc", "completion": "This text code Job with Pod-to-Pod Communication |", "language": "text", "type": "code-example_explanation", "context": "Job with Pod-to-Pod Communication |", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_explain_7680"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: headless-svc\nspec:\n clusterIP: None # clusterIP must be None to create a headless service\n selector:\n job-name: example-job # must match Job name\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-job\nspec:\n completions: 3\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n subdomain: headless-svc # has to match Service name\n restartPolicy: Never", "completion": " containers:\n - name: example-workload\n image: bash:latest\n command:\n - bash\n - -c\n - |\n for i in 0 1 2\n do\n gotStatus=\"-1\"\n wantStatus=\"0\" \n while [ $gotStatus -ne $wantStatus ]\n do \n ping -c 1 example-job-${i}.headless-svc > /dev/null 2>&1\n 
gotStatus=$? \n if [ $gotStatus -ne $wantStatus ]; then\n echo \"Failed to ping pod example-job-${i}.headless-svc, retrying in 1 second...\"\n sleep 1\n fi\n done \n echo \"Successfully pinged pod: example-job-${i}.headless-svc\"\n done", "language": "yaml", "type": "service-definition", "context": "Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. You should see output similar to the following:", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_completion_7681"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: headless-svc\nspec:\n clusterIP: None # clusterIP must be None to create a headless service\n selector:\n job-name: example-job # must match Job name\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-job\nspec:\n completions: 3\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n subdomain: headless-svc # has to match Service name\n restartPolicy: Never\n containers:\n - name: example-workload\n image: bash:latest\n command:\n - bash\n - -c\n - |\n for i in 0 1 2\n do\n gotStatus=\"-1\"\n wantStatus=\"0\" \n while [ $gotStatus -ne $wantStatus ]\n do \n ping -c 1 example-job-${i}.headless-svc > /dev/null 2>&1\n gotStatus=$? \n if [ $gotStatus -ne $wantStatus ]; then\n echo \"Failed to ping pod example-job-${i}.headless-svc, retrying in 1 second...\"\n sleep 1\n fi\n done \n echo \"Successfully pinged pod: example-job-${i}.headless-svc\"\n done", "completion": "This yaml code Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. You should see output similar to the following:", "language": "yaml", "type": "service-definition_explanation", "context": "Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. 
You should see output similar to the following:", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_explain_7682"} {"prompt": "Complete this text code:\n\nFailed to ping pod example-job-0.headless-svc, retrying in 1 second...\nSuccessfully pinged pod: example-job-0.headless-svc", "completion": "Successfully pinged pod: example-job-1.headless-svc\nSuccessfully pinged pod: example-job-2.headless-svc", "language": "text", "type": "code-example", "context": "Job with Pod-to-Pod Communication |", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_completion_7683"} {"prompt": "Explain this text code:\n\nFailed to ping pod example-job-0.headless-svc, retrying in 1 second...\nSuccessfully pinged pod: example-job-0.headless-svc\nSuccessfully pinged pod: example-job-1.headless-svc\nSuccessfully pinged pod: example-job-2.headless-svc", "completion": "This text code Job with Pod-to-Pod Communication |", "language": "text", "type": "code-example_explanation", "context": "Job with Pod-to-Pod Communication |", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_explain_7684"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: headless-svc\nspec:\n clusterIP: None # clusterIP must be None to create a headless service\n selector:\n job-name: example-job # must match Job name\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-job\nspec:\n completions: 3\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n subdomain: headless-svc # has to match Service name\n restartPolicy: Never", "completion": " containers:\n - name: example-workload\n image: bash:latest\n command:\n - bash\n - -c\n - |\n for i in 0 1 2\n do\n gotStatus=\"-1\"\n wantStatus=\"0\" \n while [ $gotStatus -ne $wantStatus ]\n do \n ping -c 1 example-job-${i}.headless-svc > /dev/null 2>&1\n 
gotStatus=$? \n if [ $gotStatus -ne $wantStatus ]; then\n echo \"Failed to ping pod example-job-${i}.headless-svc, retrying in 1 second...\"\n sleep 1\n fi\n done \n echo \"Successfully pinged pod: example-job-${i}.headless-svc\"\n done", "language": "yaml", "type": "service-definition", "context": "Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. You should see output similar to the following:", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_completion_7685"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: headless-svc\nspec:\n clusterIP: None # clusterIP must be None to create a headless service\n selector:\n job-name: example-job # must match Job name\n---\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-job\nspec:\n completions: 3\n parallelism: 3\n completionMode: Indexed\n template:\n spec:\n subdomain: headless-svc # has to match Service name\n restartPolicy: Never\n containers:\n - name: example-workload\n image: bash:latest\n command:\n - bash\n - -c\n - |\n for i in 0 1 2\n do\n gotStatus=\"-1\"\n wantStatus=\"0\" \n while [ $gotStatus -ne $wantStatus ]\n do \n ping -c 1 example-job-${i}.headless-svc > /dev/null 2>&1\n gotStatus=$? \n if [ $gotStatus -ne $wantStatus ]; then\n echo \"Failed to ping pod example-job-${i}.headless-svc, retrying in 1 second...\"\n sleep 1\n fi\n done \n echo \"Successfully pinged pod: example-job-${i}.headless-svc\"\n done", "completion": "This yaml code Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. You should see output similar to the following:", "language": "yaml", "type": "service-definition_explanation", "context": "Job with Pod-to-Pod Communication | After applying the example above, reach each other over the network using: .. 
You should see output similar to the following:", "source": "https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/", "id": "k8s_00286_code_explain_7686"} {"prompt": "Explain this bash code:\n\nkubectl delete ([-f FILENAME] | [-k DIRECTORY] | TYPE [(NAME | -l label | --all)])", "completion": "This bash code kubectl delete | Delete all resources, in the namespace of the specified resource types.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl delete | Delete all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/", "id": "k8s_00287_code_explain_7687"} {"prompt": "Complete this text code:\n\n# Delete a pod using the type and name specified in pod.json\n kubectl delete -f ./pod.json\n \n # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml\n kubectl delete -k dir\n \n # Delete resources from all files that end with '.json'\n kubectl delete -f '*.json'\n \n # Delete a pod based on the type and name in the JSON passed into stdin\n cat pod.json | kubectl delete -f -\n \n # Delete pods and services with same names \"baz\" and \"foo\"\n kubectl delete pod,service baz foo", "completion": " \n # Delete pods and services with label name=myLabel\n kubectl delete pods,services -l name=myLabel\n \n # Delete a pod with minimal delay\n kubectl delete pod foo --now\n \n # Force delete a pod on a dead node\n kubectl delete pod foo --force\n \n # Delete all pods\n kubectl delete pods --all\n \n # Delete all pods only if the user confirms the deletion\n kubectl delete pods --all --interactive", "language": "text", "type": "code-example", "context": "kubectl delete | Delete all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/", "id": "k8s_00287_code_completion_7688"} {"prompt": "Explain this text code:\n\n# 
Delete a pod using the type and name specified in pod.json\n kubectl delete -f ./pod.json\n \n # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml\n kubectl delete -k dir\n \n # Delete resources from all files that end with '.json'\n kubectl delete -f '*.json'\n \n # Delete a pod based on the type and name in the JSON passed into stdin\n cat pod.json | kubectl delete -f -\n \n # Delete pods and services with same names \"baz\" and \"foo\"\n kubectl delete pod,service baz foo\n \n # Delete pods and services with label name=myLabel\n kubectl delete pods,services -l name=myLabel\n \n # Delete a pod with minimal delay\n kubectl delete pod foo --now\n \n # Force delete a pod on a dead node\n kubectl delete pod foo --force\n \n # Delete all pods\n kubectl delete pods --all\n \n # Delete all pods only if the user confirms the deletion\n kubectl delete pods --all --interactive", "completion": "This text code kubectl delete | Delete all resources, in the namespace of the specified resource types.", "language": "text", "type": "code-example_explanation", "context": "kubectl delete | Delete all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/", "id": "k8s_00287_code_explain_7689"} {"prompt": "Explain this bash code:\n\nkubectl delete ([-f FILENAME] | [-k DIRECTORY] | TYPE [(NAME | -l label | --all)])", "completion": "This bash code kubectl delete | Delete all resources, in the namespace of the specified resource types.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl delete | Delete all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/", "id": "k8s_00287_code_explain_7690"} {"prompt": "Complete this text code:\n\n# Delete a pod using the type and name specified in pod.json\n kubectl delete -f ./pod.json\n \n # 
Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml\n kubectl delete -k dir\n \n # Delete resources from all files that end with '.json'\n kubectl delete -f '*.json'\n \n # Delete a pod based on the type and name in the JSON passed into stdin\n cat pod.json | kubectl delete -f -\n \n # Delete pods and services with same names \"baz\" and \"foo\"\n kubectl delete pod,service baz foo", "completion": " \n # Delete pods and services with label name=myLabel\n kubectl delete pods,services -l name=myLabel\n \n # Delete a pod with minimal delay\n kubectl delete pod foo --now\n \n # Force delete a pod on a dead node\n kubectl delete pod foo --force\n \n # Delete all pods\n kubectl delete pods --all\n \n # Delete all pods only if the user confirms the deletion\n kubectl delete pods --all --interactive", "language": "text", "type": "code-example", "context": "kubectl delete | Delete all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/", "id": "k8s_00287_code_completion_7691"} {"prompt": "Explain this text code:\n\n# Delete a pod using the type and name specified in pod.json\n kubectl delete -f ./pod.json\n \n # Delete resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml\n kubectl delete -k dir\n \n # Delete resources from all files that end with '.json'\n kubectl delete -f '*.json'\n \n # Delete a pod based on the type and name in the JSON passed into stdin\n cat pod.json | kubectl delete -f -\n \n # Delete pods and services with same names \"baz\" and \"foo\"\n kubectl delete pod,service baz foo\n \n # Delete pods and services with label name=myLabel\n kubectl delete pods,services -l name=myLabel\n \n # Delete a pod with minimal delay\n kubectl delete pod foo --now\n \n # Force delete a pod on a dead node\n kubectl delete pod foo --force\n \n # Delete all pods\n kubectl delete pods --all\n \n # Delete all pods only if the user confirms the deletion\n kubectl delete pods --all --interactive", "completion": "This text code kubectl delete | Delete all resources, in the namespace of the specified resource types.", "language": "text", "type": "code-example_explanation", "context": "kubectl delete | Delete all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/", "id": "k8s_00287_code_explain_7692"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver", "completion": " name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; 
$$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='
<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='</body></html>
    ' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7693"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver\n name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) 
;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='Windows Container Web Server' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='
    IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7694"} {"prompt": "Explain this bash code:\n\nkubectl apply -f win-webserver.yaml\nkubectl get pods -o wide -w", "completion": "This bash code Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "language": "bash", "type": "kubectl-command_explanation", "context": "Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7695"} {"prompt": "Complete this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:", "completion": " - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. 
If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7696"} {"prompt": "Explain this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:\n - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "completion": "This text code Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "language": "text", "type": "code-example_explanation", "context": "Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7697"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:", "completion": " kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7698"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: 
RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:\n kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7699"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:", "completion": " cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "language": "yaml", "type": "deployment-definition", "context": "Guide for Running Windows Containers in Kubernetes |", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7700"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - 
name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:\n cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes |", "language": "yaml", "type": "deployment-definition_explanation", "context": "Guide for Running Windows Containers in Kubernetes |", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7701"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver", "completion": " name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { 
$$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='Windows Container Web Server' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='
    IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7702"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver\n name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) 
;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='Windows Container Web Server' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='
    IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7703"} {"prompt": "Explain this bash code:\n\nkubectl apply -f win-webserver.yaml\nkubectl get pods -o wide -w", "completion": "This bash code Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "language": "bash", "type": "kubectl-command_explanation", "context": "Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7704"} {"prompt": "Complete this yaml code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:", "completion": " - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. 
If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7705"} {"prompt": "Explain this yaml code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:\n - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7706"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:", "completion": " kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7707"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: 
RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:\n kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7708"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:", "completion": " cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "language": "yaml", "type": "deployment-definition", "context": "Guide for Running Windows Containers in Kubernetes |", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7709"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - 
name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:\n cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes |", "language": "yaml", "type": "deployment-definition_explanation", "context": "Guide for Running Windows Containers in Kubernetes |", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7710"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver", "completion": " name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { 
$$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='Windows Container Web Server' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='
    IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7711"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver\n name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) 
;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='Windows Container Web Server' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='
    IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7712"} {"prompt": "Explain this bash code:\n\nkubectl apply -f win-webserver.yaml\nkubectl get pods -o wide -w", "completion": "This bash code Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "language": "bash", "type": "kubectl-command_explanation", "context": "Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7713"} {"prompt": "Complete this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:", "completion": " - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. 
If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7714"} {"prompt": "Explain this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:\n - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "completion": "This text code Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "language": "text", "type": "code-example_explanation", "context": "Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7715"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:", "completion": " kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7716"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: 
RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:\n kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7717"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:", "completion": " cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "language": "yaml", "type": "deployment-definition", "context": "Guide for Running Windows Containers in Kubernetes |", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_completion_7718"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - 
name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:\n cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "completion": "This yaml code Guide for Running Windows Containers in Kubernetes |", "language": "yaml", "type": "deployment-definition_explanation", "context": "Guide for Running Windows Containers in Kubernetes |", "source": "https://kubernetes.io/docs/concepts/windows/user-guide/", "id": "k8s_00288_code_explain_7719"} {"prompt": "Explain this json code:\n\n{cpu: 100m, memory: 100Mi, ephemeral-storage: 1Gi, pid=1000}", "completion": "This json code Reserve Compute Resources for System Daemons | kubeReserved is meant to capture resource reservation for kubernetes system daemons like the kubelet, container runtime, etc. It is not meant to reserve resources for system daemons that are run as po", "language": "json", "type": "api-response_explanation", "context": "Reserve Compute Resources for System Daemons | kubeReserved is meant to capture resource reservation for kubernetes system daemons like the kubelet, container runtime, etc. It is not meant to reserve resources for system daemons that are run as po", "source": "https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/", "id": "k8s_00289_code_explain_7720"} {"prompt": "Explain this json code:\n\n{cpu: 100m, memory: 100Mi, ephemeral-storage: 1Gi, pid=1000}", "completion": "This json code Reserve Compute Resources for System Daemons | systemReserved is meant to capture resource reservation for OS system daemons like sshd, udev, etc. 
systemReserved should reserve memory for the kernel too since kernel memory is not accounted to pods", "language": "json", "type": "api-response_explanation", "context": "Reserve Compute Resources for System Daemons | systemReserved is meant to capture resource reservation for OS system daemons like sshd, udev, etc. systemReserved should reserve memory for the kernel too since kernel memory is not accounted to pods", "source": "https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/", "id": "k8s_00289_code_explain_7721"} {"prompt": "Explain this text code:\n\nevictionHard: {memory.available: \"100Mi\", nodefs.available: \"10%\", nodefs.inodesFree: \"5%\", imagefs.available: \"15%\"}", "completion": "This text code Reserve Compute Resources for System Daemons | Memory pressure at the node level leads to System OOMs which affects the entire node and all pods running on it. Nodes can go offline temporarily until memory has been reclaimed. To avoid (or reduce t", "language": "text", "type": "code-example_explanation", "context": "Reserve Compute Resources for System Daemons | Memory pressure at the node level leads to System OOMs which affects the entire node and all pods running on it. Nodes can go offline temporarily until memory has been reclaimed. To avoid (or reduce t", "source": "https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/", "id": "k8s_00289_code_explain_7722"} {"prompt": "Explain this json code:\n\n{memory.available: \"<500Mi\", nodefs.available: \"<10%\"}", "completion": "This json code Reserve Compute Resources for System Daemons | Under this scenario, 'Allocatable' will be 14.5 CPUs, 28.5Gi of memory and 88Gi of local storage. 
Scheduler ensures that the total memory requests across all pods on this node does not exceed 28.5Gi a", "language": "json", "type": "api-response_explanation", "context": "Reserve Compute Resources for System Daemons | Under this scenario, 'Allocatable' will be 14.5 CPUs, 28.5Gi of memory and 88Gi of local storage. Scheduler ensures that the total memory requests across all pods on this node does not exceed 28.5Gi a", "source": "https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/", "id": "k8s_00289_code_explain_7723"} {"prompt": "Complete this text code:\n\npreflight Run join pre-flight checks\ncontrol-plane-prepare Prepare the machine for serving a control plane\n /download-certs Download certificates shared among control-plane nodes from the kubeadm-certs Secret\n /certs Generate the certificates for the new control plane components\n /kubeconfig Generate the kubeconfig for the new control plane components", "completion": " /control-plane Generate the manifests for the new control plane components\nkubelet-start Write kubelet settings, certificates and (re)start the kubelet\ncontrol-plane-join Join a machine as a control plane instance\n /etcd Add a new local etcd member\n /mark-control-plane Mark a node as a control-plane\nwait-control-plane Wait for the control plane to start", "language": "text", "type": "code-example", "context": "kubeadm join | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_completion_7724"} {"prompt": "Explain this text code:\n\npreflight Run join pre-flight checks\ncontrol-plane-prepare Prepare the machine for serving a control plane\n /download-certs Download certificates shared among control-plane nodes from the kubeadm-certs Secret\n /certs Generate the certificates for the new control plane components\n /kubeconfig Generate the kubeconfig for the new control plane components\n /control-plane Generate the manifests for the new control plane components\nkubelet-start Write kubelet settings, certificates and (re)start the kubelet\ncontrol-plane-join Join a machine as a control plane instance\n /etcd Add a new local etcd member\n /mark-control-plane Mark a node as a control-plane\nwait-control-plane Wait for the control plane to start", "completion": "This text code kubeadm join | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7725"} {"prompt": "Explain this text code:\n\nsudo kubeadm join --skip-phases=preflight --config=config.yaml", "completion": "This text code kubeadm join | Alternatively, you can use the skipPhases field in JoinConfiguration.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | Alternatively, you can use the skipPhases field in JoinConfiguration.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7726"} {"prompt": "Explain this text code:\n\nopenssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This text code kubeadm join | Example kubeadm join commands:", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | Example kubeadm join commands:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7727"} {"prompt": "Explain this shell code:\n\nkubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443", "completion": "This shell code kubeadm join | For control-plane nodes:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | For control-plane nodes:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7728"} {"prompt": "Explain this shell code:\n\nkubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443", "completion": "This shell code kubeadm join | You can also call join for a control-plane node with --certificate-key to copy certificates to this node, if the kubeadm init command
was called with --upload-certs.", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | You can also call join for a control-plane node with --certificate-key to copy certificates to this node, if the kubeadm init command was called with --upload-certs.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7729"} {"prompt": "Explain this shell code:\n\nkubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443", "completion": "This shell code kubeadm join | Advantages:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | Advantages:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7730"} {"prompt": "Explain this bash code:\n\nkubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap", "completion": "This bash code kubeadm join | After that, kubeadm join will block until the admin has manually approved the CSR in flight:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubeadm join | After that, kubeadm join will block until the admin has manually approved the CSR in flight:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7731"} {"prompt": "Explain this text code:\n\nNAME AGE REQUESTOR CONDITION\nnode-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 18s system:bootstrap:878f07 Pending", "completion": "This text code kubeadm join | kubectl certificate approve allows the admin to approve CSR.This action tells a certificate signing controller to issue a
certificate to the requestor with the attributes requested in the CSR.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7732"} {"prompt": "Explain this bash code:\n\nkubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ", "completion": "This bash code kubeadm join | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubeadm join | The output is similar to this:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7733"} {"prompt": "Explain this text code:\n\ncertificatesigningrequest \"node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ\" approved", "completion": "This text code kubeadm join | This would change the CSR resource to Active state.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | This would change the CSR resource to Active state.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7734"} {"prompt": "Explain this text code:\n\nNAME AGE REQUESTOR CONDITION\nnode-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 1m system:bootstrap:878f07 Approved,Issued", "completion": "This text code kubeadm join | This forces the workflow that kubeadm join will only succeed if kubectl certificate approve has been run.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | This forces the workflow that kubeadm join will only succeed if kubectl certificate approve has been run.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7735"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml", "completion": "This bash code kubeadm join | The output is similar to this:", "language": "bash", 
"type": "kubectl-command_explanation", "context": "kubeadm join | The output is similar to this:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7736"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority-data: ", "completion": " server: https://:\n name: \"\"\ncontexts: []\ncurrent-context: \"\"\npreferences: {}\nusers: []", "language": "yaml", "type": "kubernetes-manifest", "context": "kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_completion_7737"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority-data: \n server: https://:\n name: \"\"\ncontexts: []\ncurrent-context: \"\"\npreferences: {}\nusers: []", "completion": "This yaml code kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7738"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo", "completion": "This bash code kubeadm join | These commands should be run after kubeadm init but before kubeadm join.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubeadm join | These commands should be run after kubeadm init but before kubeadm join.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7739"} {"prompt": "Complete this text code:\n\npreflight Run join pre-flight 
checks\ncontrol-plane-prepare Prepare the machine for serving a control plane\n /download-certs Download certificates shared among control-plane nodes from the kubeadm-certs Secret\n /certs Generate the certificates for the new control plane components\n /kubeconfig Generate the kubeconfig for the new control plane components", "completion": " /control-plane Generate the manifests for the new control plane components\nkubelet-start Write kubelet settings, certificates and (re)start the kubelet\ncontrol-plane-join Join a machine as a control plane instance\n /etcd Add a new local etcd member\n /mark-control-plane Mark a node as a control-plane\nwait-control-plane Wait for the control plane to start", "language": "text", "type": "code-example", "context": "kubeadm join | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_completion_7740"} {"prompt": "Explain this text code:\n\npreflight Run join pre-flight checks\ncontrol-plane-prepare Prepare the machine for serving a control plane\n /download-certs Download certificates shared among control-plane nodes from the kubeadm-certs Secret\n /certs Generate the certificates for the new control plane components\n /kubeconfig Generate the kubeconfig for the new control plane components\n /control-plane Generate the manifests for the new control plane components\nkubelet-start Write kubelet settings, certificates and (re)start the kubelet\ncontrol-plane-join Join a machine as a control plane instance\n /etcd Add a new local etcd member\n /mark-control-plane Mark a node as a control-plane\nwait-control-plane Wait for the control plane to start", "completion": "This text code kubeadm join | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening 
on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7741"} {"prompt": "Explain this shell code:\n\nsudo kubeadm join --skip-phases=preflight --config=config.yaml", "completion": "This shell code kubeadm join | Alternatively, you can use the skipPhases field in JoinConfiguration.", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | Alternatively, you can use the skipPhases field in JoinConfiguration.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7742"} {"prompt": "Explain this shell code:\n\nopenssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This shell code kubeadm join | Example kubeadm join commands:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | Example kubeadm join commands:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7743"} {"prompt": "Explain this shell code:\n\nkubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443", "completion": "This shell code kubeadm join | For control-plane nodes:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | For control-plane nodes:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7744"} {"prompt": "Explain this shell code:\n\nkubeadm join --discovery-token 
abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443", "completion": "This shell code kubeadm join | You can also call join for a control-plane node with --certificate-key to copy certificates to this node, if the kubeadm init command was called with --upload-certs.", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | You can also call join for a control-plane node with --certificate-key to copy certificates to this node, if the kubeadm init command was called with --upload-certs.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7745"} {"prompt": "Explain this shell code:\n\nkubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443", "completion": "This shell code kubeadm join | Advantages:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | Advantages:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7746"} {"prompt": "Explain this text code:\n\nkubeadm join --discovery-file https://url/file.conf", "completion": "This text code kubeadm join | Advantages:", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | Advantages:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7747"} {"prompt": "Explain this text code:\n\nkubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf", "completion": "This text code kubeadm join | If your cluster does not have the ca.key file, you must sign the embedded certificates in the kubelet.conf externally. 
For additional information, see PKI certificates and requirements and Certificate", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | If your cluster does not have the ca.key file, you must sign the embedded certificates in the kubelet.conf externally. For additional information, see PKI certificates and requirements and Certificate", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7748"} {"prompt": "Explain this text code:\n\n--ignore-preflight-errors=FileAvailable--etc-kubernetes-kubelet.conf", "completion": "This text code kubeadm join | The defaults for kubeadm may not work for everyone. This section documents how to tighten up a kubeadm installation at the cost of some usability.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | The defaults for kubeadm may not work for everyone. This section documents how to tighten up a kubeadm installation at the cost of some usability.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7749"} {"prompt": "Explain this shell code:\n\nkubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap", "completion": "This shell code kubeadm join | After that, kubeadm join will block until the admin has manually approved the CSR in flight:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | After that, kubeadm join will block until the admin has manually approved the CSR in flight:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7750"} {"prompt": "Explain this text code:\n\nNAME AGE REQUESTOR CONDITION\nnode-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 18s system:bootstrap:878f07 Pending", "completion": "This text code kubeadm join | kubectl certificate approve allows the admin to approve CSR.This action tells a certificate 
signing controller to issue a certificate to the requestor with the attributes requested in the CSR.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | kubectl certificate approve allows the admin to approve CSR.This action tells a certificate signing controller to issue a certificate to the requestor with the attributes requested in the CSR.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7751"} {"prompt": "Explain this shell code:\n\nkubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ", "completion": "This shell code kubeadm join | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | The output is similar to this:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7752"} {"prompt": "Explain this text code:\n\ncertificatesigningrequest \"node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ\" approved", "completion": "This text code kubeadm join | This would change the CSR resource to Active state.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | This would change the CSR resource to Active state.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7753"} {"prompt": "Explain this text code:\n\nNAME AGE REQUESTOR CONDITION\nnode-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 1m system:bootstrap:878f07 Approved,Issued", "completion": "This text code kubeadm join | This forces the workflow that kubeadm join will only succeed if kubectl certificate approve has been run.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | This forces the workflow that kubeadm join will only succeed if kubectl certificate approve has been run.", "source": 
"https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7754"} {"prompt": "Explain this shell code:\n\nkubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml", "completion": "This shell code kubeadm join | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "kubeadm join | The output is similar to this:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7755"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority-data: ", "completion": " server: https://:\n name: \"\"\ncontexts: []\ncurrent-context: \"\"\npreferences: {}\nusers: []", "language": "yaml", "type": "kubernetes-manifest", "context": "kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_completion_7756"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority-data: \n server: https://:\n name: \"\"\ncontexts: []\ncurrent-context: \"\"\npreferences: {}\nusers: []", "completion": "This yaml code kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7757"} {"prompt": "Explain this shell code:\n\nkubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo", "completion": "This shell code kubeadm join | These commands should be run after kubeadm init but before kubeadm join.", "language": "shell", 
"type": "code-example_explanation", "context": "kubeadm join | These commands should be run after kubeadm init but before kubeadm join.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7758"} {"prompt": "Explain this text code:\n\nsudo kubeadm join --skip-phases=preflight --config=config.yaml", "completion": "This text code kubeadm join | Alternatively, you can use the skipPhases field in JoinConfiguration.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | Alternatively, you can use the skipPhases field in JoinConfiguration.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7759"} {"prompt": "Explain this text code:\n\nopenssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This text code kubeadm join | Example kubeadm join commands:", "language": "text", "type": "code-example_explanation", "context": "kubeadm join | Example kubeadm join commands:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7760"} {"prompt": "Explain this python code:\n\nkubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443", "completion": "This python code kubeadm join | For control-plane nodes:", "language": "python", "type": "code-example_explanation", "context": "kubeadm join | For control-plane nodes:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7761"} {"prompt": "Explain this python code:\n\nkubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443", "completion": "This python code kubeadm join | You can also call join for a control-plane node with 
--certificate-key to copy certificates to this node, if the kubeadm init command was called with --upload-certs.", "language": "python", "type": "code-example_explanation", "context": "kubeadm join | You can also call join for a control-plane node with --certificate-key to copy certificates to this node, if the kubeadm init command was called with --upload-certs.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7762"} {"prompt": "Explain this python code:\n\nkubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443", "completion": "This python code kubeadm join | Advantages:", "language": "python", "type": "code-example_explanation", "context": "kubeadm join | Advantages:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7763"} {"prompt": "Explain this bash code:\n\nkubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap", "completion": "This bash code kubeadm join | After that, kubeadm join will block until the admin has manually approved the CSR in flight:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubeadm join | After that, kubeadm join will block until the admin has manually approved the CSR in flight:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7764"} {"prompt": "Explain this bash code:\n\nkubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ", "completion": "This bash code kubeadm join | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubeadm join | The output is similar to this:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7765"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-public get cm cluster-info -o 
jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml", "completion": "This bash code kubeadm join | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "kubeadm join | The output is similar to this:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7766"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority-data: ", "completion": " server: https://:\n name: \"\"\ncontexts: []\ncurrent-context: \"\"\npreferences: {}\nusers: []", "language": "yaml", "type": "kubernetes-manifest", "context": "kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_completion_7767"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n certificate-authority-data: \n server: https://:\n name: \"\"\ncontexts: []\ncurrent-context: \"\"\npreferences: {}\nusers: []", "completion": "This yaml code kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm join | Use the cluster-info.yaml file as an argument to kubeadm join --discovery-file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7768"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo", "completion": "This bash code kubeadm join | These commands should be run after kubeadm init but before kubeadm join.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubeadm join | These commands should be run after kubeadm init but before kubeadm join.", "source": 
"https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/", "id": "k8s_00290_code_explain_7769"} {"prompt": "Explain this text code:\n\ngit clone https://github.com//website\ncd website", "completion": "This text code Localizing Kubernetes documentation | The website content directory includes subdirectories for each language. The localization you want to help out with is inside content/.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | The website content directory includes subdirectories for each language. The localization you want to help out with is inside content/.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7770"} {"prompt": "Complete this json code:\n\n[languages.de]\ntitle = \"Kubernetes\"\nlanguageName = \"Deutsch (German)\"\nweight = 5\ncontentDir = \"content/de\"\nlanguagedirection = \"ltr\"", "completion": "\n[languages.de.params]\ntime_format_blog = \"02.01.2006\"\nlanguage_alternatives = [\"en\"]\ndescription = \"Produktionsreife Container-Orchestrierung\"\nlanguageNameLatinScript = \"Deutsch\"", "language": "json", "type": "api-response", "context": "Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. 
For example, languageName = \"", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7771"} {"prompt": "Explain this json code:\n\n[languages.de]\ntitle = \"Kubernetes\"\nlanguageName = \"Deutsch (German)\"\nweight = 5\ncontentDir = \"content/de\"\nlanguagedirection = \"ltr\"\n\n[languages.de.params]\ntime_format_blog = \"02.01.2006\"\nlanguage_alternatives = [\"en\"]\ndescription = \"Produktionsreife Container-Orchestrierung\"\nlanguageNameLatinScript = \"Deutsch\"", "completion": "This json code Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. For example, languageName = \"", "language": "json", "type": "api-response_explanation", "context": "Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. 
For example, languageName = \"", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7772"} {"prompt": "Complete this text code:\n\n# See the OWNERS docs at https://go.k8s.io/owners\n\n# This is the localization project for Spanish.\n# Teams and members are visible at https://github.com/orgs/kubernetes/teams.\n\nreviewers:\n- sig-docs-es-reviews", "completion": "\napprovers:\n- sig-docs-es-owners\n\nlabels:\n- area/localization\n- language/es", "language": "text", "type": "code-example", "context": "Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7773"} {"prompt": "Explain this text code:\n\n# See the OWNERS docs at https://go.k8s.io/owners\n\n# This is the localization project for Spanish.\n# Teams and members are visible at https://github.com/orgs/kubernetes/teams.\n\nreviewers:\n- sig-docs-es-reviews\n\napprovers:\n- sig-docs-es-owners\n\nlabels:\n- area/localization\n- language/es", "completion": "This text code Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7774"} {"prompt": "Complete this text code:\n\n--- a/OWNERS_ALIASES\n+++ b/OWNERS_ALIASES\n@@ -48,6 +48,14 @@ aliases:\n - stewart-yu\n - xiangpengzhao\n 
- zhangxiaoyu-zidif\n+ sig-docs-es-owners: # Admins for Spanish content\n+ - alexbrand", "completion": "+ - raelga\n+ sig-docs-es-reviews: # PR reviews for Spanish content\n+ - alexbrand\n+ - electrocucaracha\n+ - glo-pena\n+ - raelga\n sig-docs-fr-owners: # Admins for French content\n - perriea\n - remyleone", "language": "text", "type": "code-example", "context": "Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. The PR must include all the minimum required content before it can be approved.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7775"} {"prompt": "Explain this text code:\n\n--- a/OWNERS_ALIASES\n+++ b/OWNERS_ALIASES\n@@ -48,6 +48,14 @@ aliases:\n - stewart-yu\n - xiangpengzhao\n - zhangxiaoyu-zidif\n+ sig-docs-es-owners: # Admins for Spanish content\n+ - alexbrand\n+ - raelga\n+ sig-docs-es-reviews: # PR reviews for Spanish content\n+ - alexbrand\n+ - electrocucaracha\n+ - glo-pena\n+ - raelga\n sig-docs-fr-owners: # Admins for French content\n - perriea\n - remyleone", "completion": "This text code Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. The PR must include all the minimum required content before it can be approved.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. 
The PR must include all the minimum required content before it can be approved.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7776"} {"prompt": "Explain this text code:\n\nmkdir -p content/de/docs/tutorials\ncp -ra content/en/docs/tutorials/kubernetes-basics/ content/de/docs/tutorials/", "completion": "This text code Localizing Kubernetes documentation | Translation tools can speed up the translation process. For example, some editors offer plugins to quickly translate text.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Translation tools can speed up the translation process. For example, some editors offer plugins to quickly translate text.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7777"} {"prompt": "Explain this text code:\n\nmkdir -p i18n/de\ncp i18n/en/en.toml i18n/de/de.toml", "completion": "This text code Localizing Kubernetes documentation | Revise the comments at the top of the file to suit your localization, then translate the value of each string. For example, this is the German-language placeholder text for the search form:", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Revise the comments at the top of the file to suit your localization, then translate the value of each string. For example, this is the German-language placeholder text for the search form:", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7778"} {"prompt": "Explain this shell code:\n\ngit clone https://github.com//website\ncd website", "completion": "This shell code Localizing Kubernetes documentation | The website content directory includes subdirectories for each language. 
The localization you want to help out with is inside content/.", "language": "shell", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | The website content directory includes subdirectories for each language. The localization you want to help out with is inside content/.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7779"} {"prompt": "Complete this toml code:\n\n[languages.de]\ntitle = \"Kubernetes\"\nlanguageName = \"Deutsch (German)\"\nweight = 5\ncontentDir = \"content/de\"\nlanguagedirection = \"ltr\"", "completion": "\n[languages.de.params]\ntime_format_blog = \"02.01.2006\"\nlanguage_alternatives = [\"en\"]\ndescription = \"Produktionsreife Container-Orchestrierung\"\nlanguageNameLatinScript = \"Deutsch\"", "language": "toml", "type": "code-example", "context": "Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. For example, languageName = \"", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7780"} {"prompt": "Explain this toml code:\n\n[languages.de]\ntitle = \"Kubernetes\"\nlanguageName = \"Deutsch (German)\"\nweight = 5\ncontentDir = \"content/de\"\nlanguagedirection = \"ltr\"\n\n[languages.de.params]\ntime_format_blog = \"02.01.2006\"\nlanguage_alternatives = [\"en\"]\ndescription = \"Produktionsreife Container-Orchestrierung\"\nlanguageNameLatinScript = \"Deutsch\"", "completion": "This toml code Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. 
For example, languageName = \"", "language": "toml", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. For example, languageName = \"", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7781"} {"prompt": "Complete this yaml code:\n\n# See the OWNERS docs at https://go.k8s.io/owners\n\n# This is the localization project for Spanish.\n# Teams and members are visible at https://github.com/orgs/kubernetes/teams.\n\nreviewers:\n- sig-docs-es-reviews", "completion": "\napprovers:\n- sig-docs-es-owners\n\nlabels:\n- area/localization\n- language/es", "language": "yaml", "type": "kubernetes-manifest", "context": "Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7782"} {"prompt": "Explain this yaml code:\n\n# See the OWNERS docs at https://go.k8s.io/owners\n\n# This is the localization project for Spanish.\n# Teams and members are visible at https://github.com/orgs/kubernetes/teams.\n\nreviewers:\n- sig-docs-es-reviews\n\napprovers:\n- sig-docs-es-owners\n\nlabels:\n- area/localization\n- language/es", "completion": "This yaml code Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new 
Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7783"} {"prompt": "Complete this diff code:\n\n--- a/OWNERS_ALIASES\n+++ b/OWNERS_ALIASES\n@@ -48,6 +48,14 @@ aliases:\n - stewart-yu\n - xiangpengzhao\n - zhangxiaoyu-zidif\n+ sig-docs-es-owners: # Admins for Spanish content\n+ - alexbrand", "completion": "+ - raelga\n+ sig-docs-es-reviews: # PR reviews for Spanish content\n+ - alexbrand\n+ - electrocucaracha\n+ - glo-pena\n+ - raelga\n sig-docs-fr-owners: # Admins for French content\n - perriea\n - remyleone", "language": "diff", "type": "code-example", "context": "Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. The PR must include all the minimum required content before it can be approved.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7784"} {"prompt": "Explain this diff code:\n\n--- a/OWNERS_ALIASES\n+++ b/OWNERS_ALIASES\n@@ -48,6 +48,14 @@ aliases:\n - stewart-yu\n - xiangpengzhao\n - zhangxiaoyu-zidif\n+ sig-docs-es-owners: # Admins for Spanish content\n+ - alexbrand\n+ - raelga\n+ sig-docs-es-reviews: # PR reviews for Spanish content\n+ - alexbrand\n+ - electrocucaracha\n+ - glo-pena\n+ - raelga\n sig-docs-fr-owners: # Admins for French content\n - perriea\n - remyleone", "completion": "This diff code Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. The PR must include all the minimum required content before it can be approved.", "language": "diff", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. 
The PR must include all the minimum required content before it can be approved.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7785"} {"prompt": "Explain this shell code:\n\nmkdir -p content/de/docs/tutorials\ncp -ra content/en/docs/tutorials/kubernetes-basics/ content/de/docs/tutorials/", "completion": "This shell code Localizing Kubernetes documentation | Translation tools can speed up the translation process. For example, some editors offer plugins to quickly translate text.", "language": "shell", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Translation tools can speed up the translation process. For example, some editors offer plugins to quickly translate text.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7786"} {"prompt": "Explain this bash code:\n\nmkdir -p i18n/de\ncp i18n/en/en.toml i18n/de/de.toml", "completion": "This bash code Localizing Kubernetes documentation | Revise the comments at the top of the file to suit your localization, then translate the value of each string. For example, this is the German-language placeholder text for the search form:", "language": "bash", "type": "shell-command_explanation", "context": "Localizing Kubernetes documentation | Revise the comments at the top of the file to suit your localization, then translate the value of each string. 
For example, this is the German-language placeholder text for the search form:", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7787"} {"prompt": "Explain this text code:\n\ndev--.", "completion": "This text code Localizing Kubernetes documentation | For example, an approver on a German localization team opens the localization branch dev-1.12-de.1 directly against the kubernetes/website repository, based on the source branch for Kubernetes v1.12.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | For example, an approver on a German localization team opens the localization branch dev-1.12-de.1 directly against the kubernetes/website repository, based on the source branch for Kubernetes v1.12.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7788"} {"prompt": "Explain this text code:\n\ngit clone https://github.com//website\ncd website", "completion": "This text code Localizing Kubernetes documentation | The website content directory includes subdirectories for each language. The localization you want to help out with is inside content/.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | The website content directory includes subdirectories for each language. 
The localization you want to help out with is inside content/.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7789"} {"prompt": "Complete this toml code:\n\n[languages.de]\ntitle = \"Kubernetes\"\nlanguageName = \"Deutsch (German)\"\nweight = 5\ncontentDir = \"content/de\"\nlanguagedirection = \"ltr\"", "completion": "\n[languages.de.params]\ntime_format_blog = \"02.01.2006\"\nlanguage_alternatives = [\"en\"]\ndescription = \"Produktionsreife Container-Orchestrierung\"\nlanguageNameLatinScript = \"Deutsch\"", "language": "toml", "type": "code-example", "context": "Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. For example, languageName = \"", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7790"} {"prompt": "Explain this toml code:\n\n[languages.de]\ntitle = \"Kubernetes\"\nlanguageName = \"Deutsch (German)\"\nweight = 5\ncontentDir = \"content/de\"\nlanguagedirection = \"ltr\"\n\n[languages.de.params]\ntime_format_blog = \"02.01.2006\"\nlanguage_alternatives = [\"en\"]\ndescription = \"Produktionsreife Container-Orchestrierung\"\nlanguageNameLatinScript = \"Deutsch\"", "completion": "This toml code Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. 
For example, languageName = \"", "language": "toml", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | The language selection bar lists the value for languageName. Assign \"language name in native script and language (English language name in Latin script)\" to languageName. For example, languageName = \"", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7791"} {"prompt": "Complete this text code:\n\n# See the OWNERS docs at https://go.k8s.io/owners\n\n# This is the localization project for Spanish.\n# Teams and members are visible at https://github.com/orgs/kubernetes/teams.\n\nreviewers:\n- sig-docs-es-reviews", "completion": "\napprovers:\n- sig-docs-es-owners\n\nlabels:\n- area/localization\n- language/es", "language": "text", "type": "code-example", "context": "Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7792"} {"prompt": "Explain this text code:\n\n# See the OWNERS docs at https://go.k8s.io/owners\n\n# This is the localization project for Spanish.\n# Teams and members are visible at https://github.com/orgs/kubernetes/teams.\n\nreviewers:\n- sig-docs-es-reviews\n\napprovers:\n- sig-docs-es-owners\n\nlabels:\n- area/localization\n- language/es", "completion": "This text code Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | After adding the language-specific OWNERS file, update the root OWNERS_ALIASES file with the new Kubernetes teams for the localization, sig-docs-**-owners and sig-docs-**-reviews.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7793"} {"prompt": "Complete this text code:\n\n--- a/OWNERS_ALIASES\n+++ b/OWNERS_ALIASES\n@@ -48,6 +48,14 @@ aliases:\n - stewart-yu\n - xiangpengzhao\n 
- zhangxiaoyu-zidif\n+ sig-docs-es-owners: # Admins for Spanish content\n+ - alexbrand", "completion": "+ - raelga\n+ sig-docs-es-reviews: # PR reviews for Spanish content\n+ - alexbrand\n+ - electrocucaracha\n+ - glo-pena\n+ - raelga\n sig-docs-fr-owners: # Admins for French content\n - perriea\n - remyleone", "language": "text", "type": "code-example", "context": "Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. The PR must include all the minimum required content before it can be approved.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_completion_7794"} {"prompt": "Explain this text code:\n\n--- a/OWNERS_ALIASES\n+++ b/OWNERS_ALIASES\n@@ -48,6 +48,14 @@ aliases:\n - stewart-yu\n - xiangpengzhao\n - zhangxiaoyu-zidif\n+ sig-docs-es-owners: # Admins for Spanish content\n+ - alexbrand\n+ - raelga\n+ sig-docs-es-reviews: # PR reviews for Spanish content\n+ - alexbrand\n+ - electrocucaracha\n+ - glo-pena\n+ - raelga\n sig-docs-fr-owners: # Admins for French content\n - perriea\n - remyleone", "completion": "This text code Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. The PR must include all the minimum required content before it can be approved.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Next, open a pull request (PR) to add a localization to the kubernetes/website repository. 
The PR must include all the minimum required content before it can be approved.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7795"} {"prompt": "Explain this text code:\n\nmkdir -p content/de/docs/tutorials\ncp -ra content/en/docs/tutorials/kubernetes-basics/ content/de/docs/tutorials/", "completion": "This text code Localizing Kubernetes documentation | Translation tools can speed up the translation process. For example, some editors offer plugins to quickly translate text.", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Translation tools can speed up the translation process. For example, some editors offer plugins to quickly translate text.", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7796"} {"prompt": "Explain this text code:\n\nmkdir -p i18n/de\ncp i18n/en/en.toml i18n/de/de.toml", "completion": "This text code Localizing Kubernetes documentation | Revise the comments at the top of the file to suit your localization, then translate the value of each string. For example, this is the German-language placeholder text for the search form:", "language": "text", "type": "code-example_explanation", "context": "Localizing Kubernetes documentation | Revise the comments at the top of the file to suit your localization, then translate the value of each string. 
For example, this is the German-language placeholder text for the search form:", "source": "https://kubernetes.io/docs/contribute/localization/", "id": "k8s_00292_code_explain_7797"} {"prompt": "Complete this json code:\n\n{\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Access Clusters Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7798"} {"prompt": "Explain this json code:\n\n{\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Access Clusters Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "language": "json", "type": "api-response_explanation", "context": "Access Clusters Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7799"} {"prompt": "Complete this shell code:\n\n# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:\nkubectl config view -o jsonpath='{\"Cluster name\\tServer\\n\"}{range .clusters[*]}{.name}{\"\\t\"}{.cluster.server}{\"\\n\"}{end}'\n\n# Select name of cluster you want to interact with from above output:\nexport CLUSTER_NAME=\"some_server_name\"\n\n# Point to the API server referring the cluster name\nAPISERVER=$(kubectl config view -o 
jsonpath=\"{.clusters[?(@.name==\\\"$CLUSTER_NAME\\\")].cluster.server}\")\n\n# Create a secret to hold a token for the default service account\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n name: default-token\n annotations:\n kubernetes.io/service-account.name: default\ntype: kubernetes.io/service-account-token\nEOF", "completion": "\n\n# Wait for the token controller to populate the secret with a token:\nwhile ! kubectl describe secret default-token | grep -E '^token' >/dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone\n\n# Get the token value\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\n# Explore the API with TOKEN\ncurl -X GET $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "shell", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7800"} {"prompt": "Explain this shell code:\n\n# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:\nkubectl config view -o jsonpath='{\"Cluster name\\tServer\\n\"}{range .clusters[*]}{.name}{\"\\t\"}{.cluster.server}{\"\\n\"}{end}'\n\n# Select name of cluster you want to interact with from above output:\nexport CLUSTER_NAME=\"some_server_name\"\n\n# Point to the API server referring the cluster name\nAPISERVER=$(kubectl config view -o jsonpath=\"{.clusters[?(@.name==\\\"$CLUSTER_NAME\\\")].cluster.server}\")\n\n# Create a secret to hold a token for the default service account\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n name: default-token\n annotations:\n kubernetes.io/service-account.name: default\ntype: kubernetes.io/service-account-token\nEOF\n\n# Wait for the token controller to populate the secret with a token:\nwhile ! kubectl describe secret default-token | grep -E '^token' >/dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone\n\n# Get the token value\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\n# Explore the API with TOKEN\ncurl -X GET $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This shell code Access Clusters Using the Kubernetes API | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": 
"k8s_00293_code_explain_7801"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7802"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "language": "json", "type": "api-response_explanation", "context": "Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. 
(T", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7803"} {"prompt": "Complete this go code:\n\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n \"k8s.io/client-go/kubernetes\"\n \"k8s.io/client-go/tools/clientcmd\"\n)\n", "completion": "func main() {\n // uses the current context in kubeconfig\n // path-to-kubeconfig -- for example, /root/.kube/config\n config, _ := clientcmd.BuildConfigFromFlags(\"\", \"\")\n // creates the clientset\n clientset, _ := kubernetes.NewForConfig(config)\n // access the API to list pods\n pods, _ := clientset.CoreV1().Pods(\"\").List(context.TODO(), v1.ListOptions{})\n fmt.Printf(\"There are %d pods in the cluster\\n\", len(pods.Items))\n}", "language": "go", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7804"} {"prompt": "Explain this go code:\n\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n \"k8s.io/client-go/kubernetes\"\n \"k8s.io/client-go/tools/clientcmd\"\n)\n\nfunc main() {\n // uses the current context in kubeconfig\n // path-to-kubeconfig -- for example, /root/.kube/config\n config, _ := clientcmd.BuildConfigFromFlags(\"\", \"\")\n // creates the clientset\n clientset, _ := kubernetes.NewForConfig(config)\n // access the API to list pods\n pods, _ := clientset.CoreV1().Pods(\"\").List(context.TODO(), v1.ListOptions{})\n fmt.Printf(\"There are %d pods in the cluster\\n\", len(pods.Items))\n}", "completion": "This go code Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "language": "go", "type": 
"code-example_explanation", "context": "Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7805"} {"prompt": "Complete this python code:\n\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\n", "completion": "v1=client.CoreV1Api()\nprint(\"Listing pods with their IPs:\")\nret = v1.list_pod_for_all_namespaces(watch=False)\nfor i in ret.items:\n print(\"%s\\t%s\\t%s\" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))", "language": "python", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To install the Java Client, run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7806"} {"prompt": "Explain this python code:\n\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\n\nv1=client.CoreV1Api()\nprint(\"Listing pods with their IPs:\")\nret = v1.list_pod_for_all_namespaces(watch=False)\nfor i in ret.items:\n print(\"%s\\t%s\\t%s\" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))", "completion": "This python code Access Clusters Using the Kubernetes API | To install the Java Client, run:", "language": "python", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To install the Java Client, run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7807"} {"prompt": "Complete this text code:\n\n# Clone java library\ngit clone --recursive https://github.com/kubernetes-client/java\n", "completion": "# Installing project artifacts, POM etc:\ncd java\nmvn install", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see 
which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7808"} {"prompt": "Explain this text code:\n\n# Clone java library\ngit clone --recursive https://github.com/kubernetes-client/java\n\n# Installing project artifacts, POM etc:\ncd java\nmvn install", "completion": "This text code Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see which versions are supported.", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7809"} {"prompt": "Complete this java code:\n\npackage io.kubernetes.client.examples;\n\nimport io.kubernetes.client.ApiClient;\nimport io.kubernetes.client.ApiException;\nimport io.kubernetes.client.Configuration;\nimport io.kubernetes.client.apis.CoreV1Api;\nimport io.kubernetes.client.models.V1Pod;\nimport io.kubernetes.client.models.V1PodList;\nimport io.kubernetes.client.util.ClientBuilder;\nimport io.kubernetes.client.util.KubeConfig;\nimport java.io.FileReader;\nimport java.io.IOException;\n\n/**\n * A simple example of how to use the Java API from an application outside a kubernetes cluster\n *\n *

    Easiest way to run this: mvn exec:java\n * -Dexec.mainClass=\"io.kubernetes.client.examples.KubeConfigFileClientExample\"\n *\n */\npublic class KubeConfigFileClientExample {\n public static void main(String[] args) throws IOException, ApiException {", "completion": "\n // file path to your KubeConfig\n String kubeConfigPath = \"~/.kube/config\";\n\n // loading the out-of-cluster config, a kubeconfig from file-system\n ApiClient client =\n ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();\n\n // set the global default api-client to the in-cluster one from above\n Configuration.setDefaultApiClient(client);\n\n // the CoreV1Api loads default api-client from global configuration.\n CoreV1Api api = new CoreV1Api();\n\n // invokes the CoreV1Api client\n V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);\n System.out.println(\"Listing all pods: \");\n for (V1Pod item : list.getItems()) {\n System.out.println(item.getMetadata().getName());\n }\n }\n}", "language": "java", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. 
See https://github.com/kubernetes-cl", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7810"} {"prompt": "Explain this java code:\n\npackage io.kubernetes.client.examples;\n\nimport io.kubernetes.client.ApiClient;\nimport io.kubernetes.client.ApiException;\nimport io.kubernetes.client.Configuration;\nimport io.kubernetes.client.apis.CoreV1Api;\nimport io.kubernetes.client.models.V1Pod;\nimport io.kubernetes.client.models.V1PodList;\nimport io.kubernetes.client.util.ClientBuilder;\nimport io.kubernetes.client.util.KubeConfig;\nimport java.io.FileReader;\nimport java.io.IOException;\n\n/**\n * A simple example of how to use the Java API from an application outside a kubernetes cluster\n *\n *

    Easiest way to run this: mvn exec:java\n * -Dexec.mainClass=\"io.kubernetes.client.examples.KubeConfigFileClientExample\"\n *\n */\npublic class KubeConfigFileClientExample {\n public static void main(String[] args) throws IOException, ApiException {\n\n // file path to your KubeConfig\n String kubeConfigPath = \"~/.kube/config\";\n\n // loading the out-of-cluster config, a kubeconfig from file-system\n ApiClient client =\n ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();\n\n // set the global default api-client to the in-cluster one from above\n Configuration.setDefaultApiClient(client);\n\n // the CoreV1Api loads default api-client from global configuration.\n CoreV1Api api = new CoreV1Api();\n\n // invokes the CoreV1Api client\n V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);\n System.out.println(\"Listing all pods: \");\n for (V1Pod item : list.getItems()) {\n System.out.println(item.getMetadata().getName());\n }\n }\n}", "completion": "This java code Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. See https://github.com/kubernetes-cl", "language": "java", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. 
See https://github.com/kubernetes-cl", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7811"} {"prompt": "Complete this text code:\n\nusing System;\nusing k8s;\n\nnamespace simple\n{\n internal class PodList\n {\n private static void Main(string[] args)\n {\n var config = KubernetesClientConfiguration.BuildDefaultConfig();\n IKubernetes client = new Kubernetes(config);\n Console.WriteLine(\"Starting Request!\");", "completion": "\n var list = client.ListNamespacedPod(\"default\");\n foreach (var item in list.Items)\n {\n Console.WriteLine(item.Metadata.Name);\n }\n if (list.Items.Count == 0)\n {\n Console.WriteLine(\"Empty!\");\n }\n }\n }\n}", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7812"} {"prompt": "Explain this text code:\n\nusing System;\nusing k8s;\n\nnamespace simple\n{\n internal class PodList\n {\n private static void Main(string[] args)\n {\n var config = KubernetesClientConfiguration.BuildDefaultConfig();\n IKubernetes client = new Kubernetes(config);\n Console.WriteLine(\"Starting Request!\");\n\n var list = client.ListNamespacedPod(\"default\");\n foreach (var item in list.Items)\n {\n Console.WriteLine(item.Metadata.Name);\n }\n if (list.Items.Count == 0)\n {\n Console.WriteLine(\"Empty!\");\n }\n }\n }\n}", "completion": "This text code Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. 
See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7813"} {"prompt": "Complete this text code:\n\nconst k8s = require('@kubernetes/client-node');\n\nconst kc = new k8s.KubeConfig();\nkc.loadFromDefault();\n", "completion": "const k8sApi = kc.makeApiClient(k8s.CoreV1Api);\n\nk8sApi.listNamespacedPod({ namespace: 'default' }).then((res) => {\n console.log(res);\n});", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7814"} {"prompt": "Explain this text code:\n\nconst k8s = require('@kubernetes/client-node');\n\nconst kc = new k8s.KubeConfig();\nkc.loadFromDefault();\n\nconst k8sApi = kc.makeApiClient(k8s.CoreV1Api);\n\nk8sApi.listNamespacedPod({ namespace: 'default' }).then((res) => {\n console.log(res);\n});", "completion": "This text code Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7815"} {"prompt": "Complete this text 
code:\n\nexampleWithKubeConfig :: IO ()\nexampleWithKubeConfig = do\n oidcCache <- atomically $ newTVar $ Map.fromList []\n (mgr, kcfg) <- mkKubeClientConfig oidcCache $ KubeConfigFile \"/path/to/kubeconfig\"", "completion": " dispatchMime\n mgr\n kcfg\n (CoreV1.listPodForAllNamespaces (Accept MimeJSON))\n >>= print", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7816"} {"prompt": "Explain this text code:\n\nexampleWithKubeConfig :: IO ()\nexampleWithKubeConfig = do\n oidcCache <- atomically $ newTVar $ Map.fromList []\n (mgr, kcfg) <- mkKubeClientConfig oidcCache $ KubeConfigFile \"/path/to/kubeconfig\"\n dispatchMime\n mgr\n kcfg\n (CoreV1.listPodForAllNamespaces (Accept MimeJSON))\n >>= print", "completion": "This text code Access Clusters Using the Kubernetes API |", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7817"} {"prompt": "Complete this json code:\n\n{\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Access Clusters Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7818"} {"prompt": "Explain this json code:\n\n{\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Access Clusters 
Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "language": "json", "type": "api-response_explanation", "context": "Access Clusters Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7819"} {"prompt": "Complete this shell code:\n\n# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:\nkubectl config view -o jsonpath='{\"Cluster name\\tServer\\n\"}{range .clusters[*]}{.name}{\"\\t\"}{.cluster.server}{\"\\n\"}{end}'\n\n# Select name of cluster you want to interact with from above output:\nexport CLUSTER_NAME=\"some_server_name\"\n\n# Point to the API server referring the cluster name\nAPISERVER=$(kubectl config view -o jsonpath=\"{.clusters[?(@.name==\\\"$CLUSTER_NAME\\\")].cluster.server}\")\n\n# Create a secret to hold a token for the default service account\nkubectl apply -f - </dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone\n\n# Get the token value\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\n# Explore the API with TOKEN\ncurl -X GET $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "shell", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7820"} {"prompt": "Explain this shell code:\n\n# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:\nkubectl config view -o jsonpath='{\"Cluster name\\tServer\\n\"}{range .clusters[*]}{.name}{\"\\t\"}{.cluster.server}{\"\\n\"}{end}'\n\n# Select name of cluster you want to interact with from above 
output:\nexport CLUSTER_NAME=\"some_server_name\"\n\n# Point to the API server referring the cluster name\nAPISERVER=$(kubectl config view -o jsonpath=\"{.clusters[?(@.name==\\\"$CLUSTER_NAME\\\")].cluster.server}\")\n\n# Create a secret to hold a token for the default service account\nkubectl apply -f - </dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone\n\n# Get the token value\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\n# Explore the API with TOKEN\ncurl -X GET $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This shell code Access Clusters Using the Kubernetes API | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7821"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7822"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. 
This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "language": "json", "type": "api-response_explanation", "context": "Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7823"} {"prompt": "Explain this text code:\n\ngo get k8s.io/client-go@kubernetes-", "completion": "This text code Access Clusters Using the Kubernetes API | The Go client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the API server. See this example:", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | The Go client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the API server. 
See this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7824"} {"prompt": "Complete this golang code:\n\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n \"k8s.io/client-go/kubernetes\"\n \"k8s.io/client-go/tools/clientcmd\"\n)\n", "completion": "func main() {\n // uses the current context in kubeconfig\n // path-to-kubeconfig -- for example, /root/.kube/config\n config, _ := clientcmd.BuildConfigFromFlags(\"\", \"\")\n // creates the clientset\n clientset, _ := kubernetes.NewForConfig(config)\n // access the API to list pods\n pods, _ := clientset.CoreV1().Pods(\"\").List(context.TODO(), v1.ListOptions{})\n fmt.Printf(\"There are %d pods in the cluster\\n\", len(pods.Items))\n}", "language": "golang", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7825"} {"prompt": "Explain this golang code:\n\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n \"k8s.io/client-go/kubernetes\"\n \"k8s.io/client-go/tools/clientcmd\"\n)\n\nfunc main() {\n // uses the current context in kubeconfig\n // path-to-kubeconfig -- for example, /root/.kube/config\n config, _ := clientcmd.BuildConfigFromFlags(\"\", \"\")\n // creates the clientset\n clientset, _ := kubernetes.NewForConfig(config)\n // access the API to list pods\n pods, _ := clientset.CoreV1().Pods(\"\").List(context.TODO(), v1.ListOptions{})\n fmt.Printf(\"There are %d pods in the cluster\\n\", len(pods.Items))\n}", "completion": "This golang code Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "language": "golang", "type": 
"code-example_explanation", "context": "Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7826"} {"prompt": "Complete this python code:\n\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\n", "completion": "v1=client.CoreV1Api()\nprint(\"Listing pods with their IPs:\")\nret = v1.list_pod_for_all_namespaces(watch=False)\nfor i in ret.items:\n print(\"%s\\t%s\\t%s\" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))", "language": "python", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To install the Java Client, run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7827"} {"prompt": "Explain this python code:\n\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\n\nv1=client.CoreV1Api()\nprint(\"Listing pods with their IPs:\")\nret = v1.list_pod_for_all_namespaces(watch=False)\nfor i in ret.items:\n print(\"%s\\t%s\\t%s\" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))", "completion": "This python code Access Clusters Using the Kubernetes API | To install the Java Client, run:", "language": "python", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To install the Java Client, run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7828"} {"prompt": "Complete this shell code:\n\n# Clone java library\ngit clone --recursive https://github.com/kubernetes-client/java\n", "completion": "# Installing project artifacts, POM etc:\ncd java\nmvn install", "language": "shell", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to 
see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7829"} {"prompt": "Explain this shell code:\n\n# Clone java library\ngit clone --recursive https://github.com/kubernetes-client/java\n\n# Installing project artifacts, POM etc:\ncd java\nmvn install", "completion": "This shell code Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see which versions are supported.", "language": "shell", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7830"} {"prompt": "Complete this java code:\n\npackage io.kubernetes.client.examples;\n\nimport io.kubernetes.client.ApiClient;\nimport io.kubernetes.client.ApiException;\nimport io.kubernetes.client.Configuration;\nimport io.kubernetes.client.apis.CoreV1Api;\nimport io.kubernetes.client.models.V1Pod;\nimport io.kubernetes.client.models.V1PodList;\nimport io.kubernetes.client.util.ClientBuilder;\nimport io.kubernetes.client.util.KubeConfig;\nimport java.io.FileReader;\nimport java.io.IOException;\n\n/**\n * A simple example of how to use the Java API from an application outside a kubernetes cluster\n *\n *

    Easiest way to run this: mvn exec:java\n * -Dexec.mainClass=\"io.kubernetes.client.examples.KubeConfigFileClientExample\"\n *\n */\npublic class KubeConfigFileClientExample {\n public static void main(String[] args) throws IOException, ApiException {", "completion": "\n // file path to your KubeConfig\n String kubeConfigPath = \"~/.kube/config\";\n\n // loading the out-of-cluster config, a kubeconfig from file-system\n ApiClient client =\n ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();\n\n // set the global default api-client to the in-cluster one from above\n Configuration.setDefaultApiClient(client);\n\n // the CoreV1Api loads default api-client from global configuration.\n CoreV1Api api = new CoreV1Api();\n\n // invokes the CoreV1Api client\n V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);\n System.out.println(\"Listing all pods: \");\n for (V1Pod item : list.getItems()) {\n System.out.println(item.getMetadata().getName());\n }\n }\n}", "language": "java", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. 
See https://github.com/kubernetes-cl", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7831"} {"prompt": "Explain this java code:\n\npackage io.kubernetes.client.examples;\n\nimport io.kubernetes.client.ApiClient;\nimport io.kubernetes.client.ApiException;\nimport io.kubernetes.client.Configuration;\nimport io.kubernetes.client.apis.CoreV1Api;\nimport io.kubernetes.client.models.V1Pod;\nimport io.kubernetes.client.models.V1PodList;\nimport io.kubernetes.client.util.ClientBuilder;\nimport io.kubernetes.client.util.KubeConfig;\nimport java.io.FileReader;\nimport java.io.IOException;\n\n/**\n * A simple example of how to use the Java API from an application outside a kubernetes cluster\n *\n *

    Easiest way to run this: mvn exec:java\n * -Dexec.mainClass=\"io.kubernetes.client.examples.KubeConfigFileClientExample\"\n *\n */\npublic class KubeConfigFileClientExample {\n public static void main(String[] args) throws IOException, ApiException {\n\n // file path to your KubeConfig\n String kubeConfigPath = \"~/.kube/config\";\n\n // loading the out-of-cluster config, a kubeconfig from file-system\n ApiClient client =\n ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();\n\n // set the global default api-client to the in-cluster one from above\n Configuration.setDefaultApiClient(client);\n\n // the CoreV1Api loads default api-client from global configuration.\n CoreV1Api api = new CoreV1Api();\n\n // invokes the CoreV1Api client\n V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);\n System.out.println(\"Listing all pods: \");\n for (V1Pod item : list.getItems()) {\n System.out.println(item.getMetadata().getName());\n }\n }\n}", "completion": "This java code Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. See https://github.com/kubernetes-cl", "language": "java", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. 
See https://github.com/kubernetes-cl", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7832"} {"prompt": "Explain this shell code:\n\ndotnet add package KubernetesClient --version 1.6.1", "completion": "This shell code Access Clusters Using the Kubernetes API | The dotnet client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the API server. See this example:", "language": "shell", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | The dotnet client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the API server. See this example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7833"} {"prompt": "Complete this csharp code:\n\nusing System;\nusing k8s;\n\nnamespace simple\n{\n internal class PodList\n {\n private static void Main(string[] args)\n {\n var config = KubernetesClientConfiguration.BuildDefaultConfig();\n IKubernetes client = new Kubernetes(config);\n Console.WriteLine(\"Starting Request!\");", "completion": "\n var list = client.ListNamespacedPod(\"default\");\n foreach (var item in list.Items)\n {\n Console.WriteLine(item.Metadata.Name);\n }\n if (list.Items.Count == 0)\n {\n Console.WriteLine(\"Empty!\");\n }\n }\n }\n}", "language": "csharp", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. 
See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7834"} {"prompt": "Explain this csharp code:\n\nusing System;\nusing k8s;\n\nnamespace simple\n{\n internal class PodList\n {\n private static void Main(string[] args)\n {\n var config = KubernetesClientConfiguration.BuildDefaultConfig();\n IKubernetes client = new Kubernetes(config);\n Console.WriteLine(\"Starting Request!\");\n\n var list = client.ListNamespacedPod(\"default\");\n foreach (var item in list.Items)\n {\n Console.WriteLine(item.Metadata.Name);\n }\n if (list.Items.Count == 0)\n {\n Console.WriteLine(\"Empty!\");\n }\n }\n }\n}", "completion": "This csharp code Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "language": "csharp", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. 
See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7835"} {"prompt": "Complete this javascript code:\n\nconst k8s = require('@kubernetes/client-node');\n\nconst kc = new k8s.KubeConfig();\nkc.loadFromDefault();\n", "completion": "const k8sApi = kc.makeApiClient(k8s.CoreV1Api);\n\nk8sApi.listNamespacedPod({ namespace: 'default' }).then((res) => {\n console.log(res);\n});", "language": "javascript", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7836"} {"prompt": "Explain this javascript code:\n\nconst k8s = require('@kubernetes/client-node');\n\nconst kc = new k8s.KubeConfig();\nkc.loadFromDefault();\n\nconst k8sApi = kc.makeApiClient(k8s.CoreV1Api);\n\nk8sApi.listNamespacedPod({ namespace: 'default' }).then((res) => {\n console.log(res);\n});", "completion": "This javascript code Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "language": "javascript", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7837"} {"prompt": "Complete this haskell code:\n\nexampleWithKubeConfig :: IO ()\nexampleWithKubeConfig = do\n oidcCache <- atomically $ newTVar $ Map.fromList []\n (mgr, kcfg) <- mkKubeClientConfig oidcCache $ KubeConfigFile \"/path/to/kubeconfig\"", "completion": " dispatchMime\n mgr\n kcfg\n (CoreV1.listPodForAllNamespaces 
(Accept MimeJSON))\n >>= print", "language": "haskell", "type": "code-example", "context": "Access Clusters Using the Kubernetes API |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7838"} {"prompt": "Explain this haskell code:\n\nexampleWithKubeConfig :: IO ()\nexampleWithKubeConfig = do\n oidcCache <- atomically $ newTVar $ Map.fromList []\n (mgr, kcfg) <- mkKubeClientConfig oidcCache $ KubeConfigFile \"/path/to/kubeconfig\"\n dispatchMime\n mgr\n kcfg\n (CoreV1.listPodForAllNamespaces (Accept MimeJSON))\n >>= print", "completion": "This haskell code Access Clusters Using the Kubernetes API |", "language": "haskell", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7839"} {"prompt": "Complete this json code:\n\n{\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Access Clusters Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7840"} {"prompt": "Explain this json code:\n\n{\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Access Clusters Using the Kubernetes API | It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "language": "json", "type": "api-response_explanation", "context": "Access Clusters Using the Kubernetes API | It is possible to 
avoid using kubectl proxy by passing an authentication token directly to the API server, like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7841"} {"prompt": "Complete this shell code:\n\n# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:\nkubectl config view -o jsonpath='{\"Cluster name\\tServer\\n\"}{range .clusters[*]}{.name}{\"\\t\"}{.cluster.server}{\"\\n\"}{end}'\n\n# Select name of cluster you want to interact with from above output:\nexport CLUSTER_NAME=\"some_server_name\"\n\n# Point to the API server referring the cluster name\nAPISERVER=$(kubectl config view -o jsonpath=\"{.clusters[?(@.name==\\\"$CLUSTER_NAME\\\")].cluster.server}\")\n\n# Create a secret to hold a token for the default service account\nkubectl apply -f - </dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone\n\n# Get the token value\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\n# Explore the API with TOKEN\ncurl -X GET $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "shell", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7842"} {"prompt": "Explain this shell code:\n\n# Check all possible clusters, as your .KUBECONFIG may have multiple contexts:\nkubectl config view -o jsonpath='{\"Cluster name\\tServer\\n\"}{range .clusters[*]}{.name}{\"\\t\"}{.cluster.server}{\"\\n\"}{end}'\n\n# Select name of cluster you want to interact with from above output:\nexport CLUSTER_NAME=\"some_server_name\"\n\n# Point to the API server referring the cluster name\nAPISERVER=$(kubectl config view -o jsonpath=\"{.clusters[?(@.name==\\\"$CLUSTER_NAME\\\")].cluster.server}\")\n\n# Create a secret to hold a token for the default 
service account\nkubectl apply -f - </dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone\n\n# Get the token value\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\n# Explore the API with TOKEN\ncurl -X GET $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This shell code Access Clusters Using the Kubernetes API | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7843"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7844"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. 
(T", "language": "json", "type": "api-response_explanation", "context": "Access Clusters Using the Kubernetes API | The above example uses the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7845"} {"prompt": "Complete this golang code:\n\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n \"k8s.io/client-go/kubernetes\"\n \"k8s.io/client-go/tools/clientcmd\"\n)\n", "completion": "func main() {\n // uses the current context in kubeconfig\n // path-to-kubeconfig -- for example, /root/.kube/config\n config, _ := clientcmd.BuildConfigFromFlags(\"\", \"\")\n // creates the clientset\n clientset, _ := kubernetes.NewForConfig(config)\n // access the API to list pods\n pods, _ := clientset.CoreV1().Pods(\"\").List(context.TODO(), v1.ListOptions{})\n fmt.Printf(\"There are %d pods in the cluster\\n\", len(pods.Items))\n}", "language": "golang", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7846"} {"prompt": "Explain this golang code:\n\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n \"k8s.io/client-go/kubernetes\"\n \"k8s.io/client-go/tools/clientcmd\"\n)\n\nfunc main() {\n // uses the current context in kubeconfig\n // path-to-kubeconfig -- for example, /root/.kube/config\n config, _ := clientcmd.BuildConfigFromFlags(\"\", \"\")\n // creates the clientset\n clientset, _ := kubernetes.NewForConfig(config)\n // access the API to list pods\n pods, _ := clientset.CoreV1().Pods(\"\").List(context.TODO(), 
v1.ListOptions{})\n fmt.Printf(\"There are %d pods in the cluster\\n\", len(pods.Items))\n}", "completion": "This golang code Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "language": "golang", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | If the application is deployed as a Pod in the cluster, see Accessing the API from within a Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7847"} {"prompt": "Complete this python code:\n\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\n", "completion": "v1=client.CoreV1Api()\nprint(\"Listing pods with their IPs:\")\nret = v1.list_pod_for_all_namespaces(watch=False)\nfor i in ret.items:\n print(\"%s\\t%s\\t%s\" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))", "language": "python", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To install the Java Client, run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7848"} {"prompt": "Explain this python code:\n\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\n\nv1=client.CoreV1Api()\nprint(\"Listing pods with their IPs:\")\nret = v1.list_pod_for_all_namespaces(watch=False)\nfor i in ret.items:\n print(\"%s\\t%s\\t%s\" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))", "completion": "This python code Access Clusters Using the Kubernetes API | To install the Java Client, run:", "language": "python", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To install the Java Client, run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7849"} {"prompt": "Complete this text code:\n\n# Clone java library\ngit clone 
--recursive https://github.com/kubernetes-client/java\n", "completion": "# Installing project artifacts, POM etc:\ncd java\nmvn install", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7850"} {"prompt": "Explain this text code:\n\n# Clone java library\ngit clone --recursive https://github.com/kubernetes-client/java\n\n# Installing project artifacts, POM etc:\ncd java\nmvn install", "completion": "This text code Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see which versions are supported.", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/java/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7851"} {"prompt": "Complete this java code:\n\npackage io.kubernetes.client.examples;\n\nimport io.kubernetes.client.ApiClient;\nimport io.kubernetes.client.ApiException;\nimport io.kubernetes.client.Configuration;\nimport io.kubernetes.client.apis.CoreV1Api;\nimport io.kubernetes.client.models.V1Pod;\nimport io.kubernetes.client.models.V1PodList;\nimport io.kubernetes.client.util.ClientBuilder;\nimport io.kubernetes.client.util.KubeConfig;\nimport java.io.FileReader;\nimport java.io.IOException;\n\n/**\n * A simple example of how to use the Java API from an application outside a kubernetes cluster\n *\n *

    Easiest way to run this: mvn exec:java\n * -Dexec.mainClass=\"io.kubernetes.client.examples.KubeConfigFileClientExample\"\n *\n */\npublic class KubeConfigFileClientExample {\n public static void main(String[] args) throws IOException, ApiException {", "completion": "\n // file path to your KubeConfig\n String kubeConfigPath = \"~/.kube/config\";\n\n // loading the out-of-cluster config, a kubeconfig from file-system\n ApiClient client =\n ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();\n\n // set the global default api-client to the in-cluster one from above\n Configuration.setDefaultApiClient(client);\n\n // the CoreV1Api loads default api-client from global configuration.\n CoreV1Api api = new CoreV1Api();\n\n // invokes the CoreV1Api client\n V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);\n System.out.println(\"Listing all pods: \");\n for (V1Pod item : list.getItems()) {\n System.out.println(item.getMetadata().getName());\n }\n }\n}", "language": "java", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. 
See https://github.com/kubernetes-cl", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7852"} {"prompt": "Explain this java code:\n\npackage io.kubernetes.client.examples;\n\nimport io.kubernetes.client.ApiClient;\nimport io.kubernetes.client.ApiException;\nimport io.kubernetes.client.Configuration;\nimport io.kubernetes.client.apis.CoreV1Api;\nimport io.kubernetes.client.models.V1Pod;\nimport io.kubernetes.client.models.V1PodList;\nimport io.kubernetes.client.util.ClientBuilder;\nimport io.kubernetes.client.util.KubeConfig;\nimport java.io.FileReader;\nimport java.io.IOException;\n\n/**\n * A simple example of how to use the Java API from an application outside a kubernetes cluster\n *\n *

    Easiest way to run this: mvn exec:java\n * -Dexec.mainClass=\"io.kubernetes.client.examples.KubeConfigFileClientExample\"\n *\n */\npublic class KubeConfigFileClientExample {\n public static void main(String[] args) throws IOException, ApiException {\n\n // file path to your KubeConfig\n String kubeConfigPath = \"~/.kube/config\";\n\n // loading the out-of-cluster config, a kubeconfig from file-system\n ApiClient client =\n ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();\n\n // set the global default api-client to the in-cluster one from above\n Configuration.setDefaultApiClient(client);\n\n // the CoreV1Api loads default api-client from global configuration.\n CoreV1Api api = new CoreV1Api();\n\n // invokes the CoreV1Api client\n V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);\n System.out.println(\"Listing all pods: \");\n for (V1Pod item : list.getItems()) {\n System.out.println(item.getMetadata().getName());\n }\n }\n}", "completion": "This java code Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. See https://github.com/kubernetes-cl", "language": "java", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To use dotnet client, run the following command: dotnet add package KubernetesClient --version 1.6.1. See dotnet Client Library page for more installation options. 
See https://github.com/kubernetes-cl", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7853"} {"prompt": "Complete this text code:\n\nusing System;\nusing k8s;\n\nnamespace simple\n{\n internal class PodList\n {\n private static void Main(string[] args)\n {\n var config = KubernetesClientConfiguration.BuildDefaultConfig();\n IKubernetes client = new Kubernetes(config);\n Console.WriteLine(\"Starting Request!\");", "completion": "\n var list = client.ListNamespacedPod(\"default\");\n foreach (var item in list.Items)\n {\n Console.WriteLine(item.Metadata.Name);\n }\n if (list.Items.Count == 0)\n {\n Console.WriteLine(\"Empty!\");\n }\n }\n }\n}", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7854"} {"prompt": "Explain this text code:\n\nusing System;\nusing k8s;\n\nnamespace simple\n{\n internal class PodList\n {\n private static void Main(string[] args)\n {\n var config = KubernetesClientConfiguration.BuildDefaultConfig();\n IKubernetes client = new Kubernetes(config);\n Console.WriteLine(\"Starting Request!\");\n\n var list = client.ListNamespacedPod(\"default\");\n foreach (var item in list.Items)\n {\n Console.WriteLine(item.Metadata.Name);\n }\n if (list.Items.Count == 0)\n {\n Console.WriteLine(\"Empty!\");\n }\n }\n }\n}", "completion": "This text code Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. 
See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | To install JavaScript client, run the following command: npm install @kubernetes/client-node. See https://github.com/kubernetes-client/javascript/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7855"} {"prompt": "Complete this text code:\n\nconst k8s = require('@kubernetes/client-node');\n\nconst kc = new k8s.KubeConfig();\nkc.loadFromDefault();\n", "completion": "const k8sApi = kc.makeApiClient(k8s.CoreV1Api);\n\nk8sApi.listNamespacedPod({ namespace: 'default' }).then((res) => {\n console.log(res);\n});", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7856"} {"prompt": "Explain this text code:\n\nconst k8s = require('@kubernetes/client-node');\n\nconst kc = new k8s.KubeConfig();\nkc.loadFromDefault();\n\nconst k8sApi = kc.makeApiClient(k8s.CoreV1Api);\n\nk8sApi.listNamespacedPod({ namespace: 'default' }).then((res) => {\n console.log(res);\n});", "completion": "This text code Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API | See https://github.com/kubernetes-client/haskell/releases to see which versions are supported.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7857"} {"prompt": "Complete this text 
code:\n\nexampleWithKubeConfig :: IO ()\nexampleWithKubeConfig = do\n oidcCache <- atomically $ newTVar $ Map.fromList []\n (mgr, kcfg) <- mkKubeClientConfig oidcCache $ KubeConfigFile \"/path/to/kubeconfig\"", "completion": " dispatchMime\n mgr\n kcfg\n (CoreV1.listPodForAllNamespaces (Accept MimeJSON))\n >>= print", "language": "text", "type": "code-example", "context": "Access Clusters Using the Kubernetes API |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_completion_7858"} {"prompt": "Explain this text code:\n\nexampleWithKubeConfig :: IO ()\nexampleWithKubeConfig = do\n oidcCache <- atomically $ newTVar $ Map.fromList []\n (mgr, kcfg) <- mkKubeClientConfig oidcCache $ KubeConfigFile \"/path/to/kubeconfig\"\n dispatchMime\n mgr\n kcfg\n (CoreV1.listPodForAllNamespaces (Accept MimeJSON))\n >>= print", "completion": "This text code Access Clusters Using the Kubernetes API |", "language": "text", "type": "code-example_explanation", "context": "Access Clusters Using the Kubernetes API |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/", "id": "k8s_00293_code_explain_7859"} {"prompt": "Explain this text code:\n\n* `resource.k8s.io/v1beta1`\n* `resource.k8s.io/v1beta2`", "completion": "This text code Set Up DRA in a Cluster | For more information, see Enabling or disabling API groups.", "language": "text", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster | For more information, see Enabling or disabling API groups.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7860"} {"prompt": "Explain this text code:\n\nerror: the server doesn't have a resource type \"deviceclasses\"", "completion": "This text code Set Up DRA in a Cluster | Try the following troubleshooting steps:", "language": "text", "type": "code-example_explanation", "context": "Set Up DRA in a 
Cluster | Try the following troubleshooting steps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7861"} {"prompt": "Explain this text code:\n\nNAME NODE DRIVER POOL AGE\ncluster-1-device-pool-1-driver.example.com-lqx8x cluster-1-node-1 driver.example.com cluster-1-device-pool-1-r1gc 7s\ncluster-1-device-pool-2-driver.example.com-29t7b cluster-1-node-2 driver.example.com cluster-1-device-pool-2-446z 8s", "completion": "This text code Set Up DRA in a Cluster | Try the following troubleshooting steps:", "language": "text", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster | Try the following troubleshooting steps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7862"} {"prompt": "Explain this bash code:\n\nkubectl get resourceslice -o yaml", "completion": "This bash code Set Up DRA in a Cluster | The output is similar to the following:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set Up DRA in a Cluster | The output is similar to the following:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7863"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\n# lines omitted for clarity\nspec:\n devices:\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi", "completion": " name: gpu-0\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-1\n driver: driver.example.com\n nodeName: cluster-1-node-1\n# lines omitted for clarity", "language": "yaml", "type": "kubernetes-manifest", "context": "Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_completion_7864"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\n# lines omitted for clarity\nspec:\n devices:\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-0\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-1\n driver: driver.example.com\n nodeName: cluster-1-node-1\n# lines omitted for clarity", "completion": "This yaml code Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7865"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: example-device-class", "completion": "spec:\n selectors:\n - cel:\n expression: |-\n device.driver == \"driver.example.com\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_completion_7866"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: example-device-class\nspec:\n selectors:\n - cel:\n expression: |-\n device.driver == \"driver.example.com\"", "completion": "This yaml code Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Up DRA in a Cluster | Create the DeviceClass in your 
cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7867"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/deviceclass.yaml", "completion": "This bash code Set Up DRA in a Cluster | To delete the DeviceClass that you created in this task, run the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set Up DRA in a Cluster | To delete the DeviceClass that you created in this task, run the following command:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7868"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/deviceclass.yaml", "completion": "This bash code Set Up DRA in a Cluster |", "language": "bash", "type": "kubectl-command_explanation", "context": "Set Up DRA in a Cluster |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7869"} {"prompt": "Explain this text code:\n\n* `resource.k8s.io/v1beta1`\n* `resource.k8s.io/v1beta2`", "completion": "This text code Set Up DRA in a Cluster | For more information, see Enabling or disabling API groups.", "language": "text", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster | For more information, see Enabling or disabling API groups.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7870"} {"prompt": "Explain this text code:\n\nerror: the server doesn't have a resource type \"deviceclasses\"", "completion": "This text code Set Up DRA in a Cluster | Try the following troubleshooting steps:", "language": "text", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster | Try the following troubleshooting steps:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7871"} {"prompt": "Explain this text code:\n\nNAME NODE DRIVER POOL AGE\ncluster-1-device-pool-1-driver.example.com-lqx8x cluster-1-node-1 driver.example.com cluster-1-device-pool-1-r1gc 7s\ncluster-1-device-pool-2-driver.example.com-29t7b cluster-1-node-2 driver.example.com cluster-1-device-pool-2-446z 8s", "completion": "This text code Set Up DRA in a Cluster | Try the following troubleshooting steps:", "language": "text", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster | Try the following troubleshooting steps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7872"} {"prompt": "Explain this shell code:\n\nkubectl get resourceslice -o yaml", "completion": "This shell code Set Up DRA in a Cluster | The output is similar to the following:", "language": "shell", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster | The output is similar to the following:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7873"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\n# lines omitted for clarity\nspec:\n devices:\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi", "completion": " name: gpu-0\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-1\n driver: driver.example.com\n nodeName: cluster-1-node-1\n# lines omitted for clarity", "language": "yaml", "type": "kubernetes-manifest", "context": "Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": 
"k8s_00297_code_completion_7874"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\n# lines omitted for clarity\nspec:\n devices:\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-0\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-1\n driver: driver.example.com\n nodeName: cluster-1-node-1\n# lines omitted for clarity", "completion": "This yaml code Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7875"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: example-device-class", "completion": "spec:\n selectors:\n - cel:\n expression: |-\n device.driver == \"driver.example.com\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_completion_7876"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: example-device-class\nspec:\n selectors:\n - cel:\n expression: |-\n device.driver == \"driver.example.com\"", "completion": "This yaml code Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7877"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/dra/deviceclass.yaml", "completion": "This shell code Set Up DRA in a Cluster | To delete the DeviceClass that you created in this task, run the following command:", "language": "shell", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster | To delete the DeviceClass that you created in this task, run the following command:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7878"} {"prompt": "Explain this shell code:\n\nkubectl delete -f https://k8s.io/examples/dra/deviceclass.yaml", "completion": "This shell code Set Up DRA in a Cluster |", "language": "shell", "type": "code-example_explanation", "context": "Set Up DRA in a Cluster |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7879"} {"prompt": "Explain this bash code:\n\nkubectl get resourceslice -o yaml", "completion": "This bash code Set Up DRA in a Cluster | The output is similar to the following:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set Up DRA in a Cluster | The output is similar to the following:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7880"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\n# lines omitted for clarity\nspec:\n devices:\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi", "completion": " name: gpu-0\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-1\n driver: driver.example.com\n nodeName: cluster-1-node-1\n# lines omitted for clarity", "language": 
"yaml", "type": "kubernetes-manifest", "context": "Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_completion_7881"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\n# lines omitted for clarity\nspec:\n devices:\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-0\n - attributes:\n type:\n string: gpu\n capacity:\n memory:\n value: 64Gi\n name: gpu-1\n driver: driver.example.com\n nodeName: cluster-1-node-1\n# lines omitted for clarity", "completion": "This yaml code Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Up DRA in a Cluster | You can also check the driver provider's documentation for available properties and values.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7882"} {"prompt": "Complete this yaml code:\n\ndra/deviceclass.yaml\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: example-device-class", "completion": "spec:\n selectors:\n - cel:\n expression: |-\n device.driver == \"driver.example.com\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_completion_7883"} {"prompt": "Explain this yaml code:\n\ndra/deviceclass.yaml\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: example-device-class\nspec:\n selectors:\n - cel:\n expression: |-\n device.driver == \"driver.example.com\"", 
"completion": "This yaml code Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Up DRA in a Cluster | Create the DeviceClass in your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7884"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/deviceclass.yaml", "completion": "This bash code Set Up DRA in a Cluster | To delete the DeviceClass that you created in this task, run the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Set Up DRA in a Cluster | To delete the DeviceClass that you created in this task, run the following command:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7887"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/deviceclass.yaml", "completion": "This bash code Set Up DRA in a Cluster |", "language": "bash", "type": "kubectl-command_explanation", "context": "Set Up DRA in a Cluster |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/", "id": "k8s_00297_code_explain_7888"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "language": "yaml", "type": "kubernetes-manifest", "context": "CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/", "id": "k8s_00298_code_completion_7889"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "completion": "This yaml code CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/", "id": "k8s_00298_code_explain_7890"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/hardening-guide/scheduler/", "id": "k8s_00301_code_completion_7895"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/hardening-guide/scheduler/", "id": "k8s_00301_code_explain_7896"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/hardening-guide/scheduler/", "id": "k8s_00301_code_completion_7897"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/hardening-guide/scheduler/", "id": "k8s_00301_code_explain_7898"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/hardening-guide/scheduler/", "id": "k8s_00301_code_completion_7899"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/security/hardening-guide/scheduler/", "id": "k8s_00301_code_explain_7900"} {"prompt": "Complete this yaml code:\n\ncat <<EOF >./example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: example-redis-config\ndata:\n redis-config: \"\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7901"} {"prompt": "Explain this yaml code:\n\ncat <<EOF >./example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config\ndata:\n redis-config: \"\"\nEOF", "completion": "This yaml code Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7902"} {"prompt": "Explain this bash code:\n\nkubectl apply -f example-redis-config.yaml\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml", "completion": "This bash code Configuring Redis using a ConfigMap | Examine the contents of the Redis pod manifest and note the following:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap | Examine the contents of the Redis pod manifest and note the following:", "source": 
"https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7903"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379", "completion": " resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", "language": "yaml", "type": "pod-definition", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7904"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379\n resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", "completion": "This yaml code Configuring Redis using a ConfigMap | Examine the created objects:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7905"} {"prompt": "Explain this bash code:\n\nkubectl get pod/redis 
configmap/example-redis-config", "completion": "This bash code Configuring Redis using a ConfigMap | You should see the following output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7906"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/redis 1/1 Running 0 8s", "completion": "\nNAME DATA AGE\nconfigmap/example-redis-config 1 14s", "language": "text", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Recall that we left redis-config key in the example-redis-config ConfigMap blank:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7907"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/redis 1/1 Running 0 8s\n\nNAME DATA AGE\nconfigmap/example-redis-config 1 14s", "completion": "This text code Configuring Redis using a ConfigMap | Recall that we left redis-config key in the example-redis-config ConfigMap blank:", "language": "text", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Recall that we left redis-config key in the example-redis-config ConfigMap blank:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7908"} {"prompt": "Complete this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>", "completion": "\nData\n====\nredis-config:", "language": "text", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": 
"k8s_00302_code_completion_7909"} {"prompt": "Explain this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\nredis-config:", "completion": "This text code Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:", "language": "text", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7910"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config", "completion": "data:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7911"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config\ndata:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "completion": "This yaml code Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7912"} {"prompt": "Complete this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n", "completion": "Data\n====\nredis-config:\n----\nmaxmemory 2mb\nmaxmemory-policy allkeys-lru", "language": "text", "type": "code-example", "context": "Configuring Redis 
using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7913"} {"prompt": "Explain this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\nredis-config:\n----\nmaxmemory 2mb\nmaxmemory-policy allkeys-lru", "completion": "This text code Configuring Redis using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "language": "text", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7914"} {"prompt": "Explain this bash code:\n\nkubectl delete pod redis\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml", "completion": "This bash code Configuring Redis using a ConfigMap | Now re-check the configuration values one last time:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap | Now re-check the configuration values one last time:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7915"} {"prompt": "Explain this bash code:\n\nkubectl delete pod/redis configmap/example-redis-config", "completion": "This bash code Configuring Redis using a ConfigMap |", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7916"} {"prompt": 
"Complete this shell code:\n\ncat <<EOF >./example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: example-redis-config\ndata:\n redis-config: \"\"\nEOF", "language": "shell", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7917"} {"prompt": "Explain this shell code:\n\ncat <<EOF >./example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config\ndata:\n redis-config: \"\"\nEOF", "completion": "This shell code Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "language": "shell", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7918"} {"prompt": "Explain this shell code:\n\nkubectl apply -f example-redis-config.yaml\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml", "completion": "This shell code Configuring Redis using a ConfigMap | Examine the contents of the Redis pod manifest and note the following:", "language": "shell", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Examine the contents of the Redis pod manifest and note the following:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7919"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n 
env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379", "completion": " resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", "language": "yaml", "type": "pod-definition", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7920"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379\n resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", "completion": "This yaml code Configuring Redis using a ConfigMap | Examine the created objects:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7921"} {"prompt": "Explain this shell code:\n\nkubectl get pod/redis configmap/example-redis-config", "completion": "This shell code Configuring Redis using a ConfigMap | You should see the following output:", "language": "shell", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | You should see the following output:", "source": 
"https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7922"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/redis 1/1 Running 0 8s", "completion": "\nNAME DATA AGE\nconfigmap/example-redis-config 1 14s", "language": "text", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Recall that we left redis-config key in the example-redis-config ConfigMap blank:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7923"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/redis 1/1 Running 0 8s\n\nNAME DATA AGE\nconfigmap/example-redis-config 1 14s", "completion": "This text code Configuring Redis using a ConfigMap | Recall that we left redis-config key in the example-redis-config ConfigMap blank:", "language": "text", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Recall that we left redis-config key in the example-redis-config ConfigMap blank:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7924"} {"prompt": "Complete this shell code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>", "completion": "\nData\n====\nredis-config:", "language": "shell", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7925"} {"prompt": "Explain this shell code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\nredis-config:", "completion": "This shell code Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the 
redis-cli tool to check the current configuration:", "language": "shell", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7926"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config", "completion": "data:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7927"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config\ndata:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "completion": "This yaml code Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7928"} {"prompt": "Complete this shell code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n", "completion": "Data\n====\nredis-config:\n----\nmaxmemory 2mb\nmaxmemory-policy allkeys-lru", "language": "shell", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7929"} {"prompt": "Explain this 
shell code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\nredis-config:\n----\nmaxmemory 2mb\nmaxmemory-policy allkeys-lru", "completion": "This shell code Configuring Redis using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "language": "shell", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7930"} {"prompt": "Explain this shell code:\n\nkubectl delete pod redis\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml", "completion": "This shell code Configuring Redis using a ConfigMap | Now re-check the configuration values one last time:", "language": "shell", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Now re-check the configuration values one last time:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7931"} {"prompt": "Explain this shell code:\n\nkubectl delete pod/redis configmap/example-redis-config", "completion": "This shell code Configuring Redis using a ConfigMap |", "language": "shell", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7932"} {"prompt": "Complete this yaml code:\n\ncat <<EOF >./example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: example-redis-config\ndata:\n redis-config: \"\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring Redis using a 
ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7933"} {"prompt": "Explain this yaml code:\n\ncat <<EOF >./example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config\ndata:\n redis-config: \"\"\nEOF", "completion": "This yaml code Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring Redis using a ConfigMap | Apply the ConfigMap created above, along with a Redis pod manifest:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7934"} {"prompt": "Explain this bash code:\n\nkubectl apply -f example-redis-config.yaml\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml", "completion": "This bash code Configuring Redis using a ConfigMap | Examine the contents of the Redis pod manifest and note the following:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap | Examine the contents of the Redis pod manifest and note the following:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7935"} {"prompt": "Complete this yaml code:\n\npods/config/redis-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379", "completion": " resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n 
volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", "language": "yaml", "type": "pod-definition", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7936"} {"prompt": "Explain this yaml code:\n\npods/config/redis-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379\n resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", "completion": "This yaml code Configuring Redis using a ConfigMap | Examine the created objects:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7937"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379", "completion": " resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", 
"language": "yaml", "type": "pod-definition", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7938"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis:8.0.2\n command:\n - redis-server\n - \"/redis-master/redis.conf\"\n env:\n - name: MASTER\n value: \"true\"\n ports:\n - containerPort: 6379\n resources:\n limits:\n cpu: \"0.1\"\n volumeMounts:\n - mountPath: /redis-master-data\n name: data\n - mountPath: /redis-master\n name: config\n volumes:\n - name: data\n emptyDir: {}\n - name: config\n configMap:\n name: example-redis-config\n items:\n - key: redis-config\n path: redis.conf", "completion": "This yaml code Configuring Redis using a ConfigMap | Examine the created objects:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configuring Redis using a ConfigMap | Examine the created objects:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7939"} {"prompt": "Explain this bash code:\n\nkubectl get pod/redis configmap/example-redis-config", "completion": "This bash code Configuring Redis using a ConfigMap | You should see the following output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7940"} {"prompt": "Complete this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>", "completion": "\nData\n====\nredis-config:", "language": "text", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the 
redis-cli tool to check the current configuration:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7941"} {"prompt": "Explain this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\nredis-config:", "completion": "This text code Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:", "language": "text", "type": "code-example_explanation", "context": "Configuring Redis using a ConfigMap | Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7942"} {"prompt": "Complete this yaml code:\n\npods/config/example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: example-redis-config\ndata:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7943"} {"prompt": "Explain this yaml code:\n\npods/config/example-redis-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config\ndata:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "completion": "This yaml code Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7944"} {"prompt": "Complete this yaml 
code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config", "completion": "data:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7945"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: example-redis-config\ndata:\n redis-config: |\n maxmemory 2mb\n maxmemory-policy allkeys-lru", "completion": "This yaml code Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring Redis using a ConfigMap | Apply the updated ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7946"} {"prompt": "Complete this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n", "completion": "Data\n====\nredis-config:\n----\nmaxmemory 2mb\nmaxmemory-policy allkeys-lru", "language": "text", "type": "code-example", "context": "Configuring Redis using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_completion_7947"} {"prompt": "Explain this text code:\n\nName: example-redis-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\nredis-config:\n----\nmaxmemory 2mb\nmaxmemory-policy allkeys-lru", "completion": "This text code Configuring Redis using a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "language": "text", "type": "code-example_explanation", "context": "Configuring Redis using 
a ConfigMap | Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7948"} {"prompt": "Explain this bash code:\n\nkubectl delete pod redis\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml", "completion": "This bash code Configuring Redis using a ConfigMap | Now re-check the configuration values one last time:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap | Now re-check the configuration values one last time:", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7949"} {"prompt": "Explain this bash code:\n\nkubectl delete pod/redis configmap/example-redis-config", "completion": "This bash code Configuring Redis using a ConfigMap |", "language": "bash", "type": "kubectl-command_explanation", "context": "Configuring Redis using a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/", "id": "k8s_00302_code_explain_7950"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass NAME --value=VALUE --global-default=BOOL [--dry-run=server|client|none]", "completion": "This bash code kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/", "id": "k8s_00303_code_explain_7951"} {"prompt": "Complete this text code:\n\n# Create a priority class named high-priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\"\n \n # Create a priority class named default-priority that is considered as the global default priority", "completion": " kubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"\n \n # Create a priority class named high-priority that cannot preempt pods with lower priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "language": "text", "type": "code-example", "context": "kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/", "id": "k8s_00303_code_completion_7952"} {"prompt": "Explain this text code:\n\n# Create a priority class named high-priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\"\n \n # Create a priority class named default-priority that is considered as the global default priority\n kubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"\n \n # Create a priority class named high-priority that cannot preempt pods with lower priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This text code kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/", "id": "k8s_00303_code_explain_7953"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass NAME --value=VALUE --global-default=BOOL [--dry-run=server|client|none]", "completion": "This bash code kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/", "id": "k8s_00303_code_explain_7954"} {"prompt": "Complete this text code:\n\n# Create a priority class named high-priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\"\n \n # Create a priority class named default-priority that is considered as the global default priority", "completion": " kubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"\n \n # Create a priority class named high-priority that cannot preempt pods with lower priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "language": "text", "type": "code-example", "context": "kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the 
template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/", "id": "k8s_00303_code_completion_7955"} {"prompt": "Explain this text code:\n\n# Create a priority class named high-priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\"\n \n # Create a priority class named default-priority that is considered as the global default priority\n kubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"\n \n # Create a priority class named high-priority that cannot preempt pods with lower priority\n kubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This text code kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create priorityclass | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_priorityclass/", "id": "k8s_00303_code_explain_7956"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo (TYPE NAME | TYPE/NAME) [flags]", "completion": "This bash code kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/", "id": "k8s_00304_code_explain_7957"} {"prompt": "Complete this text code:\n\n# Roll back to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Roll back to daemonset revision 3", "completion": " kubectl rollout undo daemonset/abc --to-revision=3\n \n # Roll back to the previous deployment with dry-run\n kubectl rollout undo --dry-run=server deployment/abc", "language": "text", "type": "code-example", "context": "kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/", "id": "k8s_00304_code_completion_7958"} {"prompt": "Explain this text code:\n\n# Roll back to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Roll back to daemonset revision 3\n kubectl rollout undo daemonset/abc --to-revision=3\n \n # Roll back to the previous deployment with dry-run\n kubectl rollout undo --dry-run=server deployment/abc", "completion": "This text code kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/", "id": "k8s_00304_code_explain_7959"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo (TYPE NAME | TYPE/NAME) [flags]", "completion": "This bash code kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/", "id": "k8s_00304_code_explain_7960"} {"prompt": "Complete this text code:\n\n# Roll back to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Roll back to daemonset revision 3", "completion": " kubectl rollout undo daemonset/abc --to-revision=3\n \n # Roll back to the previous deployment with dry-run\n kubectl rollout undo --dry-run=server deployment/abc", "language": "text", "type": "code-example", "context": "kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/", "id": "k8s_00304_code_completion_7961"} {"prompt": "Explain this text code:\n\n# Roll back to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Roll back to daemonset revision 3\n kubectl rollout undo daemonset/abc --to-revision=3\n \n # Roll back to the previous deployment with dry-run\n kubectl rollout undo --dry-run=server deployment/abc", "completion": "This text code kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout undo | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/", "id": "k8s_00304_code_explain_7962"} {"prompt": "Explain this text code:\n\nexport POD_NAME=\"$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\"\necho Name of the Pod: $POD_NAME", "completion": "This text code Viewing Pods and Nodes | To see the output of our application, run a curl request:", "language": "text", "type": "code-example_explanation", "context": "Viewing Pods and Nodes | To see the output of our application, run a curl request:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/", "id": "k8s_00306_code_explain_7963"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/", "completion": "This text code Viewing Pods and Nodes | The URL is the route to the API of the Pod.", "language": "text", "type": "code-example_explanation", "context": "Viewing Pods 
and Nodes | The URL is the route to the API of the Pod.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/", "id": "k8s_00306_code_explain_7964"} {"prompt": "Explain this shell code:\n\nexport POD_NAME=\"$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\"\necho Name of the Pod: $POD_NAME", "completion": "This shell code Viewing Pods and Nodes | To see the output of our application, run a curl request:", "language": "shell", "type": "code-example_explanation", "context": "Viewing Pods and Nodes | To see the output of our application, run a curl request:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/", "id": "k8s_00306_code_explain_7965"} {"prompt": "Explain this shell code:\n\ncurl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/", "completion": "This shell code Viewing Pods and Nodes | The URL is the route to the API of the Pod.", "language": "shell", "type": "code-example_explanation", "context": "Viewing Pods and Nodes | The URL is the route to the API of the Pod.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/", "id": "k8s_00306_code_explain_7966"} {"prompt": "Explain this text code:\n\nexport POD_NAME=\"$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\"\necho Name of the Pod: $POD_NAME", "completion": "This text code Viewing Pods and Nodes | To see the output of our application, run a curl request:", "language": "text", "type": "code-example_explanation", "context": "Viewing Pods and Nodes | To see the output of our application, run a curl request:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/", "id": "k8s_00306_code_explain_7967"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/", "completion": 
"This text code Viewing Pods and Nodes | The URL is the route to the API of the Pod.", "language": "text", "type": "code-example_explanation", "context": "Viewing Pods and Nodes | The URL is the route to the API of the Pod.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/", "id": "k8s_00306_code_explain_7968"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingWebhookConfiguration | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "MutatingWebhookConfiguration | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/", "id": "k8s_00307_code_explain_7969"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingWebhookConfiguration | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "MutatingWebhookConfiguration | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/", "id": "k8s_00307_code_explain_7970"} {"prompt": "Explain this text code:\n\nReason: Terminated\nMessage: Pod was terminated in response to imminent node shutdown.", "completion": "This text code 1 - Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the 
PriorityClass for Pods, provided that you enabled this feature", "language": "text", "type": "code-example_explanation", "context": "1 - Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7971"} {"prompt": "Complete this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7972"} {"prompt": "Explain this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This text code 1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "text", "type": "code-example_explanation", "context": "1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with 
value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7973"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% ", "completion": "node2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% ", "language": "text", "type": "code-example", "context": "2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7974"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% \nnode2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% ", "completion": "This text code 2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7975"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi", "completion": "kube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "language": "text", "type": "code-example", "context": "2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": 
"https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7976"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi\nkube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "completion": "This text code 2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7977"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}{{end}}{{\"\\n\"}}{{end}}'", "completion": "This bash code 2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7978"} {"prompt": "Explain this text code:\n\nnode1: 21474836480\nnode2: 42949664768\nnode3: ", "completion": "This text code 2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. 
It can be utilized to discover which nodes are provisioned with swap.", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. It can be utilized to discover which nodes are provisioned with swap.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7979"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This bash code 2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7980"} {"prompt": "Explain this text code:\n\nk8s-worker1: true\nk8s-worker2: true\nk8s-worker3: false", "completion": "This text code 2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7981"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | 
To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7982"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7983"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This bash code 8 - Logging Architecture | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "8 - Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7984"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This text code 8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. 
If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7985"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7986"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7987"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This bash code 8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "bash", "type": "kubectl-command_explanation", "context": "8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7988"} {"prompt": "Explain this bash code:\n\nkubectl get --raw 
\"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", "completion": "This bash code 8 - Logging Architecture | See the kubectl logs documentation for more details.", "language": "bash", "type": "kubectl-command_explanation", "context": "8 - Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7989"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7990"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7991"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7992"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n 
volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7993"} {"prompt": "Complete this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "text", "type": "code-example", "context": "8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7994"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This text code 8 - Logging Architecture | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7995"} {"prompt": "Complete this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "text", "type": "code-example", "context": "8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. 
If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7996"} {"prompt": "Explain this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This text code 8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7997"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_7998"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "completion": "This yaml code 8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_7999"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture 
|", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8000"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8001"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8002"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code 10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden 
metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8003"} {"prompt": "Explain this text code:\n\n# HELP some_counter this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 10 - Metrics For Kubernetes System Components | After deprecation", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | After deprecation", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8004"} {"prompt": "Explain this text code:\n\n# HELP some_counter (Deprecated since 1.15.0) this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the Show hidden metrics section.", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. 
To use a hidden metric, please refer to the Show hidden metrics section.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8005"} {"prompt": "Complete this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}", "completion": "cloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "language": "text", "type": "code-example", "context": "10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8006"} {"prompt": "Explain this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "completion": "This text code 10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. 
These metrics can be used to build capacity planning dashboards, assess current", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8007"} {"prompt": "Complete this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total", "completion": "container_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "language": "text", "type": "code-example", "context": "10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8008"} {"prompt": "Explain this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total\ncontainer_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "completion": "This text code 10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. 
The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8009"} {"prompt": "Explain this text code:\n\n--allow-metric-labels ,=', ...', ,=', ...', ...", "completion": "This text code 10 - Metrics For Kubernetes System Components | Here is an example:", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | Here is an example:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8010"} {"prompt": "Explain this text code:\n\n--allow-metric-labels number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'", "completion": "This text code 10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. 
You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8011"} {"prompt": "Explain this text code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This text code 10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8012"} {"prompt": "Explain this text code:\n\ncount(kube_pod_status_ready{condition=\"false\"}) by (namespace, pod)", "completion": "This text code 11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "language": "text", "type": "code-example_explanation", "context": "11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8013"} {"prompt": "Complete this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0", "completion": " for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod 
{{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "language": "text", "type": "code-example", "context": "11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8014"} {"prompt": "Explain this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0\n for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "completion": "This text code 11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "language": "text", "type": "code-example_explanation", "context": "11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8015"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/pods/metrics-server-v0.3.1-57c75779f-9p8wg: (1.512ms) 200 [pod_nanny/v0.0.0 (linux/amd64) kubernetes/$Format 10.56.1.19:51756]", "completion": "This text code 12 - System Logs | The message string may contain line breaks:", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | The message string may contain line breaks:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8016"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:79] This is a message\nwhich has a line break.", "completion": "This text code 12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. 
When parsing log files, you must also handle unstructured log messages.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8017"} {"prompt": "Explain this text code:\n\n \"\" =\"\" =\"\" ...", "completion": "This text code 12 - System Logs | Example:", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Example:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8018"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 controller_utils.go:116] \"Pod status updated\" pod=\"kube-system/kubedns\" status=\"ready\"", "completion": "This text code 12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8019"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:116] \"Example\" data=\"This is text with a line break\\nand \\\"quotation marks\\\".\" someInt=1 someFloat=0.1 someStruct={StringField: First line,\nsecond line.}", "completion": "This text code 12 - System Logs | Contextual logging builds on top of structured logging. It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Contextual logging builds on top of structured logging. 
It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8020"} {"prompt": "Complete this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . --feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8021"} {"prompt": "Explain this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8022"} {"prompt": "Complete this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8023"} {"prompt": "Explain this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 12 - System Logs | JSON output does not support many standard klog flags. 
For list of unsupported klog flags, see the Command line tool reference.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8024"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8025"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code 12 - System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8026"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This text code 12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8027"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes//proxy/logs/?query=/\"", "completion": "This bash code 12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "bash", "type": "kubectl-command_explanation", "context": "12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8028"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This text code 12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8029"} {"prompt": "Complete this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "text", "type": "code-example", "context": "13 - Traces For Kubernetes System 
Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8030"} {"prompt": "Explain this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This text code 13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "text", "type": "code-example_explanation", "context": "13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8031"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8032"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code 13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8033"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8034"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: 
KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "completion": "This yaml code 13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8035"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This text code 15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8036"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:", "completion": " - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8037"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8038"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8039"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8040"} {"prompt": "Explain this text code:\n\nReason: Terminated\nMessage: Pod was terminated in response to imminent node shutdown.", 
"completion": "This text code 1 - Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "language": "text", "type": "code-example_explanation", "context": "1 - Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8041"} {"prompt": "Complete this yaml code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8042"} {"prompt": "Explain this yaml code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This yaml code 1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Node 
Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8043"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% ", "completion": "node2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% ", "language": "text", "type": "code-example", "context": "2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8044"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% \nnode2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% ", "completion": "This text code 2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8045"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi", "completion": "kube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "language": "text", "type": "code-example", 
"context": "2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8046"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi\nkube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "completion": "This text code 2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8047"} {"prompt": "Explain this shell code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}{{end}}{{\"\\n\"}}{{end}}'", "completion": "This shell code 2 - Swap memory management | This will result in an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8048"} {"prompt": "Explain this text code:\n\nnode1: 21474836480\nnode2: 42949664768\nnode3: ", "completion": "This text code 2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and 
configuration. It can be utilized to discover which nodes are provisioned with swap.", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. It can be utilized to discover which nodes are provisioned with swap.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8049"} {"prompt": "Explain this shell code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This shell code 2 - Swap memory management | This will result in an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8050"} {"prompt": "Explain this text code:\n\nk8s-worker1: true\nk8s-worker2: true\nk8s-worker3: false", "completion": "This text code 2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "language": "text", "type": "code-example_explanation", "context": "2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8051"} {"prompt": "Explain this promql code:\n\nsum(rate(workqueue_adds_total{name=\"resource_claim\"}[5m]))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the 
controller isn't performing optimally. Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8052"} {"prompt": "Explain this promql code:\n\nsum(workqueue_depth{endpoint=\"kube-controller-manager\",\nname=\"resource_claim\"})", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8053"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(workqueue_work_duration_seconds_bucket{name=\"resource_claim\"}[5m]))\nby (le))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. 
Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8054"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(increase(scheduler_pod_scheduling_sli_duration_seconds_bucket[5m])) by\n(le))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8055"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(increase(scheduler_scheduling_algorithm_duration_seconds_bucket[5m])) by\n(le))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. 
You can observe this behavior from the kube", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8056"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_operations_duration_seconds_bucket{operation_name=\"PrepareResources\"}[5m]))\nby (le))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8057"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_operations_duration_seconds_bucket{operation_name=\"UnprepareResources\"}[5m]))\nby (le))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. 
You can observe this behavior", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8058"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_grpc_operations_duration_seconds_bucket{method_name=~\".*NodePrepareResources\"}[5m]))\nby (le))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications ha", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications ha", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8059"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_grpc_operations_duration_seconds_bucket{method_name=~\".*NodeUnprepareResources\"}[5m]))\nby (le))", "completion": "This promql code 7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. 
Most modern applications ha", "language": "promql", "type": "code-example_explanation", "context": "7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications ha", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8060"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8061"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8062"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This shell code 8 - Logging Architecture | The output is:", "language": "shell", "type": "code-example_explanation", "context": "8 - Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": 
"k8s_00309_code_explain_8063"} {"prompt": "Explain this console code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This console code 8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "console", "type": "code-example_explanation", "context": "8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8064"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8065"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", 
"id": "k8s_00309_code_explain_8066"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This shell code 8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "shell", "type": "code-example_explanation", "context": "8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8067"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", "completion": "This shell code 8 - Logging Architecture | See the kubectl logs documentation for more details.", "language": "shell", "type": "code-example_explanation", "context": "8 - Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8068"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8069"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8070"} {"prompt": "Explain this text code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml", "completion": "This text code 8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8071"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8072"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n 
echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8073"} {"prompt": "Complete this console code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "console", "type": "code-example", "context": "8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8074"} {"prompt": "Explain this console code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This console code 8 - Logging Architecture | The output is similar to:", "language": "console", "type": "code-example_explanation", "context": "8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8075"} {"prompt": "Complete this console code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", 
"completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "console", "type": "code-example", "context": "8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8076"} {"prompt": "Explain this console code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This console code 8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "console", "type": "code-example_explanation", "context": "8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8077"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8078"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "completion": "This yaml code 8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8079"} {"prompt": "Explain this text code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml", "completion": "This text code 8 - Logging Architecture |", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8080"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n 
env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8081"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8082"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", 
"id": "k8s_00309_code_completion_8083"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code 10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8084"} {"prompt": "Explain this text code:\n\n# HELP some_counter this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 10 - Metrics For Kubernetes System Components | After deprecation", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | After deprecation", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8085"} {"prompt": "Explain this text code:\n\n# HELP some_counter (Deprecated since 1.15.0) this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the Show hidden metrics section.", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. 
To use a hidden metric, please refer to the Show hidden metrics section.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8086"} {"prompt": "Complete this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}", "completion": "cloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "language": "text", "type": "code-example", "context": "10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8087"} {"prompt": "Explain this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "completion": "This text code 10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. 
These metrics can be used to build capacity planning dashboards, assess current", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8088"} {"prompt": "Complete this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total", "completion": "container_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "language": "text", "type": "code-example", "context": "10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8089"} {"prompt": "Explain this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total\ncontainer_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "completion": "This text code 10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. 
The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8090"} {"prompt": "Explain this text code:\n\n--allow-metric-labels <metric_name>,<label_name>='<allowed_value_1>, <allowed_value_2>...', <metric_name>,<label_name>='<allowed_value_1>, <allowed_value_2>...', ...", "completion": "This text code 10 - Metrics For Kubernetes System Components | Here is an example:", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | Here is an example:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8091"} {"prompt": "Explain this none code:\n\n--allow-metric-labels number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'", "completion": "This none code 10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "language": "none", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. 
You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8092"} {"prompt": "Explain this yaml code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This yaml code 10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8093"} {"prompt": "Explain this text code:\n\ncardinality_enforcement_unexpected_categorizations_total", "completion": "This text code 10 - Metrics For Kubernetes System Components | The state of Kubernetes objects in the Kubernetes API can be exposed as metrics. An add-on agent called kube-state-metrics can connect to the Kubernetes API server and expose a HTTP endpoint with metr", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | The state of Kubernetes objects in the Kubernetes API can be exposed as metrics. 
An add-on agent called kube-state-metrics can connect to the Kubernetes API server and expose a HTTP endpoint with metr", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8094"} {"prompt": "Explain this text code:\n\ncount(kube_pod_status_ready{condition=\"false\"}) by (namespace, pod)", "completion": "This text code 11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "language": "text", "type": "code-example_explanation", "context": "11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8095"} {"prompt": "Complete this yaml code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0", "completion": " for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "language": "yaml", "type": "kubernetes-manifest", "context": "11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8096"} {"prompt": "Explain this yaml code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0\n for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "completion": "This yaml code 11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8097"} {"prompt": "Explain this text code:\n\nkube-log-runner -log-file=/tmp/log -redirect-stderr=false", "completion": "This text code 12 - System Logs | An example of the traditional klog native format:", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | An example of the traditional klog native format:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8098"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/pods/metrics-server-v0.3.1-57c75779f-9p8wg: (1.512ms) 200 [pod_nanny/v0.0.0 (linux/amd64) kubernetes/$Format 10.56.1.19:51756]", "completion": "This text code 12 - System Logs | The message string may contain line breaks:", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | The message string may contain line breaks:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8099"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:79] This is a message\nwhich has a line break.", "completion": "This text code 12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. 
When parsing log files, you must also handle unstructured log messages.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8100"} {"prompt": "Explain this text code:\n\n<klog header> \"<message>\" <key1>=\"<value1>\" <key2>=\"<value2>\" ...", "completion": "This text code 12 - System Logs | Example:", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Example:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8101"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 controller_utils.go:116] \"Pod status updated\" pod=\"kube-system/kubedns\" status=\"ready\"", "completion": "This text code 12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8102"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:116] \"Example\" data=\"This is text with a line break\\nand \\\"quotation marks\\\".\" someInt=1 someFloat=0.1 someStruct={StringField: First line,\nsecond line.}", "completion": "This text code 12 - System Logs | Contextual logging builds on top of structured logging. It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | Contextual logging builds on top of structured logging. 
It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8103"} {"prompt": "Complete this console code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . --feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "console", "type": "code-example", "context": "12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8104"} {"prompt": "Explain this console code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This console code 12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "console", "type": "code-example_explanation", "context": "12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8105"} {"prompt": "Complete this console code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "console", "type": "code-example", "context": "12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8106"} {"prompt": "Explain this console code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This console code 12 - System Logs | JSON output does not support many standard klog flags. 
For list of unsupported klog flags, see the Command line tool reference.", "language": "console", "type": "code-example_explanation", "context": "12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8107"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8108"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code 12 - System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8109"} {"prompt": "Explain this shell code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This shell code 12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "shell", "type": "code-example_explanation", "context": "12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8110"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/api/v1/nodes//proxy/logs/?query=/\"", "completion": "This shell code 12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "shell", "type": "code-example_explanation", "context": "12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8111"} {"prompt": "Explain this shell code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This shell code 12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "language": "shell", "type": "code-example_explanation", "context": "12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8112"} {"prompt": "Complete this yaml code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Traces For Kubernetes 
System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8113"} {"prompt": "Explain this yaml code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This yaml code 13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8114"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8115"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code 13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8116"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8117"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: 
KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "completion": "This yaml code 13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8118"} {"prompt": "Explain this shell code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This shell code 15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "language": "shell", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8119"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:", "completion": " - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8120"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8121"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_wait_duration_seconds", "completion": "This text code 15 - API Priority and Fairness | apiserver_flowcontrol_nominal_limit_seats is a gauge vector holding each priority level's nominal concurrency limit, computed from the API server's total concurrency limit and the priority level's con", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | apiserver_flowcontrol_nominal_limit_seats is a gauge vector holding each priority level's nominal concurrency limit, computed from the API server's total concurrency limit and the priority level's con", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8122"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_read_vs_write_current_requests", "completion": "This text code 15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_in_use is a gauge vector holding the instantaneous number of occupied seats, broken down by priority_level and flow_schema.", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_in_use is a gauge vector holding the instantaneous number of occupied seats, broken down by priority_level and flow_schema.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8123"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_priority_level_request_utilization", "completion": "This text code 15 - API Priority and Fairness | apiserver_flowcontrol_priority_level_seat_utilization is a histogram vector of observations, made at the end of each nanosecond, 
of the utilization of a priority level's concurrency limit, broken down", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | apiserver_flowcontrol_priority_level_seat_utilization is a histogram vector of observations, made at the end of each nanosecond, of the utilization of a priority level's concurrency limit, broken down", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8124"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_priority_level_seat_utilization", "completion": "This text code 15 - API Priority and Fairness | apiserver_flowcontrol_request_queue_length_after_enqueue is a histogram vector of queue lengths for the queues, broken down by priority_level and flow_schema, as sampled by the enqueued requests. Each", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | apiserver_flowcontrol_request_queue_length_after_enqueue is a histogram vector of queue lengths for the queues, broken down by priority_level and flow_schema, as sampled by the enqueued requests. Each", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8125"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_queue_length_after_enqueue", "completion": "This text code 15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_limit is the same as apiserver_flowcontrol_nominal_limit_seats. Before the introduction of concurrency borrowing between priority levels, this was always equa", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_limit is the same as apiserver_flowcontrol_nominal_limit_seats. 
Before the introduction of concurrency borrowing between priority levels, this was always equa", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8126"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_dispatch_no_accommodation_total", "completion": "This text code 15 - API Priority and Fairness | apiserver_flowcontrol_epoch_advance_total is a counter vector of the number of attempts to jump a priority level's progress meter backward to avoid numeric overflow, grouped by priority_level and succ", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | apiserver_flowcontrol_epoch_advance_total is a counter vector of the number of attempts to jump a priority level's progress meter backward to avoid numeric overflow, grouped by priority_level and succ", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8127"} {"prompt": "Explain this text code:\n\npriority-and-fairness/list-events-default-service-account.yaml", "completion": "This text code 15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8128"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", 
"language": "yaml", "type": "service-definition", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8129"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8130"} {"prompt": "Explain this text code:\n\n--runtime-config=\"coordination.k8s.io/v1beta1=true\"", "completion": "This text code 17 - Coordinated Leader Election | Provided that you have enabled the CoordinatedLeaderElection feature gate andhave the coordination.k8s.io/v1beta1 API group enabled, compatible control planecomponents automatically use the LeaseCandi", "language": "text", "type": "code-example_explanation", "context": "17 - Coordinated Leader Election | Provided that you have enabled the CoordinatedLeaderElection feature gate andhave the coordination.k8s.io/v1beta1 API group enabled, compatible control planecomponents automatically use the LeaseCandi", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8131"} {"prompt": "Complete this text 
code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8132"} {"prompt": "Explain this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This text code 1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "text", "type": "code-example_explanation", "context": "1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8133"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}{{end}}{{\"\\n\"}}{{end}}'", "completion": "This bash code 2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", 
"context": "2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8134"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This bash code 2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8135"} {"prompt": "Complete this yaml code:\n\ndebug/counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter", "completion": "spec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8136"} {"prompt": "Explain this yaml code:\n\ndebug/counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8137"} 
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8138"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8139"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This bash code 8 - Logging Architecture | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "8 - Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8140"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This text code 8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. 
If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8141"} {"prompt": "Complete this yaml code:\n\ndebug/counter-pod-err.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err", "completion": "spec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8142"} {"prompt": "Explain this yaml code:\n\ndebug/counter-pod-err.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8143"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo 
\"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8144"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8145"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This bash code 8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "bash", "type": "kubectl-command_explanation", "context": "8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8146"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", "completion": "This bash code 8 - Logging Architecture | See the kubectl logs documentation for more details.", "language": "bash", "type": "kubectl-command_explanation", "context": "8 - Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8147"} {"prompt": "Complete this yaml 
code:\n\nadmin/logging/two-files-counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >", "completion": " i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8148"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8149"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8150"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8151"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));", "completion": " sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8152"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n 
mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8153"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8154"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n 
- name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8155"} {"prompt": "Complete this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "text", "type": "code-example", "context": "8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8156"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This text code 8 - Logging Architecture | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8157"} {"prompt": "Complete this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "text", "type": "code-example", "context": "8 - Logging Architecture | If you installed a node-level agent in your cluster, that 
agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8158"} {"prompt": "Explain this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This text code 8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "text", "type": "code-example_explanation", "context": "8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8159"} {"prompt": "Complete this yaml code:\n\nadmin/logging/fluentd-sidecar-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8160"} {"prompt": "Explain this yaml code:\n\nadmin/logging/fluentd-sidecar-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "completion": "This yaml code 8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8161"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "language": "yaml", "type": "kubernetes-manifest", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8162"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n \n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n \n\n \n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n \n\n \n type google_cloud\n ", "completion": "This yaml code 8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8163"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", 
"type": "pod-definition", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8164"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8165"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n 
name: fluentd-config", "language": "yaml", "type": "pod-definition", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8166"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8167"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8168"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code 10 
- Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8169"} {"prompt": "Explain this text code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This text code 10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "text", "type": "code-example_explanation", "context": "10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8170"} {"prompt": "Complete this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0", "completion": " for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "language": "text", "type": "code-example", "context": "11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8171"} {"prompt": "Explain this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0\n for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "completion": "This text code 11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "language": "text", "type": "code-example_explanation", "context": "11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8172"} {"prompt": "Complete this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8173"} {"prompt": "Explain this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8174"} {"prompt": "Complete this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8175"} {"prompt": "Explain this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 12 - System Logs | JSON output does not support many standard klog flags. 
For list of unsupported klog flags, see the Command line tool reference.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8176"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8177"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code 12 - System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8178"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This text code 12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8179"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes//proxy/logs/?query=/\"", "completion": "This bash code 12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "bash", "type": "kubectl-command_explanation", "context": "12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8180"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This text code 12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "language": "text", "type": "code-example_explanation", "context": "12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8181"} {"prompt": "Complete this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "text", "type": "code-example", "context": "13 - Traces For Kubernetes System 
Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8182"} {"prompt": "Explain this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This text code 13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "text", "type": "code-example_explanation", "context": "13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8183"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8184"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code 13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8185"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8186"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: 
KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "completion": "This yaml code 13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8187"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This text code 15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "language": "text", "type": "code-example_explanation", "context": "15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8188"} {"prompt": "Complete this yaml code:\n\npriority-and-fairness/health-for-strangers.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:", "completion": " - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8189"} {"prompt": "Explain this yaml code:\n\npriority-and-fairness/health-for-strangers.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8190"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:", "completion": " - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8191"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8192"} {"prompt": "Complete this yaml code:\n\npriority-and-fairness/list-events-default-service-account.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8193"} {"prompt": "Explain this yaml code:\n\npriority-and-fairness/list-events-default-service-account.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8194"} 
{"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_completion_8195"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/_print/", "id": "k8s_00309_code_explain_8196"} {"prompt": "Complete this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: 
myconfiguration", "language": "yaml", "type": "kubernetes-manifest", "context": "Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/containers/runtime-class/", "id": "k8s_00310_code_completion_8197"} {"prompt": "Explain this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: myconfiguration", "completion": "This yaml code Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/containers/runtime-class/", "id": "k8s_00310_code_explain_8198"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "language": "yaml", "type": "pod-definition", "context": "Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/containers/runtime-class/", "id": "k8s_00310_code_completion_8199"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "completion": "This yaml code Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. 
If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "language": "yaml", "type": "pod-definition_explanation", "context": "Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/containers/runtime-class/", "id": "k8s_00310_code_explain_8200"} {"prompt": "Explain this json code:\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.${HANDLER_NAME}]", "completion": "This json code Runtime Class | See containerd's config documentation for more details:", "language": "json", "type": "api-response_explanation", "context": "Runtime Class | See containerd's config documentation for more details:", "source": "https://kubernetes.io/docs/concepts/containers/runtime-class/", "id": "k8s_00310_code_explain_8201"} {"prompt": "Explain this json code:\n\n[crio.runtime.runtimes.${HANDLER_NAME}]\n runtime_path = \"${PATH_TO_BINARY}\"", "completion": "This json code Runtime Class | See CRI-O's config documentation for more details.", "language": "json", "type": "api-response_explanation", "context": "Runtime Class | See CRI-O's config documentation for more details.", "source": "https://kubernetes.io/docs/concepts/containers/runtime-class/", "id": "k8s_00310_code_explain_8202"}
{"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/filter-result: >-\n {\"node-282x7\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"},\"node-gp9t4\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"}}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8213"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/finalscore-result: >-\n 
{\"node-282x7\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"},\"node-gp9t4\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"}}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8214"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/prefilter-result-status: >-\n {\"InterPodAffinity\":\"success\",\"NodeAffinity\":\"success\",\"NodePorts\":\"success\",\"NodeResourcesFit\":\"success\",\"PodTopologySpread\":\"success\",\"VolumeBinding\":\"success\",\"VolumeRestrictions\":\"success\"}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8215"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/prescore-result: >-\n {\"InterPodAffinity\":\"success\",\"NodeAffinity\":\"success\",\"NodeNumber\":\"success\",\"PodTopologySpread\":\"success\",\"TaintToleration\":\"success\"}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", 
"source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8216"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/score-result: >-\n {\"node-282x7\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"0\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"},\"node-gp9t4\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"0\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"}}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8217"} {"prompt": "Explain this bash code:\n\nkubectl label nodes node.kubernetes.io/exclude-from-external-load-balancers=true", "completion": "This bash code Well-Known Labels, Annotations and Taints | Type: Annotation", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Annotation", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8218"} {"prompt": "Explain this bash code:\n\nkubectl.kubernetes.io/last-applied-configuration: >\n 
{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"default\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app.kubernetes.io/name\":foo}},\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/name\":\"foo\"}},\"spec\":{\"containers\":[{\"image\":\"container-registry.example/foo-bar:1.42\",\"name\":\"foo-bar\",\"ports\":[{\"containerPort\":42}]}]}}}}", "completion": "This bash code Well-Known Labels, Annotations and Taints | Used on: all objects", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: all objects", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8219"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes node-role.kubernetes.io/control-plane:NoSchedule-", "completion": "This bash code Well-Known Labels, Annotations and Taints | Type: Taint", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Taint", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8220"} {"prompt": "Explain this text code:\n\napplyset.kubernetes.io/additional-namespaces: \"namespace1,namespace2\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Objects being used as ApplySet parents.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Objects being used as ApplySet parents.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8221"} {"prompt": "Explain this text code:\n\napplyset.kubernetes.io/contains-group-kinds: \"certificates.cert-manager.io,configmaps,deployments.apps,secrets,services\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Objects being 
used as ApplySet parents.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Objects being used as ApplySet parents.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8222"} {"prompt": "Explain this text code:\n\napplyset.kubernetes.io/contains-group-resources: \"certificates.cert-manager.io,configmaps,deployments.apps,secrets,services\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Objects being used as ApplySet parents.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Objects being used as ApplySet parents.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8223"} {"prompt": "Explain this text code:\n\napplyset.kubernetes.io/id: \"applyset-0eFHV8ySqp7XoShsGvyWFQD3s96yqwHmzc4e0HR1dsY-v1\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Objects being used as ApplySet parents.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Objects being used as ApplySet parents.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8224"} {"prompt": "Explain this text code:\n\n...))>", "completion": "This text code Well-Known Labels, Annotations and Taints | Type: Label", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Label", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8225"} {"prompt": "Explain this text code:\n\napplyset.kubernetes.io/part-of: \"applyset-0eFHV8ySqp7XoShsGvyWFQD3s96yqwHmzc4e0HR1dsY-v1\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: 
All objects.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: All objects.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8226"} {"prompt": "Explain this text code:\n\nresource.kubernetes.io/pod-claim-name: \"my-pod-claim\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: ResourceClaim", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: ResourceClaim", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8227"} {"prompt": "Explain this text code:\n\ncluster-autoscaler.kubernetes.io/safe-to-evict: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8228"} {"prompt": "Explain this text code:\n\ncontainer.apparmor.security.beta.kubernetes.io/my-container: my-custom-profile", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pods", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pods", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8229"} {"prompt": "Explain this text code:\n\ninternal.config.kubernetes.io/path: \"relative/file/path.yaml\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: All objects", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: All objects", "source": 
"https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8230"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/bind-result: '{\"DefaultBinder\":\"success\"}'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8231"} {"prompt": "Explain this yaml code:\n\nkube-scheduler-simulator.sigs.k8s.io/filter-result: >-\n {\"node-282x7\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"},\"node-gp9t4\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"}}", "completion": "This yaml code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8232"} {"prompt": "Explain this yaml 
code:\n\nkube-scheduler-simulator.sigs.k8s.io/finalscore-result: >-\n {\"node-282x7\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"},\"node-gp9t4\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"}}", "completion": "This yaml code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8233"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/permit-result: '{\"CustomPermitPlugin\":\"success\"}'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8234"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/permit-result-timeout: '{\"CustomPermitPlugin\":\"10s\"}'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8235"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/postfilter-result: 
'{\"DefaultPreemption\":\"success\"}'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8236"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/prebind-result: '{\"VolumeBinding\":\"success\"}'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8237"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/prefilter-result: '{\"NodeAffinity\":\"[\\\"node-a\\\"]\"}'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8238"} {"prompt": "Explain this yaml code:\n\nkube-scheduler-simulator.sigs.k8s.io/prefilter-result-status: >-\n {\"InterPodAffinity\":\"success\",\"NodeAffinity\":\"success\",\"NodePorts\":\"success\",\"NodeResourcesFit\":\"success\",\"PodTopologySpread\":\"success\",\"VolumeBinding\":\"success\",\"VolumeRestrictions\":\"success\"}", "completion": "This yaml code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8239"} {"prompt": "Explain this yaml 
code:\n\nkube-scheduler-simulator.sigs.k8s.io/prescore-result: >-\n {\"InterPodAffinity\":\"success\",\"NodeAffinity\":\"success\",\"NodeNumber\":\"success\",\"PodTopologySpread\":\"success\",\"TaintToleration\":\"success\"}", "completion": "This yaml code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8240"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/reserve-result: '{\"VolumeBinding\":\"success\"}'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8241"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/result-history: '[]'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8242"} {"prompt": "Explain this yaml code:\n\nkube-scheduler-simulator.sigs.k8s.io/score-result: >-\n 
{\"node-282x7\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"0\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"},\"node-gp9t4\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"0\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"}}", "completion": "This yaml code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8243"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/selected-node: node-282x7", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8244"} {"prompt": "Explain this text code:\n\nkubernetes.io/limit-ranger: \"LimitRanger plugin set: cpu, memory request for container nginx; cpu, memory limit for container nginx\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8245"} {"prompt": "Explain this text code:\n\nkubernetes.io/config.hash: \"df7cc47f8477b6b1226d7d23a904867b\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", 
"language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8246"} {"prompt": "Explain this text code:\n\nkubernetes.io/config.mirror: \"df7cc47f8477b6b1226d7d23a904867b\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8247"} {"prompt": "Explain this text code:\n\nkubernetes.io/config.seen: \"2023-10-27T04:04:56.011314488Z\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8248"} {"prompt": "Explain this text code:\n\nkube-aggregator.kubernetes.io/automanaged: \"onstart\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: APIService", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: APIService", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8249"} {"prompt": "Explain this text code:\n\nkubernetes.io/hostname: \"ip-172-20-114-199.ec2.internal\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Node", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Node", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8250"} {"prompt": 
"Explain this text code:\n\nkubernetes.io/change-cause: \"kubectl edit --record deployment foo\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: All Objects", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: All Objects", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8251"} {"prompt": "Explain this text code:\n\nkubernetes.io/description: \"Description of K8s object.\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: All Objects", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: All Objects", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8252"} {"prompt": "Explain this text code:\n\nnode.kubernetes.io/exclude-from-external-load-balancers", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Node", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Node", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8253"} {"prompt": "Explain this shell code:\n\nkubectl label nodes node.kubernetes.io/exclude-from-external-load-balancers=true", "completion": "This shell code Well-Known Labels, Annotations and Taints | Type: Annotation", "language": "shell", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Annotation", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8254"} {"prompt": "Explain this text code:\n\ncluster-autoscaler.kubernetes.io/enable-ds-eviction: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", 
"type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8255"} {"prompt": "Explain this text code:\n\npv.kubernetes.io/provisioned-by: \"kubernetes.io/rbd\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: PersistentVolume", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: PersistentVolume", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8256"} {"prompt": "Explain this text code:\n\npv.kubernetes.io/migrated-to: pd.csi.storage.gke.io", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: PersistentVolume, PersistentVolumeClaim", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: PersistentVolume, PersistentVolumeClaim", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8257"} {"prompt": "Explain this text code:\n\nstatefulset.kubernetes.io/pod-name: \"mystatefulset-7\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8258"} {"prompt": "Explain this text code:\n\nscheduler.alpha.kubernetes.io/node-selector: \"name-of-node-selector\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Namespace", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Namespace", "source": 
"https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8259"} {"prompt": "Explain this text code:\n\nvolume.beta.kubernetes.io/storage-provisioner: \"k8s.io/minikube-hostpath\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: PersistentVolumeClaim", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: PersistentVolumeClaim", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8260"} {"prompt": "Explain this text code:\n\nvolume.beta.kubernetes.io/storage-class: \"example-class\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: PersistentVolume, PersistentVolumeClaim", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: PersistentVolume, PersistentVolumeClaim", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8261"} {"prompt": "Explain this text code:\n\nvolumes.kubernetes.io/controller-managed-attach-detach", "completion": "This text code Well-Known Labels, Annotations and Taints | The value of the annotation isn't important.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | The value of the annotation isn't important.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8262"} {"prompt": "Explain this text code:\n\nstorage.alpha.kubernetes.io/migrated-plugins: \"kubernetes.io/cinder\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: CSINode (an extension API)", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: CSINode (an extension API)", "source": 
"https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8263"} {"prompt": "Explain this text code:\n\nstorage.alpha.kubernetes.io/migrated-plugins: \"kubernetes.io/vsphere-volume\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Type: Label", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Label", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8264"} {"prompt": "Explain this text code:\n\nkubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Secret", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Secret", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8265"} {"prompt": "Explain this text code:\n\nkubernetes.io/legacy-token-invalid-since: 2023-10-27", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Secret", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Secret", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8266"} {"prompt": "Explain this text code:\n\nendpoints.kubernetes.io/managed-by: endpoint-controller", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Endpoints", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Endpoints", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8267"} {"prompt": "Explain this text code:\n\nendpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io", "completion": 
"This text code Well-Known Labels, Annotations and Taints | Used on: EndpointSlices", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: EndpointSlices", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8268"} {"prompt": "Explain this text code:\n\nservice.kubernetes.io/service-proxy-name: \"foo-bar\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8269"} {"prompt": "Explain this text code:\n\nexperimental.windows.kubernetes.io/isolation-type: \"hyperv\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8270"} {"prompt": "Explain this text code:\n\ningressclass.kubernetes.io/is-default-class: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: IngressClass", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: IngressClass", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8271"} {"prompt": "Explain this text code:\n\nnginx.ingress.kubernetes.io/configuration-snippet: \" more_set_headers \\\"Request-Id: $req_id\\\";\\nmore_set_headers \\\"Example: 42\\\";\\n\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Ingress", "language": "text", "type": "code-example_explanation", "context": 
"Well-Known Labels, Annotations and Taints | Used on: Ingress", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8272"} {"prompt": "Explain this text code:\n\nstorageclass.kubernetes.io/is-default-class: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: StorageClass", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: StorageClass", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8273"} {"prompt": "Explain this text code:\n\nbatch.kubernetes.io/cronjob-scheduled-timestamp: \"2016-05-19T03:00:00-07:00\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Jobs and Pods controlled by CronJobs", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Jobs and Pods controlled by CronJobs", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8274"} {"prompt": "Explain this bash code:\n\nkubectl.kubernetes.io/default-container: \"front-end-app\"", "completion": "This bash code Well-Known Labels, Annotations and Taints | The value of the annotation is the container name that is default for this Pod. For example, kubectl logs or kubectl exec without -c or --container flag will use this default container.", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | The value of the annotation is the container name that is default for this Pod. 
For example, kubectl logs or kubectl exec without -c or --container flag will use this default container.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8275"} {"prompt": "Explain this bash code:\n\nkubectl.kubernetes.io/default-logs-container: \"front-end-app\"", "completion": "This bash code Well-Known Labels, Annotations and Taints | The value of the annotation is the container name that is the default logging container for this Pod. For example, kubectl logs without -c or --container flag will use this default container.", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | The value of the annotation is the container name that is the default logging container for this Pod. For example, kubectl logs without -c or --container flag will use this default container.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8276"} {"prompt": "Explain this yaml code:\n\nkubectl.kubernetes.io/last-applied-configuration: >\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"default\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app.kubernetes.io/name\":\"foo\"}},\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/name\":\"foo\"}},\"spec\":{\"containers\":[{\"image\":\"container-registry.example/foo-bar:1.42\",\"name\":\"foo-bar\",\"ports\":[{\"containerPort\":42}]}]}}}}", "completion": "This yaml code Well-Known Labels, Annotations and Taints | Used on: all objects", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: all objects", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8277"} {"prompt": "Explain this bash code:\n\nkubectl.kubernetes.io/restartedAt: \"2024-06-21T17:27:41Z\"", 
"completion": "This bash code Well-Known Labels, Annotations and Taints | Used on: Deployment, ReplicaSet, StatefulSet, DaemonSet, Pod", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Deployment, ReplicaSet, StatefulSet, DaemonSet, Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8278"} {"prompt": "Explain this text code:\n\nendpoints.kubernetes.io/last-change-trigger-time: \"2023-07-20T04:45:21Z\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Endpoints", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Endpoints", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8279"} {"prompt": "Explain this text code:\n\ncontrol-plane.alpha.kubernetes.io/leader={\"holderIdentity\":\"controller-0\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2023-01-19T13:12:57Z\",\"renewTime\":\"2023-01-19T13:13:54Z\",\"leaderTransitions\":1}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Endpoints", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Endpoints", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8280"} {"prompt": "Explain this text code:\n\nscheduler.alpha.kubernetes.io/defaultTolerations: '[{\"operator\": \"Equal\", \"value\": \"value1\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Namespace", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Namespace", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": 
"k8s_00314_code_explain_8281"} {"prompt": "Explain this text code:\n\nscheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Namespace", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Namespace", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8282"} {"prompt": "Explain this text code:\n\nnode.kubernetes.io/network-unavailable: \"NoSchedule\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Node", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Node", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8283"} {"prompt": "Explain this text code:\n\nnode.cloudprovider.kubernetes.io/uninitialized: \"NoSchedule\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Node", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Node", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8284"} {"prompt": "Explain this text code:\n\nnode.cloudprovider.kubernetes.io/shutdown: \"NoSchedule\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Node", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Node", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8285"} {"prompt": "Explain this text code:\n\nfeature.node.kubernetes.io/network-sriov.capable: \"true\"", "completion": "This text code Well-Known Labels, 
Annotations and Taints | Used on: Node", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Node", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8286"} {"prompt": "Explain this text code:\n\nnfd.node.kubernetes.io/feature-labels: \"cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-hardware_multithreading,kernel-version.full\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Nodes", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Nodes", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8287"} {"prompt": "Explain this text code:\n\nnfd.node.kubernetes.io/extended-resources: \"accelerator.acme.example/q500,example.com/coprocessor-fx5\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Nodes", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Nodes", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8288"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: \"5\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8289"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-access-log-enabled: \"false\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": 
"Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8290"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: example", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8291"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: \"/example\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8292"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: \"Environment=demo,Project=example\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8293"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Optional", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": 
"k8s_00314_code_explain_8294"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-attributes: \"deletion_protection.enabled=true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8295"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8296"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: \"false\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8297"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: \"60\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8298"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4", "completion": 
"This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8299"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: \"60\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8300"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8301"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-eip-allocations: \"eipalloc-01bcdef23bcdef456,eipalloc-def1234abc4567890\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8302"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-extra-security-groups: \"sg-12abcd3456,sg-34dcba6543\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": 
"text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8303"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: \"3\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8304"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: \"30\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8305"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthcheck", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8306"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-healthcheck-port: \"24\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": 
"https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8307"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8308"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: \"3\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8309"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: \"3\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8310"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8311"} {"prompt": "Explain this text 
code:\n\nservice.beta.kubernetes.io/aws-load-balancer-scheme", "completion": "This text code Well-Known Labels, Annotations and Taints | Example: service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"true\"", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Example: service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"true\"", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8312"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8313"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-name: my-elb", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8314"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-type: \"external\"", "completion": "This text code Well-Known Labels, Annotations and Taints | See annotations in the AWS load balancer controller documentation.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | See annotations in the AWS load balancer controller documentation.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": 
"k8s_00314_code_explain_8315"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-nlb-target-type: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8316"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: \"198.51.100.0,198.51.100.64\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8317"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-proxy-protocol: \"*\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8318"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-scheme: internal", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8319"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-security-groups: \"sg-53fae93f,sg-8725gr62r\"", "completion": "This 
text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8320"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-security-groups", "completion": "This text code Well-Known Labels, Annotations and Taints | Example: service.beta.kubernetes.io/load-balancer-source-ranges: \"192.0.2.0/25\"", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Example: service.beta.kubernetes.io/load-balancer-source-ranges: \"192.0.2.0/25\"", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8321"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/load-balancer-source-ranges: \"192.0.2.0/25\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8322"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-ssl-cert: \"arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8323"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: 
ELBSecurityPolicy-TLS-1-2-2017-01", "completion": "This text code Well-Known Labels, Annotations and Taints | The official integration with AWS elastic load balancing configures TLS for a Service of type: LoadBalancer based on this annotation. The value of the annotation is the name of an AWS policy for negot", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | The official integration with AWS elastic load balancing configures TLS for a Service of type: LoadBalancer based on this annotation. The value of the annotation is the name of an AWS policy for negot", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8324"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-ssl-ports: \"*\"", "completion": "This text code Well-Known Labels, Annotations and Taints | The official integration with AWS elastic load balancing configures TLS for a Service of type: LoadBalancer based on this annotation. The value of the annotation is either \"*\", which means that all th", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | The official integration with AWS elastic load balancing configures TLS for a Service of type: LoadBalancer based on this annotation. The value of the annotation is either \"*\", which means that all th", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8325"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-subnets: \"private-a,private-b\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Kubernetes' official integration with AWS uses this annotation to configure a load balancer and determine in which AWS availability zones to deploy the managed load balancing service. 
The value is eit", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Kubernetes' official integration with AWS uses this annotation to configure a load balancer and determine in which AWS availability zones to deploy the managed load balancing service. The value is eit", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8326"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-target-group-attributes: \"stickiness.enabled=true,stickiness.type=source_ip\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8327"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-target-node-labels: \"kubernetes.io/os=Linux,topology.kubernetes.io/region=us-east-2\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Kubernetes' official integration with AWS uses this annotation to determine which nodes in your cluster should be considered as valid targets for the load balancer.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Kubernetes' official integration with AWS uses this annotation to determine which nodes in your cluster should be considered as valid targets for the load balancer.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8328"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/aws-load-balancer-type: external", "completion": "This text code Well-Known Labels, Annotations and Taints | Kubernetes' official integrations with AWS use this annotation to 
determine whether the AWS cloud provider integration should manage a Service of type: LoadBalancer.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Kubernetes' official integrations with AWS use this annotation to determine whether the AWS cloud provider integration should manage a Service of type: LoadBalancer.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8329"} {"prompt": "Explain this text code:\n\nservice.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset: \"false\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Service", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Service", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8330"} {"prompt": "Explain this text code:\n\nrbac.authorization.kubernetes.io/autoupdate: \"false\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: ClusterRole, ClusterRoleBinding, Role, RoleBinding", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: ClusterRole, ClusterRoleBinding, Role, RoleBinding", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8331"} {"prompt": "Explain this text code:\n\nsnapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: VolumeSnapshotContent", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: VolumeSnapshotContent", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8332"} {"prompt": "Explain this text 
code:\n\njobset.sigs.k8s.io/replicatedjob-name: \"my-replicatedjob\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Jobs, Pods", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Jobs, Pods", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8333"} {"prompt": "Explain this text code:\n\njobset.sigs.k8s.io/job-key: \"0f1e93893c4cb372080804ddb9153093cb0d20cefdd37f653e739c232d363feb\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Jobs, Pods", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Jobs, Pods", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8334"} {"prompt": "Explain this text code:\n\nalpha.jobset.sigs.k8s.io/exclusive-topology: \"zone\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: JobSets, Jobs", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: JobSets, Jobs", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8335"} {"prompt": "Explain this text code:\n\nalpha.jobset.sigs.k8s.io/namespaced-job: \"default_myjobset-replicatedjob-0\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Nodes", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Nodes", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8336"} {"prompt": "Explain this text code:\n\njobset.sigs.k8s.io/coordinator: \"myjobset-workers-0-0.headless-svc\"", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Jobs, 
Pods", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Jobs, Pods", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8337"} {"prompt": "Explain this text code:\n\nvalidation.policy.admission.k8s.io/validation_failure", "completion": "This text code Well-Known Labels, Annotations and Taints | See more details on Audit Annotations.", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | See more details on Audit Annotations.", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8338"} {"prompt": "Explain this text code:\n\nkubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Node", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Node", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8339"} {"prompt": "Explain this text code:\n\nkubeadm.kubernetes.io/etcd.advertise-client-urls: https://172.17.0.18:2379", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8340"} {"prompt": "Explain this text code:\n\nkubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: https://172.17.0.18:6443", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": 
"https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8341"} {"prompt": "Explain this text code:\n\nkubeadm.kubernetes.io/component-config.hash: 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: ConfigMap", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: ConfigMap", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8342"} {"prompt": "Explain this shell code:\n\nkubectl taint nodes node-role.kubernetes.io/control-plane:NoSchedule-", "completion": "This shell code Well-Known Labels, Annotations and Taints | Type: Taint", "language": "shell", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Taint", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8343"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/filter-result: >-\n 
{\"node-282x7\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"},\"node-gp9t4\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"}}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8344"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/finalscore-result: >-\n {\"node-282x7\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"},\"node-gp9t4\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"}}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", 
"type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8345"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/prefilter-result-status: >-\n {\"InterPodAffinity\":\"success\",\"NodeAffinity\":\"success\",\"NodePorts\":\"success\",\"NodeResourcesFit\":\"success\",\"PodTopologySpread\":\"success\",\"VolumeBinding\":\"success\",\"VolumeRestrictions\":\"success\"}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8346"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/prescore-result: >-\n {\"InterPodAffinity\":\"success\",\"NodeAffinity\":\"success\",\"NodeNumber\":\"success\",\"PodTopologySpread\":\"success\",\"TaintToleration\":\"success\"}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8347"} {"prompt": "Explain this text code:\n\nkube-scheduler-simulator.sigs.k8s.io/score-result: >-\n 
{\"node-282x7\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"0\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"},\"node-gp9t4\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"0\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"}}", "completion": "This text code Well-Known Labels, Annotations and Taints | Used on: Pod", "language": "text", "type": "code-example_explanation", "context": "Well-Known Labels, Annotations and Taints | Used on: Pod", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8348"} {"prompt": "Explain this bash code:\n\nkubectl label nodes node.kubernetes.io/exclude-from-external-load-balancers=true", "completion": "This bash code Well-Known Labels, Annotations and Taints | Type: Label", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Label", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8349"} {"prompt": "Explain this bash code:\n\nkubectl.kubernetes.io/last-applied-configuration: >\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"example\",\"namespace\":\"default\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app.kubernetes.io/name\":\"foo\"}},\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/name\":\"foo\"}},\"spec\":{\"containers\":[{\"image\":\"container-registry.example/foo-bar:1.42\",\"name\":\"foo-bar\",\"ports\":[{\"containerPort\":42}]}]}}}}", "completion": "This bash code Well-Known Labels, Annotations and Taints | Used on: all objects", "language": "bash", "type": "kubectl-command_explanation", 
"context": "Well-Known Labels, Annotations and Taints | Used on: all objects", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8350"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes node-role.kubernetes.io/control-plane:NoSchedule-", "completion": "This bash code Well-Known Labels, Annotations and Taints | Type: Taint", "language": "bash", "type": "kubectl-command_explanation", "context": "Well-Known Labels, Annotations and Taints | Type: Taint", "source": "https://kubernetes.io/docs/reference/labels-annotations-taints/", "id": "k8s_00314_code_explain_8351"} {"prompt": "Explain this bash code:\n\nkubectl create service externalname NAME --external-name external.name [--dry-run=server|client|none]", "completion": "This bash code kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/", "id": "k8s_00315_code_explain_8352"} {"prompt": "Explain this text code:\n\n# Create a new ExternalName service named my-ns\n kubectl create service externalname my-ns --external-name bar.com", "completion": "This text code kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/", "id": "k8s_00315_code_explain_8353"} {"prompt": "Explain this bash code:\n\nkubectl create service externalname NAME --external-name external.name [--dry-run=server|client|none]", "completion": "This bash code kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/", "id": "k8s_00315_code_explain_8354"} {"prompt": "Explain this text code:\n\n# Create a new ExternalName service named my-ns\n kubectl create service externalname my-ns --external-name bar.com", "completion": "This text code kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create service externalname | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_externalname/", "id": "k8s_00315_code_explain_8355"} {"prompt": "Explain this text code:\n\n# replace 1.34.0 with your desired version\ncurl.exe -Lo <path-to-kubeadm.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubeadm.exe\"", "completion": "This text code Upgrading Windows nodes | From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8356"} {"prompt": "Explain this text code:\n\n# replace with the name of your node you are draining\nkubectl drain <node-to-drain> --ignore-daemonsets", "completion": "This text code Upgrading Windows nodes | You should see output similar to this:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | You should see output similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8357"} {"prompt": "Explain this text code:\n\nnode/ip-172-31-85-18 cordoned\nnode/ip-172-31-85-18 drained", "completion": "This text code Upgrading Windows nodes | From the Windows node, call the following command to sync new kubelet configuration:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From the Windows node, call the following command to sync new kubelet configuration:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8358"} 
{"prompt": "Explain this text code:\n\nstop-service kubelet\ncurl.exe -Lo <path-to-kubelet.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubelet.exe\"\nrestart-service kubelet", "completion": "This text code Upgrading Windows nodes | From the Windows node, upgrade and restart the kube-proxy.", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From the Windows node, upgrade and restart the kube-proxy.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8359"} {"prompt": "Explain this text code:\n\nstop-service kube-proxy\ncurl.exe -Lo <path-to-kube-proxy.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kube-proxy.exe\"\nrestart-service kube-proxy", "completion": "This text code Upgrading Windows nodes | From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8360"} {"prompt": "Explain this text code:\n\n# replace with the name of your node\nkubectl uncordon <node-to-uncordon>", "completion": "This text code Upgrading Windows nodes |", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8361"} {"prompt": "Explain this powershell code:\n\n# replace 1.34.0 with your desired version\ncurl.exe -Lo <path-to-kubeadm.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubeadm.exe\"", "completion": "This powershell code Upgrading Windows nodes | From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:", "language": 
"powershell", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8362"} {"prompt": "Explain this shell code:\n\n# replace with the name of your node you are draining\nkubectl drain <node-to-drain> --ignore-daemonsets", "completion": "This shell code Upgrading Windows nodes | You should see output similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Windows nodes | You should see output similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8363"} {"prompt": "Explain this text code:\n\nnode/ip-172-31-85-18 cordoned\nnode/ip-172-31-85-18 drained", "completion": "This text code Upgrading Windows nodes | From the Windows node, call the following command to sync new kubelet configuration:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From the Windows node, call the following command to sync new kubelet configuration:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8364"} {"prompt": "Explain this powershell code:\n\nstop-service kubelet\ncurl.exe -Lo <path-to-kubelet.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubelet.exe\"\nrestart-service kubelet", "completion": "This powershell code Upgrading Windows nodes | From the Windows node, upgrade and restart the kube-proxy.", "language": "powershell", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From the Windows node, upgrade and restart the kube-proxy.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": 
"k8s_00316_code_explain_8365"} {"prompt": "Explain this powershell code:\n\nstop-service kube-proxy\ncurl.exe -Lo <path-to-kube-proxy.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kube-proxy.exe\"\nrestart-service kube-proxy", "completion": "This powershell code Upgrading Windows nodes | From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:", "language": "powershell", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8366"} {"prompt": "Explain this shell code:\n\n# replace with the name of your node\nkubectl uncordon <node-to-uncordon>", "completion": "This shell code Upgrading Windows nodes |", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Windows nodes |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8367"} {"prompt": "Explain this text code:\n\n# replace 1.34.0 with your desired version\ncurl.exe -Lo <path-to-kubeadm.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubeadm.exe\"", "completion": "This text code Upgrading Windows nodes | From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8368"} {"prompt": "Explain this text code:\n\n# replace with the name of your node you are draining\nkubectl drain <node-to-drain> --ignore-daemonsets", "completion": "This text code Upgrading 
Windows nodes | You should see output similar to this:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | You should see output similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8369"} {"prompt": "Explain this text code:\n\nstop-service kubelet\ncurl.exe -Lo <path-to-kubelet.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubelet.exe\"\nrestart-service kubelet", "completion": "This text code Upgrading Windows nodes | From the Windows node, upgrade and restart the kube-proxy.", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From the Windows node, upgrade and restart the kube-proxy.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8370"} {"prompt": "Explain this text code:\n\nstop-service kube-proxy\ncurl.exe -Lo <path-to-kube-proxy.exe> \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kube-proxy.exe\"\nrestart-service kube-proxy", "completion": "This text code Upgrading Windows nodes | From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes | From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8371"} {"prompt": "Explain this text code:\n\n# replace with the name of your node\nkubectl uncordon <node-to-uncordon>", "completion": "This text code Upgrading Windows nodes |", "language": "text", "type": "code-example_explanation", "context": "Upgrading Windows nodes |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/", "id": "k8s_00316_code_explain_8372"} {"prompt": "Complete this text 
code:\n\nbase64(\n-----BEGIN CERTIFICATE-----", "completion": "...\n-----END CERTIFICATE-----\n)", "language": "text", "type": "code-example", "context": "CertificateSigningRequest | conditions ([]CertificateSigningRequestCondition)", "source": "https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/", "id": "k8s_00319_code_completion_8373"} {"prompt": "Explain this text code:\n\nbase64(\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n)", "completion": "This text code CertificateSigningRequest | conditions ([]CertificateSigningRequestCondition)", "language": "text", "type": "code-example_explanation", "context": "CertificateSigningRequest | conditions ([]CertificateSigningRequestCondition)", "source": "https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/", "id": "k8s_00319_code_explain_8374"} {"prompt": "Complete this text code:\n\nbase64(\n-----BEGIN CERTIFICATE-----", "completion": "...\n-----END CERTIFICATE-----\n)", "language": "text", "type": "code-example", "context": "CertificateSigningRequest | conditions ([]CertificateSigningRequestCondition)", "source": "https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/", "id": "k8s_00319_code_completion_8375"} {"prompt": "Explain this text code:\n\nbase64(\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n)", "completion": "This text code CertificateSigningRequest | conditions ([]CertificateSigningRequestCondition)", "language": "text", "type": "code-example_explanation", "context": "CertificateSigningRequest | conditions ([]CertificateSigningRequestCondition)", "source": "https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/", "id": "k8s_00319_code_explain_8376"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: two-containers\nspec:\n\n 
restartPolicy: Never\n\n volumes:\n - name: shared-data\n emptyDir: {}\n\n containers:", "completion": "\n - name: nginx-container\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n\n - name: debian-container\n image: debian\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n command: [\"/bin/sh\"]\n args: [\"-c\", \"echo Hello from the debian container > /pod-data/index.html\"]", "language": "yaml", "type": "pod-definition", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | In the configuration file, you can see that the Pod has a Volume named shared-data.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_completion_8377"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: two-containers\nspec:\n\n restartPolicy: Never\n\n volumes:\n - name: shared-data\n emptyDir: {}\n\n containers:\n\n - name: nginx-container\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n\n - name: debian-container\n image: debian\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n command: [\"/bin/sh\"]\n args: [\"-c\", \"echo Hello from the debian container > /pod-data/index.html\"]", "completion": "This yaml code Communicate Between Containers in the Same Pod Using a Shared Volume | In the configuration file, you can see that the Pod has a Volume named shared-data.", "language": "yaml", "type": "pod-definition_explanation", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | In the configuration file, you can see that the Pod has a Volume named shared-data.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8378"} {"prompt": "Explain this text code:\n\necho Hello from the debian container > 
/pod-data/index.html", "completion": "This text code Communicate Between Containers in the Same Pod Using a Shared Volume | Notice that the second container writes the index.html file in the root directory of the nginx server.", "language": "text", "type": "code-example_explanation", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | Notice that the second container writes the index.html file in the root directory of the nginx server.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8379"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/two-container-pod.yaml", "completion": "This bash code Communicate Between Containers in the Same Pod Using a Shared Volume | View information about the Pod and the Containers:", "language": "bash", "type": "kubectl-command_explanation", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | View information about the Pod and the Containers:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8380"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n ...\n name: two-containers\n namespace: default\n ...\nspec:\n ...\n containerStatuses:\n\n - containerID: docker://c1d8abd1 ...\n image: debian\n ...", "completion": " lastState:\n terminated:\n ...\n name: debian-container\n ...\n\n - containerID: docker://96c1ff2c5bb ...\n image: nginx\n ...\n name: nginx-container\n ...\n state:\n running:\n ...", "language": "yaml", "type": "pod-definition", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | You can see that the debian Container has terminated, and the nginx Container is still running.", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_completion_8381"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n ...\n name: two-containers\n namespace: default\n ...\nspec:\n ...\n containerStatuses:\n\n - containerID: docker://c1d8abd1 ...\n image: debian\n ...\n lastState:\n terminated:\n ...\n name: debian-container\n ...\n\n - containerID: docker://96c1ff2c5bb ...\n image: nginx\n ...\n name: nginx-container\n ...\n state:\n running:\n ...", "completion": "This yaml code Communicate Between Containers in the Same Pod Using a Shared Volume | You can see that the debian Container has terminated, and the nginx Container is still running.", "language": "yaml", "type": "pod-definition_explanation", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | You can see that the debian Container has terminated, and the nginx Container is still running.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8382"} {"prompt": "Explain this bash code:\n\nkubectl exec -it two-containers -c nginx-container -- /bin/bash", "completion": "This bash code Communicate Between Containers in the Same Pod Using a Shared Volume | In your shell, verify that nginx is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | In your shell, verify that nginx is running:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8383"} {"prompt": "Explain this text code:\n\nroot@two-containers:/# apt-get update\nroot@two-containers:/# apt-get install curl procps\nroot@two-containers:/# ps aux", "completion": "This text code Communicate Between Containers in the Same Pod 
Using a Shared Volume | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8384"} {"prompt": "Explain this text code:\n\nUSER PID ... STAT START TIME COMMAND\nroot 1 ... Ss 21:12 0:00 nginx: master process nginx -g daemon off;", "completion": "This text code Communicate Between Containers in the Same Pod Using a Shared Volume | Recall that the debian Container created the index.html file in the nginx root directory. Use curl to send a GET request to the nginx server:", "language": "text", "type": "code-example_explanation", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | Recall that the debian Container created the index.html file in the nginx root directory. 
Use curl to send a GET request to the nginx server:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8385"} {"prompt": "Complete this yaml code:\n\npods/two-container-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: two-containers\nspec:\n\n restartPolicy: Never\n\n volumes:\n - name: shared-data\n emptyDir: {}\n\n containers:", "completion": "\n - name: nginx-container\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n\n - name: debian-container\n image: debian\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n command: [\"/bin/sh\"]\n args: [\"-c\", \"echo Hello from the debian container > /pod-data/index.html\"]", "language": "yaml", "type": "pod-definition", "context": "Communicate Between Containers in the Same Pod Using a Shared Volume | In the configuration file, you can see that the Pod has a Volume named shared-data.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_completion_8395"} {"prompt": "Explain this yaml code:\n\npods/two-container-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: two-containers\nspec:\n\n restartPolicy: Never\n\n volumes:\n - name: shared-data\n emptyDir: {}\n\n containers:\n\n - name: nginx-container\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n\n - name: debian-container\n image: debian\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n command: [\"/bin/sh\"]\n args: [\"-c\", \"echo Hello from the debian container > /pod-data/index.html\"]", "completion": "This yaml code Communicate Between Containers in the Same Pod Using a Shared Volume | In the configuration file, you can see that the Pod has a Volume named shared-data.", "language": "yaml", "type": "pod-definition_explanation", "context": "Communicate Between 
Containers in the Same Pod Using a Shared Volume | In the configuration file, you can see that the Pod has a Volume named shared-data.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/", "id": "k8s_00320_code_explain_8396"} {"prompt": "Explain this bash code:\n\nkubectl get [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file|custom-columns|custom-columns-file|wide] (TYPE[.VERSION][.GROUP] [NAME | -l label] | TYPE[.VERSION][.GROUP]/NAME ...) [flags]", "completion": "This bash code kubectl get | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl get | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/", "id": "k8s_00321_code_explain_8399"} {"prompt": "Complete this text code:\n\n# List all pods in ps output format\n kubectl get pods\n \n # List all pods in ps output format with more information (such as node name)\n kubectl get pods -o wide\n \n # List a single replication controller with specified NAME in ps output format\n kubectl get replicationcontroller web\n \n # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group\n kubectl get deployments.v1.apps -o json\n \n # List a single pod in JSON output format\n kubectl get -o json pod web-pod-13je7\n \n # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format\n kubectl get -f pod.yaml -o json\n \n # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml\n kubectl get -k dir/", "completion": " \n # Return only the phase value of the specified pod\n kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}\n \n # List resource information in custom columns\n kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image\n \n # List all replication controllers and services together in ps output format\n kubectl get rc,services\n \n # List one or more resources by their type and names\n kubectl get rc/web service/frontend pods/web-pod-13je7\n \n # List the 'status' subresource for a single pod\n kubectl get pod web-pod-13je7 --subresource status\n \n # List all deployments in namespace 'backend'\n kubectl get deployments.apps --namespace backend\n \n # List all pods existing in all namespaces\n kubectl get pods --all-namespaces", "language": "text", "type": "code-example", "context": "kubectl get | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/", "id": "k8s_00321_code_completion_8400"} {"prompt": "Explain this text code:\n\n# List all pods in ps output format\n kubectl get pods\n \n # List all pods in ps output format with more information (such as node name)\n kubectl get pods -o wide\n \n # List a single replication controller with specified NAME in ps output format\n kubectl get replicationcontroller web\n \n # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group\n kubectl get deployments.v1.apps -o json\n \n # List a single pod in JSON output format\n kubectl get -o json pod web-pod-13je7\n \n # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format\n kubectl get -f pod.yaml -o json\n \n # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml\n kubectl get -k dir/\n \n # Return only the phase value of the specified pod\n kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}\n \n # List resource information in custom columns\n kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image\n \n # List all replication controllers and services together in ps output format\n kubectl get rc,services\n \n # List one or more resources by their type and names\n kubectl get rc/web service/frontend pods/web-pod-13je7\n \n # List the 'status' subresource for a single pod\n kubectl get pod web-pod-13je7 --subresource status\n \n # List all deployments in namespace 'backend'\n kubectl get deployments.apps --namespace backend\n \n # List all pods existing in all namespaces\n kubectl get pods --all-namespaces", "completion": "This text code kubectl get | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "text", "type": "code-example_explanation", "context": "kubectl get | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/", "id": "k8s_00321_code_explain_8401"} {"prompt": "Explain this bash code:\n\nkubectl get [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file|custom-columns|custom-columns-file|wide] (TYPE[.VERSION][.GROUP] [NAME | -l label] | TYPE[.VERSION][.GROUP]/NAME ...) [flags]", "completion": "This bash code kubectl get | If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl get | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/", "id": "k8s_00321_code_explain_8402"} {"prompt": "Explain this text code:\n\npod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)", "completion": "This text code kubectl set image | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set image | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/", "id": "k8s_00323_code_explain_8405"} {"prompt": "Explain this bash code:\n\nkubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N", "completion": "This bash code kubectl set image | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set image | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/", "id": "k8s_00323_code_explain_8406"} {"prompt": "Complete this text code:\n\n# Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'\n kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1\n \n # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'\n kubectl set image deployments,rc nginx=nginx:1.9.1 --all", "completion": " \n # Update image of all containers of daemonset abc to 'nginx:1.9.1'\n kubectl set image daemonset abc *=nginx:1.9.1\n \n # Print result (in yaml format) of updating nginx container image from local file, without hitting the server\n kubectl set image -f 
path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "language": "text", "type": "code-example", "context": "kubectl set image | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/", "id": "k8s_00323_code_completion_8407"} {"prompt": "Explain this text code:\n\n# Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'\n kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1\n \n # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'\n kubectl set image deployments,rc nginx=nginx:1.9.1 --all\n \n # Update image of all containers of daemonset abc to 'nginx:1.9.1'\n kubectl set image daemonset abc *=nginx:1.9.1\n \n # Print result (in yaml format) of updating nginx container image from local file, without hitting the server\n kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "completion": "This text code kubectl set image | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set image | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/", "id": "k8s_00323_code_explain_8408"}
{"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:", "completion": " - maxSkew: \n minDomains: # optional\n topologyKey: \n whenUnsatisfiable: \n labelSelector: \n matchLabelKeys: # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8413"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:\n - maxSkew: \n minDomains: # optional\n topologyKey: \n whenUnsatisfiable: \n labelSelector: \n matchLabelKeys: # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # 
optional; beta since v1.26\n ### other Pod fields go here", "completion": "This yaml code Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8414"} {"prompt": "Complete this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "language": "text", "type": "code-example", "context": "Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8415"} {"prompt": "Explain this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "completion": "This text code Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. 
You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8416"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nnode1 Ready 4m26s v1.16.0 node=node1,zone=zoneA", "completion": "node2 Ready 3m58s v1.16.0 node=node2,zone=zoneA\nnode3 Ready 3m17s v1.16.0 node=node3,zone=zoneB\nnode4 Ready 2m43s v1.16.0 node=node4,zone=zoneB", "language": "text", "type": "code-example", "context": "Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8417"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nnode1 Ready 4m26s v1.16.0 node=node1,zone=zoneA\nnode2 Ready 3m58s v1.16.0 node=node2,zone=zoneA\nnode3 Ready 3m17s v1.16.0 node=node3,zone=zoneB\nnode4 Ready 2m43s v1.16.0 node=node4,zone=zoneB", "completion": "This text code Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8418"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes 
that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8419"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). 
The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8420"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8421"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, 
the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8422"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:", "completion": " foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8423"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8424"} {"prompt": "Complete this yaml code:\n\napiVersion: 
kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:", "completion": " - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8425"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "completion": "This yaml code Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8426"} {"prompt": "Complete this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"", "completion": " whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n 
whenUnsatisfiable: ScheduleAnyway", "language": "text", "type": "code-example", "context": "Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8427"} {"prompt": "Explain this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"\n whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "completion": "This text code Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8428"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler", "completion": " pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8429"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: 
PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "completion": "This yaml code Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8430"}
{"prompt": "Complete this yaml code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8433"} {"prompt": "Explain this yaml code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. 
You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8434"} {"prompt": "Explain this text code:\n\npods/topology-spread-constraints/one-constraint.yaml", "completion": "This text code Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that
are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8437"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8438"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). 
The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8439"} {"prompt": "Explain this text code:\n\npods/topology-spread-constraints/two-constraints.yaml", "completion": "This text code Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8440"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node 
node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8441"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8442"} {"prompt": "Explain this text code:\n\npods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml", "completion": "This text code Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8443"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: 
zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:", "completion": " foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8444"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8445"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:", "completion": " - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology 
Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8446"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "completion": "This yaml code Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8447"} {"prompt": "Complete this yaml code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"", "completion": " whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8448"} {"prompt": "Explain this 
yaml code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"\n whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "completion": "This yaml code Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8449"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler", "completion": " pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8450"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "completion": "This yaml code Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread 
Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8451"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:", "completion": " - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8452"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:\n - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "completion": "This yaml code Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread 
Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8453"} {"prompt": "Complete this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "language": "text", "type": "code-example", "context": "Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8454"} {"prompt": "Explain this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "completion": "This text code Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. 
You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8455"} {"prompt": "Complete this yaml code:\n\npods/topology-spread-constraints/one-constraint.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8456"} {"prompt": "Explain this yaml code:\n\npods/topology-spread-constraints/one-constraint.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). 
The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8457"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8458"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: (nodes that don't have a zone label are skipped). 
The field whenUns", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8459"} {"prompt": "Complete this yaml code:\n\npods/topology-spread-constraints/two-constraints.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8460"} {"prompt": "Explain this yaml code:\n\npods/topology-spread-constraints/two-constraints.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | In this case, to match the first 
constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8461"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8462"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread 
Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8463"} {"prompt": "Complete this yaml code:\n\npods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:", "completion": " matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8464"} {"prompt": "Explain this yaml code:\n\npods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod 
Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8465"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:", "completion": " foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8466"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8467"} {"prompt": "Complete this yaml code:\n\napiVersion: 
kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:", "completion": " - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8468"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "completion": "This yaml code Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8469"} {"prompt": "Complete this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"", "completion": " whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n 
whenUnsatisfiable: ScheduleAnyway", "language": "text", "type": "code-example", "context": "Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8470"} {"prompt": "Explain this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"\n whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "completion": "This text code Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "language": "text", "type": "code-example_explanation", "context": "Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8471"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler", "completion": " pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_completion_8472"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: 
PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "completion": "This yaml code Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/", "id": "k8s_00325_code_explain_8473"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "DeviceClass | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/device-class-v1/", "id": "k8s_00327_code_completion_8474"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code DeviceClass | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "DeviceClass | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. 
you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/device-class-v1/", "id": "k8s_00327_code_explain_8475"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code DeviceClass | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "DeviceClass | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/device-class-v1/", "id": "k8s_00327_code_explain_8476"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "DeviceClass | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/device-class-v1/", "id": "k8s_00327_code_completion_8477"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code DeviceClass | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. 
you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "DeviceClass | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/device-class-v1/", "id": "k8s_00327_code_explain_8478"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code DeviceClass | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "DeviceClass | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/extend-resources/device-class-v1/", "id": "k8s_00327_code_explain_8479"} {"prompt": "Complete this text code:\n\n# add English label\n/language en\n\n# add squash label to PR if more than one commit", "completion": "/label tide/merge-method-squash\n\n# retitle a PR via Prow (such as a work-in-progress [WIP] or better detail of PR)\n/retitle [WIP] <TITLE>", "language": "text", "type": "code-example", "context": "PR wranglers | Reviews and approvals are one tool to keep our PR queue short and current. 
Another tool is closure.", "source": "https://kubernetes.io/docs/contribute/participate/pr-wranglers/", "id": "k8s_00328_code_completion_8480"} {"prompt": "Explain this text code:\n\n# add English label\n/language en\n\n# add squash label to PR if more than one commit\n/label tide/merge-method-squash\n\n# retitle a PR via Prow (such as a work-in-progress [WIP] or better detail of PR)\n/retitle [WIP] <TITLE>", "completion": "This text code PR wranglers | Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure.", "language": "text", "type": "code-example_explanation", "context": "PR wranglers | Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure.", "source": "https://kubernetes.io/docs/contribute/participate/pr-wranglers/", "id": "k8s_00328_code_explain_8481"} {"prompt": "Complete this text code:\n\n# add English label\n/language en\n\n# add squash label to PR if more than one commit", "completion": "/label tide/merge-method-squash\n\n# retitle a PR via Prow (such as a work-in-progress [WIP] or better detail of PR)\n/retitle [WIP] <TITLE>", "language": "text", "type": "code-example", "context": "PR wranglers | Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure.", "source": "https://kubernetes.io/docs/contribute/participate/pr-wranglers/", "id": "k8s_00328_code_completion_8482"} {"prompt": "Explain this text code:\n\n# add English label\n/language en\n\n# add squash label to PR if more than one commit\n/label tide/merge-method-squash\n\n# retitle a PR via Prow (such as a work-in-progress [WIP] or better detail of PR)\n/retitle [WIP] <TITLE>", "completion": "This text code PR wranglers | Reviews and approvals are one tool to keep our PR queue short and current. 
Another tool is closure.", "source": "https://kubernetes.io/docs/contribute/participate/pr-wranglers/", "id": "k8s_00328_code_explain_8483"} {"prompt": "Complete this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "text", "type": "code-example", "context": "Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8484"} {"prompt": "Explain this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This text code Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "text", "type": "code-example_explanation", "context": "Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8485"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8486"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8487"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8488"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: 
KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "completion": "This yaml code Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8489"} {"prompt": "Complete this yaml code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "yaml", "type": "kubernetes-manifest", "context": "Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8490"} {"prompt": "Explain this yaml code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This yaml code Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8491"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8492"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8493"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Traces For Kubernetes System Components | If the 
samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8494"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "completion": "This yaml code Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8495"} {"prompt": "Complete this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "text", "type": "code-example", "context": "Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8496"} {"prompt": "Explain this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This text code Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "text", "type": "code-example_explanation", "context": "Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8497"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8498"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API 
(v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8499"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_completion_8500"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "completion": "This yaml code Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/system-traces/", "id": "k8s_00329_code_explain_8501"} {"prompt": "Complete this text code:\n\n# List all available plugins\n kubectl plugin list", "completion": " \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "language": "text", "type": "code-example", "context": "kubectl plugin | help for plugin", "source": 
"https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/", "id": "k8s_00330_code_completion_8502"} {"prompt": "Explain this text code:\n\n# List all available plugins\n kubectl plugin list\n \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "completion": "This text code kubectl plugin | help for plugin", "language": "text", "type": "code-example_explanation", "context": "kubectl plugin | help for plugin", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/", "id": "k8s_00330_code_explain_8503"} {"prompt": "Complete this text code:\n\n# List all available plugins\n kubectl plugin list", "completion": " \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "language": "text", "type": "code-example", "context": "kubectl plugin | help for plugin", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/", "id": "k8s_00330_code_completion_8504"} {"prompt": "Explain this text code:\n\n# List all available plugins\n kubectl plugin list\n \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "completion": "This text code kubectl plugin | help for plugin", "language": "text", "type": "code-example_explanation", "context": "kubectl plugin | help for plugin", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/", "id": "k8s_00330_code_explain_8505"} {"prompt": "Explain this text code:\n\ngit clone git@github.com:<github_username>/website\ncd website", "completion": "This text code Opening a pull request | Navigate to the new website directory. Set the kubernetes/website repository as the upstream remote:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Navigate to the new website directory. 
Set the kubernetes/website repository as the upstream remote:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8506"} {"prompt": "Explain this text code:\n\ncd website\n\ngit remote add upstream https://github.com/kubernetes/website.git", "completion": "This text code Opening a pull request | Confirm your origin and upstream repositories:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Confirm your origin and upstream repositories:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8507"} {"prompt": "Complete this text code:\n\norigin\tgit@github.com:<github_username>/website.git (fetch)\norigin\tgit@github.com:<github_username>/website.git (push)", "completion": "upstream\thttps://github.com/kubernetes/website.git (fetch)\nupstream\thttps://github.com/kubernetes/website.git (push)", "language": "text", "type": "code-example", "context": "Opening a pull request | Fetch commits from your fork's origin/main and kubernetes/website's upstream/main:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_completion_8508"} {"prompt": "Explain this text code:\n\norigin\tgit@github.com:<github_username>/website.git (fetch)\norigin\tgit@github.com:<github_username>/website.git (push)\nupstream\thttps://github.com/kubernetes/website.git (fetch)\nupstream\thttps://github.com/kubernetes/website.git (push)", "completion": "This text code Opening a pull request | Fetch commits from your fork's origin/main and kubernetes/website's upstream/main:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Fetch commits from your fork's origin/main and kubernetes/website's upstream/main:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8509"} {"prompt": "Complete this text code:\n\nOn branch 
<my_new_branch>\nYour branch is up to date with 'origin/<my_new_branch>'.\n\nChanges not staged for commit:\n(use \"git add <file>...\" to update what will be committed)", "completion": "(use \"git checkout -- <file>...\" to discard changes in working directory)\n\nmodified: content/en/docs/contribute/new-content/contributing-content.md\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")", "language": "text", "type": "code-example", "context": "Opening a pull request | Add the files listed under Changes not staged for commit to the commit:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_completion_8510"} {"prompt": "Explain this text code:\n\nOn branch <my_new_branch>\nYour branch is up to date with 'origin/<my_new_branch>'.\n\nChanges not staged for commit:\n(use \"git add <file>...\" to update what will be committed)\n(use \"git checkout -- <file>...\" to discard changes in working directory)\n\nmodified: content/en/docs/contribute/new-content/contributing-content.md\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")", "completion": "This text code Opening a pull request | Add the files listed under Changes not staged for commit to the commit:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Add the files listed under Changes not staged for commit to the commit:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8511"} {"prompt": "Explain this text code:\n\n# Run this in a terminal (if required)\nmake container-image", "completion": "This text code Opening a pull request | Fetch submodule dependencies in your local repository:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Fetch submodule dependencies in your local repository:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": 
"k8s_00331_code_explain_8512"} {"prompt": "Explain this text code:\n\ngit fetch origin\ngit rebase origin/<your-branch-name>", "completion": "This text code Opening a pull request | After rebasing, force-push new changes to your fork:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | After rebasing, force-push new changes to your fork:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8513"} {"prompt": "Explain this text code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This text code Opening a pull request | If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR.", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR.", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8514"} {"prompt": "Explain this text code:\n\ngit fetch origin\ngit rebase origin/<your-branch-name>", "completion": "This text code Opening a pull request | Then force-push the changes to your fork:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Then force-push the changes to your fork:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8515"} {"prompt": "Explain this text code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This text code Opening a pull request | Fetch changes from kubernetes/website's upstream/main and rebase your branch:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Fetch changes from kubernetes/website's upstream/main and rebase your 
branch:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8516"} {"prompt": "Explain this text code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This text code Opening a pull request | The pull request no longer shows any conflicts.", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | The pull request no longer shows any conflicts.", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8517"} {"prompt": "Complete this text code:\n\npick d875112ca Original commit\npick 4fa167b80 Address feedback 1\npick 7d54e15ee Address feedback 2\n", "completion": "# Rebase 3d18sf680..7d54e15ee onto 3d183f680 (3 commands)\n\n...\n\n# These lines can be re-ordered; they are executed from top to bottom.", "language": "text", "type": "code-example", "context": "Opening a pull request | The first section of the output lists the commits in the rebase. The second section lists the options for each commit. Changing the word pick changes the status of the commit once the rebase is comple", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_completion_8518"} {"prompt": "Explain this text code:\n\npick d875112ca Original commit\npick 4fa167b80 Address feedback 1\npick 7d54e15ee Address feedback 2\n\n# Rebase 3d18sf680..7d54e15ee onto 3d183f680 (3 commands)\n\n...\n\n# These lines can be re-ordered; they are executed from top to bottom.", "completion": "This text code Opening a pull request | The first section of the output lists the commits in the rebase. The second section lists the options for each commit. Changing the word pick changes the status of the commit once the rebase is comple", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | The first section of the output lists the commits in the rebase. 
The second section lists the options for each commit. Changing the word pick changes the status of the commit once the rebase is comple", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8519"} {"prompt": "Explain this text code:\n\npick d875112ca Original commit\npick 4fa167b80 Address feedback 1\npick 7d54e15ee Address feedback 2", "completion": "This text code Opening a pull request | To:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | To:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8520"} {"prompt": "Explain this text code:\n\npick d875112ca Original commit\nsquash 4fa167b80 Address feedback 1\nsquash 7d54e15ee Address feedback 2", "completion": "This text code Opening a pull request | This squashes commits 4fa167b80 Address", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | This squashes commits 4fa167b80 Address", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8521"} {"prompt": "Explain this shell code:\n\ngit clone git@github.com:<github_username>/website\ncd website", "completion": "This shell code Opening a pull request | Navigate to the new website directory. Set the kubernetes/website repository as the upstream remote:", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | Navigate to the new website directory. 
Set the kubernetes/website repository as the upstream remote:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8522"} {"prompt": "Explain this shell code:\n\ncd website\n\ngit remote add upstream https://github.com/kubernetes/website.git", "completion": "This shell code Opening a pull request | Confirm your origin and upstream repositories:", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | Confirm your origin and upstream repositories:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8523"} {"prompt": "Complete this none code:\n\norigin\tgit@github.com:<github_username>/website.git (fetch)\norigin\tgit@github.com:<github_username>/website.git (push)", "completion": "upstream\thttps://github.com/kubernetes/website.git (fetch)\nupstream\thttps://github.com/kubernetes/website.git (push)", "language": "none", "type": "code-example", "context": "Opening a pull request | Fetch commits from your fork's origin/main and kubernetes/website's upstream/main:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_completion_8524"} {"prompt": "Explain this none code:\n\norigin\tgit@github.com:<github_username>/website.git (fetch)\norigin\tgit@github.com:<github_username>/website.git (push)\nupstream\thttps://github.com/kubernetes/website.git (fetch)\nupstream\thttps://github.com/kubernetes/website.git (push)", "completion": "This none code Opening a pull request | Fetch commits from your fork's origin/main and kubernetes/website's upstream/main:", "language": "none", "type": "code-example_explanation", "context": "Opening a pull request | Fetch commits from your fork's origin/main and kubernetes/website's upstream/main:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8525"} {"prompt": "Complete this none code:\n\nOn branch 
<my_new_branch>\nYour branch is up to date with 'origin/<my_new_branch>'.\n\nChanges not staged for commit:\n(use \"git add <file>...\" to update what will be committed)", "completion": "(use \"git checkout -- <file>...\" to discard changes in working directory)\n\nmodified: content/en/docs/contribute/new-content/contributing-content.md\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")", "language": "none", "type": "code-example", "context": "Opening a pull request | Add the files listed under Changes not staged for commit to the commit:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_completion_8526"} {"prompt": "Explain this none code:\n\nOn branch <my_new_branch>\nYour branch is up to date with 'origin/<my_new_branch>'.\n\nChanges not staged for commit:\n(use \"git add <file>...\" to update what will be committed)\n(use \"git checkout -- <file>...\" to discard changes in working directory)\n\nmodified: content/en/docs/contribute/new-content/contributing-content.md\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")", "completion": "This none code Opening a pull request | Add the files listed under Changes not staged for commit to the commit:", "language": "none", "type": "code-example_explanation", "context": "Opening a pull request | Add the files listed under Changes not staged for commit to the commit:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8527"} {"prompt": "Explain this shell code:\n\n# Run this in a terminal (if required)\nmake container-image", "completion": "This shell code Opening a pull request | Fetch submodule dependencies in your local repository:", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | Fetch submodule dependencies in your local repository:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": 
"k8s_00331_code_explain_8528"} {"prompt": "Explain this shell code:\n\ngit fetch origin\ngit rebase origin/<your-branch-name>", "completion": "This shell code Opening a pull request | After rebasing, force-push new changes to your fork:", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | After rebasing, force-push new changes to your fork:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8529"} {"prompt": "Explain this shell code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This shell code Opening a pull request | If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR.", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | If another contributor commits changes to the same file in another PR, it can create a merge conflict. 
You must resolve all merge conflicts in your PR.", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8530"} {"prompt": "Explain this shell code:\n\ngit fetch origin\ngit rebase origin/<your-branch-name>", "completion": "This shell code Opening a pull request | Then force-push the changes to your fork:", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | Then force-push the changes to your fork:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8531"} {"prompt": "Explain this shell code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This shell code Opening a pull request | Fetch changes from kubernetes/website's upstream/main and rebase your branch:", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | Fetch changes from kubernetes/website's upstream/main and rebase your branch:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8532"} {"prompt": "Explain this shell code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This shell code Opening a pull request | The pull request no longer shows any conflicts.", "language": "shell", "type": "code-example_explanation", "context": "Opening a pull request | The pull request no longer shows any conflicts.", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8533"} {"prompt": "Complete this none code:\n\npick d875112ca Original commit\npick 4fa167b80 Address feedback 1\npick 7d54e15ee Address feedback 2\n", "completion": "# Rebase 3d18sf680..7d54e15ee onto 3d183f680 (3 commands)\n\n...\n\n# These lines can be re-ordered; they are executed from top to bottom.", "language": "none", "type": "code-example", "context": "Opening a pull request | The first section of the output 
lists the commits in the rebase. The second section lists the options for each commit. Changing the word pick changes the status of the commit once the rebase is comple", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_completion_8534"} {"prompt": "Explain this none code:\n\npick d875112ca Original commit\npick 4fa167b80 Address feedback 1\npick 7d54e15ee Address feedback 2\n\n# Rebase 3d18sf680..7d54e15ee onto 3d183f680 (3 commands)\n\n...\n\n# These lines can be re-ordered; they are executed from top to bottom.", "completion": "This none code Opening a pull request | The first section of the output lists the commits in the rebase. The second section lists the options for each commit. Changing the word pick changes the status of the commit once the rebase is comple", "language": "none", "type": "code-example_explanation", "context": "Opening a pull request | The first section of the output lists the commits in the rebase. The second section lists the options for each commit. 
Changing the word pick changes the status of the commit once the rebase is comple", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8535"} {"prompt": "Explain this none code:\n\npick d875112ca Original commit\npick 4fa167b80 Address feedback 1\npick 7d54e15ee Address feedback 2", "completion": "This none code Opening a pull request | To:", "language": "none", "type": "code-example_explanation", "context": "Opening a pull request | To:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8536"} {"prompt": "Explain this none code:\n\npick d875112ca Original commit\nsquash 4fa167b80 Address feedback 1\nsquash 7d54e15ee Address feedback 2", "completion": "This none code Opening a pull request | This squashes commits 4fa167b80 Address", "language": "none", "type": "code-example_explanation", "context": "Opening a pull request | This squashes commits 4fa167b80 Address", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8537"} {"prompt": "Explain this text code:\n\ngit clone git@github.com:<github_username>/website\ncd website", "completion": "This text code Opening a pull request | Navigate to the new website directory. Set the kubernetes/website repository as the upstream remote:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Navigate to the new website directory. 
Set the kubernetes/website repository as the upstream remote:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8538"} {"prompt": "Explain this text code:\n\ncd website\n\ngit remote add upstream https://github.com/kubernetes/website.git", "completion": "This text code Opening a pull request | Confirm your origin and upstream repositories:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Confirm your origin and upstream repositories:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8539"} {"prompt": "Explain this text code:\n\n# Run this in a terminal (if required)\nmake container-image", "completion": "This text code Opening a pull request | Fetch submodule dependencies in your local repository:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Fetch submodule dependencies in your local repository:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8540"} {"prompt": "Explain this text code:\n\ngit fetch origin\ngit rebase origin/<your-branch-name>", "completion": "This text code Opening a pull request | After rebasing, force-push new changes to your fork:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | After rebasing, force-push new changes to your fork:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8541"} {"prompt": "Explain this text code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This text code Opening a pull request | If another contributor commits changes to the same file in another PR, it can create a merge conflict. 
You must resolve all merge conflicts in your PR.", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR.", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8542"} {"prompt": "Explain this text code:\n\ngit fetch origin\ngit rebase origin/<your-branch-name>", "completion": "This text code Opening a pull request | Then force-push the changes to your fork:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Then force-push the changes to your fork:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8543"} {"prompt": "Explain this text code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This text code Opening a pull request | Fetch changes from kubernetes/website's upstream/main and rebase your branch:", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | Fetch changes from kubernetes/website's upstream/main and rebase your branch:", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8544"} {"prompt": "Explain this text code:\n\ngit push --force-with-lease origin <your-branch-name>", "completion": "This text code Opening a pull request | The pull request no longer shows any conflicts.", "language": "text", "type": "code-example_explanation", "context": "Opening a pull request | The pull request no longer shows any conflicts.", "source": "https://kubernetes.io/docs/contribute/new-content/open-a-pr/", "id": "k8s_00331_code_explain_8545"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n cluster: 
test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do", "completion": " if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8546"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n cluster: test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "completion": "This yaml code Expose Pod Information to Containers 
Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8547"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume.yaml", "completion": "This bash code Expose Pod Information to Containers Through Files | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8548"} {"prompt": "Complete this text code:\n\ncluster=\"test-cluster1\"\nrack=\"rack-22\"\nzone=\"us-est-coast\"", "completion": "\nbuild=\"two\"\nbuilder=\"john-doe\"", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8549"} {"prompt": "Explain this text code:\n\ncluster=\"test-cluster1\"\nrack=\"rack-22\"\nzone=\"us-est-coast\"\n\nbuild=\"two\"\nbuilder=\"john-doe\"", "completion": "This text code Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers 
Through Files | Get a shell into the container that is running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8550"} {"prompt": "Explain this bash code:\n\nkubectl exec -it kubernetes-downwardapi-volume-example -- sh", "completion": "This bash code Expose Pod Information to Containers Through Files | In your shell, view the labels file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | In your shell, view the labels file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8551"} {"prompt": "Explain this text code:\n\ncluster=\"test-cluster1\"\nrack=\"rack-22\"\nzone=\"us-est-coast\"", "completion": "This text code Expose Pod Information to Containers Through Files | Similarly, view the annotations file:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Similarly, view the annotations file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8552"} {"prompt": "Complete this text code:\n\ndrwxr-xr-x ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 annotations -> ..data/annotations\nlrwxrwxrwx ... Feb 6 21:47 labels -> ..data/labels", "completion": "\n/etc/..2982_06_02_21_47_53.299460680:\ntotal 8\n-rw-r--r-- ... Feb 6 21:47 annotations\n-rw-r--r-- ... 
Feb 6 21:47 labels", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Files | Using symbolic links enables dynamic atomic refresh of the metadata; updates are written to a new temporary directory, and the ..data symlink is updated atomically using rename(2).", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8553"} {"prompt": "Explain this text code:\n\ndrwxr-xr-x ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 annotations -> ..data/annotations\nlrwxrwxrwx ... Feb 6 21:47 labels -> ..data/labels\n\n/etc/..2982_06_02_21_47_53.299460680:\ntotal 8\n-rw-r--r-- ... Feb 6 21:47 annotations\n-rw-r--r-- ... Feb 6 21:47 labels", "completion": "This text code Expose Pod Information to Containers Through Files | Using symbolic links enables dynamic atomic refresh of the metadata; updates are written to a new temporary directory, and the ..data symlink is updated atomically using rename(2).", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Using symbolic links enables dynamic atomic refresh of the metadata; updates are written to a new temporary directory, and the ..data symlink is updated atomically using rename(2).", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8554"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e /etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e 
/etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e /etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8555"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e /etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e /etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e 
/etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "completion": "This yaml code Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8556"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume-resources.yaml", "completion": "This bash code Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8557"} {"prompt": "Explain this bash code:\n\nkubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh", "completion": "This bash code Expose Pod Information to Containers Through Files | In your shell, view the cpu_limit file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | In your shell, view the cpu_limit file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8558"} {"prompt": "Explain this text code:\n\n# Run this in a shell inside the container\ncat /etc/podinfo/cpu_limit", "completion": "This text code Expose Pod Information to Containers Through Files | You can use similar commands to view the cpu_request, mem_limit and mem_request files.", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | You can use similar commands to view the cpu_request, mem_limit and mem_request files.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8559"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n cluster: test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do", "completion": " if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: 
/etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8560"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n cluster: test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "completion": "This yaml code Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", 
"id": "k8s_00332_code_explain_8561"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume.yaml", "completion": "This shell code Expose Pod Information to Containers Through Files | Verify that the container in the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8562"} {"prompt": "Complete this text code:\n\ncluster=\"test-cluster1\"\nrack=\"rack-22\"\nzone=\"us-est-coast\"", "completion": "\nbuild=\"two\"\nbuilder=\"john-doe\"", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8563"} {"prompt": "Explain this text code:\n\ncluster=\"test-cluster1\"\nrack=\"rack-22\"\nzone=\"us-est-coast\"\n\nbuild=\"two\"\nbuilder=\"john-doe\"", "completion": "This text code Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8564"} {"prompt": "Explain this shell code:\n\nkubectl exec -it kubernetes-downwardapi-volume-example -- sh", "completion": "This shell code Expose Pod Information to Containers Through Files | In your shell, view the labels file:", "language": "shell", "type": 
"code-example_explanation", "context": "Expose Pod Information to Containers Through Files | In your shell, view the labels file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8565"} {"prompt": "Explain this shell code:\n\ncluster=\"test-cluster1\"\nrack=\"rack-22\"\nzone=\"us-est-coast\"", "completion": "This shell code Expose Pod Information to Containers Through Files | Similarly, view the annotations file:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Similarly, view the annotations file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8566"} {"prompt": "Complete this text code:\n\ndrwxr-xr-x ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 annotations -> ..data/annotations\nlrwxrwxrwx ... Feb 6 21:47 labels -> ..data/labels", "completion": "\n/etc/..2982_06_02_21_47_53.299460680:\ntotal 8\n-rw-r--r-- ... Feb 6 21:47 annotations\n-rw-r--r-- ... Feb 6 21:47 labels", "language": "text", "type": "code-example", "context": "Expose Pod Information to Containers Through Files | Using symbolic links enables dynamic atomic refresh of the metadata; updates are written to a new temporary directory, and the ..data symlink is updated atomically using rename(2).", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8567"} {"prompt": "Explain this text code:\n\ndrwxr-xr-x ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680\nlrwxrwxrwx ... Feb 6 21:47 annotations -> ..data/annotations\nlrwxrwxrwx ... 
Feb 6 21:47 labels -> ..data/labels\n\n/etc/..2982_06_02_21_47_53.299460680:\ntotal 8\n-rw-r--r-- ... Feb 6 21:47 annotations\n-rw-r--r-- ... Feb 6 21:47 labels", "completion": "This text code Expose Pod Information to Containers Through Files | Using symbolic links enables dynamic atomic refresh of the metadata; updates are written to a new temporary directory, and the ..data symlink is updated atomically using rename(2).", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Using symbolic links enables dynamic atomic refresh of the metadata; updates are written to a new temporary directory, and the ..data symlink is updated atomically using rename(2).", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8568"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e /etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e /etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e /etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n 
resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8569"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e /etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e /etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e /etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n 
containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "completion": "This yaml code Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8570"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume-resources.yaml", "completion": "This shell code Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8571"} {"prompt": "Explain this shell code:\n\nkubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh", "completion": "This shell code Expose Pod Information to Containers Through Files | In your shell, view the cpu_limit file:", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | In your shell, view the cpu_limit file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8572"} {"prompt": "Explain this shell code:\n\n# Run this in a shell inside the container\ncat /etc/podinfo/cpu_limit", 
"completion": "This shell code Expose Pod Information to Containers Through Files | You can use similar commands to view the cpu_request, mem_limit and mem_request files.", "language": "shell", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | You can use similar commands to view the cpu_request, mem_limit and mem_request files.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8573"} {"prompt": "Complete this yaml code:\n\npods/inject/dapi-volume.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n cluster: test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do", "completion": " if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8574"} {"prompt": "Explain this yaml code:\n\npods/inject/dapi-volume.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n 
cluster: test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "completion": "This yaml code Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8575"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n cluster: test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do", "completion": " if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n 
downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8576"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example\n labels:\n zone: us-est-coast\n cluster: test-cluster1\n rack: rack-22\n annotations:\n build: two\n builder: john-doe\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n if [[ -e /etc/podinfo/labels ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/labels; fi;\n if [[ -e /etc/podinfo/annotations ]]; then\n echo -en '\\n\\n'; cat /etc/podinfo/annotations; fi;\n sleep 5;\n done;\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"annotations\"\n fieldRef:\n fieldPath: metadata.annotations", "completion": "This yaml code Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI Volume, and the container mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8577"} 
{"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume.yaml", "completion": "This bash code Expose Pod Information to Containers Through Files | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8578"} {"prompt": "Explain this bash code:\n\nkubectl exec -it kubernetes-downwardapi-volume-example -- sh", "completion": "This bash code Expose Pod Information to Containers Through Files | In your shell, view the labels file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | In your shell, view the labels file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8579"} {"prompt": "Explain this text code:\n\ncluster=\"test-cluster1\"\nrack=\"rack-22\"\nzone=\"us-est-coast\"", "completion": "This text code Expose Pod Information to Containers Through Files | Similarly, view the annotations file:", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | Similarly, view the annotations file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8580"} {"prompt": "Complete this yaml code:\n\npods/inject/dapi-volume-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e 
/etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e /etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e /etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:", "completion": " memory: \"64Mi\"\n cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8581"} {"prompt": "Explain this yaml code:\n\npods/inject/dapi-volume-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e /etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e /etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat 
/etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e /etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "completion": "This yaml code Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8582"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e /etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e 
/etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e /etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "language": "yaml", "type": "pod-definition", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_completion_8583"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: kubernetes-downwardapi-volume-example-2\nspec:\n containers:\n - name: client-container\n image: registry.k8s.io/busybox:1.27.2\n command: [\"sh\", \"-c\"]\n args:\n - while true; do\n echo -en '\\n';\n if [[ -e /etc/podinfo/cpu_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_limit; fi;\n if [[ -e /etc/podinfo/cpu_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/cpu_request; fi;\n if [[ -e /etc/podinfo/mem_limit ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_limit; fi;\n if [[ -e 
/etc/podinfo/mem_request ]]; then\n echo -en '\\n'; cat /etc/podinfo/mem_request; fi;\n sleep 5;\n done;\n resources:\n requests:\n memory: \"32Mi\"\n cpu: \"125m\"\n limits:\n memory: \"64Mi\"\n cpu: \"250m\"\n volumeMounts:\n - name: podinfo\n mountPath: /etc/podinfo\n volumes:\n - name: podinfo\n downwardAPI:\n items:\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.cpu\n divisor: 1m\n - path: \"cpu_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.cpu\n divisor: 1m\n - path: \"mem_limit\"\n resourceFieldRef:\n containerName: client-container\n resource: limits.memory\n divisor: 1Mi\n - path: \"mem_request\"\n resourceFieldRef:\n containerName: client-container\n resource: requests.memory\n divisor: 1Mi", "completion": "This yaml code Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "language": "yaml", "type": "pod-definition_explanation", "context": "Expose Pod Information to Containers Through Files | In the manifest, you can see that the Pod has a downwardAPI volume, and that the single container in that Pod mounts the volume at /etc/podinfo.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8584"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume-resources.yaml", "completion": "This bash code Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | Get a shell into the container that is running in your Pod:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8585"} {"prompt": "Explain this bash code:\n\nkubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh", "completion": "This bash code Expose Pod Information to Containers Through Files | In your shell, view the cpu_limit file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Expose Pod Information to Containers Through Files | In your shell, view the cpu_limit file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8586"} {"prompt": "Explain this text code:\n\n# Run this in a shell inside the container\ncat /etc/podinfo/cpu_limit", "completion": "This text code Expose Pod Information to Containers Through Files | You can use similar commands to view the cpu_request, mem_limit and mem_request files.", "language": "text", "type": "code-example_explanation", "context": "Expose Pod Information to Containers Through Files | You can use similar commands to view the cpu_request, mem_limit and mem_request files.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/", "id": "k8s_00332_code_explain_8587"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. 
This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8588"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8589"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8590"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. 
Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8591"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8592"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snapshot", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snapshot", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8593"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8594"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8595"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8596"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8597"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8598"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. 
This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8599"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8600"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. 
Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8601"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8602"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snapshot", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snapshot", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8603"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8604"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8605"} {"prompt": "Explain this text code:\n\nsnapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"", "completion": "This text code Volume Snapshots | For pre-provisioned snapshots, spec.sourceVolumeMode needs to be populated by the cluster administrator.", "language": "text", "type": "code-example_explanation", "context": "Volume Snapshots | For pre-provisioned snapshots, spec.sourceVolumeMode needs to be populated by the cluster administrator.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8606"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8607"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: 
new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8608"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8609"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. 
This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8610"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8611"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. 
Here is an example:", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8612"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8613"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8614"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8615"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8616"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n    snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_completion_8617"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n    snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/storage/volume-snapshots/", "id": "k8s_00333_code_explain_8618"} {"prompt": "Explain this text code:\n\nip route show # Look for a line starting with \"default via\"", "completion": "This text code Creating a cluster with kubeadm | Kubernetes components do not accept custom network interface as an option, therefore a custom IP address must be passed as a flag to all components instances that need such a custom configuration.", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | Kubernetes components do not accept custom network interface as an option, therefore a custom IP address must be passed as a flag to all components instances that need such a custom configuration.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8619"} {"prompt": "Complete this text code:\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n", "completion": "You should now deploy a Pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n /docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>", "language": "text", "type": "code-example", "context": "Creating a cluster with kubeadm | To make kubectl work for your non-root user, run these commands, which are also part of 
the kubeadm init output:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_completion_8620"} {"prompt": "Explain this text code:\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a Pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n /docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This text code Creating a cluster with kubeadm | To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8621"} {"prompt": "Explain this text code:\n\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "completion": "This text code Creating a cluster with kubeadm | Alternatively, if you are the root user, you can run:", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | Alternatively, if you are the root user, you can run:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": 
"k8s_00335_code_explain_8622"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes --all node-role.kubernetes.io/control-plane-", "completion": "This bash code Creating a cluster with kubeadm | The output will look something like:", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | The output will look something like:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8623"} {"prompt": "Explain this bash code:\n\nkubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-", "completion": "This bash code Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8624"} {"prompt": "Explain this text code:\n\nscp root@<control-plane-host>:/etc/kubernetes/admin.conf .\nkubectl --kubeconfig ./admin.conf get nodes", "completion": "This text code Creating a cluster with kubeadm | The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | The example above assumes SSH access is enabled for root. 
If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8625"} {"prompt": "Explain this text code:\n\nscp root@<control-plane-host>:/etc/kubernetes/admin.conf .\nkubectl --kubeconfig ./admin.conf proxy", "completion": "This text code Creating a cluster with kubeadm | You can now access the API Server locally at http://localhost:8001/api/v1", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | You can now access the API Server locally at http://localhost:8001/api/v1", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8626"} {"prompt": "Explain this bash code:\n\nkubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets", "completion": "This bash code Creating a cluster with kubeadm | Before removing the node, reset the state installed by kubeadm:", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | Before removing the node, reset the state installed by kubeadm:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8627"} {"prompt": "Explain this text code:\n\niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X", "completion": "This text code Creating a cluster with kubeadm | If you want to reset the IPVS tables, you must run the following command:", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | If you want to reset the IPVS tables, you must run the following command:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": 
"k8s_00335_code_explain_8628"} {"prompt": "Explain this shell code:\n\nip route show # Look for a line starting with \"default via\"", "completion": "This shell code Creating a cluster with kubeadm | Kubernetes components do not accept custom network interface as an option, therefore a custom IP address must be passed as a flag to all components instances that need such a custom configuration.", "language": "shell", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | Kubernetes components do not accept custom network interface as an option, therefore a custom IP address must be passed as a flag to all components instances that need such a custom configuration.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8629"} {"prompt": "Complete this none code:\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n", "completion": "You should now deploy a Pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n /docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>", "language": "none", "type": "code-example", "context": "Creating a cluster with kubeadm | To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_completion_8630"} {"prompt": "Explain this none code:\n\nYour Kubernetes control-plane has 
initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a Pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n /docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This none code Creating a cluster with kubeadm | To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:", "language": "none", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8631"} {"prompt": "Explain this bash code:\n\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "completion": "This bash code Creating a cluster with kubeadm | Alternatively, if you are the root user, you can run:", "language": "bash", "type": "shell-command_explanation", "context": "Creating a cluster with kubeadm | Alternatively, if you are the root user, you can run:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8632"} {"prompt": "Explain this text code:\n\nSubject: O = kubeadm:cluster-admins, CN = kubernetes-admin", "completion": "This text code Creating a cluster with kubeadm | kubeadm init generates another kubeconfig file super-admin.conf 
that contains a certificate with Subject: O = system:masters, CN = kubernetes-super-admin. system:masters is a break-glass, super user g", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | kubeadm init generates another kubeconfig file super-admin.conf that contains a certificate with Subject: O = system:masters, CN = kubernetes-super-admin. system:masters is a break-glass, super user g", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8633"} {"prompt": "Explain this text code:\n\nSubject: O = system:masters, CN = kubernetes-super-admin", "completion": "This text code Creating a cluster with kubeadm | See Generating kubeconfig files for additional users on how to use kubeadm kubeconfig user to generate kubeconfig files for additional users.", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | See Generating kubeconfig files for additional users on how to use kubeadm kubeconfig user to generate kubeconfig files for additional users.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8634"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes --all node-role.kubernetes.io/control-plane-", "completion": "This bash code Creating a cluster with kubeadm | The output will look something like:", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | The output will look something like:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8635"} {"prompt": "Explain this text code:\n\nnode.kubernetes.io/exclude-from-external-load-balancers", "completion": "This text code Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for 
steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8636"} {"prompt": "Explain this bash code:\n\nkubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-", "completion": "This bash code Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8637"} {"prompt": "Explain this bash code:\n\nscp root@<control-plane-host>:/etc/kubernetes/admin.conf .\nkubectl --kubeconfig ./admin.conf get nodes", "completion": "This bash code Creating a cluster with kubeadm | The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.", "language": "bash", "type": "shell-command_explanation", "context": "Creating a cluster with kubeadm | The example above assumes SSH access is enabled for root. 
If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8638"} {"prompt": "Explain this bash code:\n\nscp root@<control-plane-host>:/etc/kubernetes/admin.conf .\nkubectl --kubeconfig ./admin.conf proxy", "completion": "This bash code Creating a cluster with kubeadm | You can now access the API Server locally at http://localhost:8001/api/v1", "language": "bash", "type": "shell-command_explanation", "context": "Creating a cluster with kubeadm | You can now access the API Server locally at http://localhost:8001/api/v1", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8639"} {"prompt": "Explain this bash code:\n\nkubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets", "completion": "This bash code Creating a cluster with kubeadm | Before removing the node, reset the state installed by kubeadm:", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | Before removing the node, reset the state installed by kubeadm:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8640"} {"prompt": "Explain this bash code:\n\niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X", "completion": "This bash code Creating a cluster with kubeadm | If you want to reset the IPVS tables, you must run the following command:", "language": "bash", "type": "shell-command_explanation", "context": "Creating a cluster with kubeadm | If you want to reset the IPVS tables, you must run the following command:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": 
"k8s_00335_code_explain_8641"} {"prompt": "Explain this text code:\n\nip route show # Look for a line starting with \"default via\"", "completion": "This text code Creating a cluster with kubeadm | Kubernetes components do not accept custom network interface as an option, therefore a custom IP address must be passed as a flag to all components instances that need such a custom configuration.", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | Kubernetes components do not accept custom network interface as an option, therefore a custom IP address must be passed as a flag to all components instances that need such a custom configuration.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8642"} {"prompt": "Explain this text code:\n\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "completion": "This text code Creating a cluster with kubeadm | Alternatively, if you are the root user, you can run:", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | Alternatively, if you are the root user, you can run:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8643"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes --all node-role.kubernetes.io/control-plane-", "completion": "This bash code Creating a cluster with kubeadm | The output will look something like:", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | The output will look something like:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8644"} {"prompt": "Explain this bash code:\n\nkubectl label nodes --all 
node.kubernetes.io/exclude-from-external-load-balancers-", "completion": "This bash code Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | See Creating Highly Available Clusters with kubeadm for steps on creating a high availability kubeadm cluster by adding more control plane nodes.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8645"} {"prompt": "Explain this text code:\n\nscp root@<control-plane-host>:/etc/kubernetes/admin.conf .\nkubectl --kubeconfig ./admin.conf get nodes", "completion": "This text code Creating a cluster with kubeadm | The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | The example above assumes SSH access is enabled for root. 
If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8646"} {"prompt": "Explain this text code:\n\nscp root@<control-plane-host>:/etc/kubernetes/admin.conf .\nkubectl --kubeconfig ./admin.conf proxy", "completion": "This text code Creating a cluster with kubeadm | You can now access the API Server locally at http://localhost:8001/api/v1", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | You can now access the API Server locally at http://localhost:8001/api/v1", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8647"} {"prompt": "Explain this bash code:\n\nkubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets", "completion": "This bash code Creating a cluster with kubeadm | Before removing the node, reset the state installed by kubeadm:", "language": "bash", "type": "kubectl-command_explanation", "context": "Creating a cluster with kubeadm | Before removing the node, reset the state installed by kubeadm:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": "k8s_00335_code_explain_8648"} {"prompt": "Explain this text code:\n\niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X", "completion": "This text code Creating a cluster with kubeadm | If you want to reset the IPVS tables, you must run the following command:", "language": "text", "type": "code-example_explanation", "context": "Creating a cluster with kubeadm | If you want to reset the IPVS tables, you must run the following command:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/", "id": 
"k8s_00335_code_explain_8649"} {"prompt": "Complete this text code:\n\n# Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation\n # for selecting the container to be attached or the first container in the pod will be chosen\n kubectl attach mypod\n \n # Get output from ruby-container from pod mypod\n kubectl attach mypod -c ruby-container", "completion": " \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl attach mypod -c ruby-container -i -t\n \n # Get output from the first pod of a replica set named nginx\n kubectl attach rs/nginx", "language": "text", "type": "code-example", "context": "kubectl attach | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_attach/", "id": "k8s_00336_code_completion_8650"} {"prompt": "Explain this text code:\n\n# Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation\n # for selecting the container to be attached or the first container in the pod will be chosen\n kubectl attach mypod\n \n # Get output from ruby-container from pod mypod\n kubectl attach mypod -c ruby-container\n \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl attach mypod -c ruby-container -i -t\n \n # Get output from the first pod of a replica set named nginx\n kubectl attach rs/nginx", "completion": "This text code kubectl attach | Container name. 
If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "text", "type": "code-example_explanation", "context": "kubectl attach | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_attach/", "id": "k8s_00336_code_explain_8651"} {"prompt": "Complete this text code:\n\n# Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation\n # for selecting the container to be attached or the first container in the pod will be chosen\n kubectl attach mypod\n \n # Get output from ruby-container from pod mypod\n kubectl attach mypod -c ruby-container", "completion": " \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl attach mypod -c ruby-container -i -t\n \n # Get output from the first pod of a replica set named nginx\n kubectl attach rs/nginx", "language": "text", "type": "code-example", "context": "kubectl attach | Container name. 
If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_attach/", "id": "k8s_00336_code_completion_8652"} {"prompt": "Explain this text code:\n\n# Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation\n # for selecting the container to be attached or the first container in the pod will be chosen\n kubectl attach mypod\n \n # Get output from ruby-container from pod mypod\n kubectl attach mypod -c ruby-container\n \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl attach mypod -c ruby-container -i -t\n \n # Get output from the first pod of a replica set named nginx\n kubectl attach rs/nginx", "completion": "This text code kubectl attach | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "text", "type": "code-example_explanation", "context": "kubectl attach | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_attach/", "id": "k8s_00336_code_explain_8653"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main", "completion": " image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! 
I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8654"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8655"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-failjob.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | After around 30 seconds the entire Job should be terminated. Inspect the status of the Job by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | After around 30 seconds the entire Job should be terminated. 
Inspect the status of the Job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8656"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-failjob -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, the following conditions display:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, the following conditions display:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8657"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:", "completion": " - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8658"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! 
I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8659"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-ignore.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | Run this command to check the nodeName the Pod is scheduled to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Run this command to check the nodeName the Pod is scheduled to:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8660"} {"prompt": "Explain this text code:\n\nnodeName=$(kubectl get pods -l job-name=job-pod-failure-policy-ignore -o jsonpath='{.items[0].spec.nodeName}')", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Drain the node to evict the Pod before it completes (within 90s):", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Drain the node to evict the Pod before it completes (within 90s):", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8661"} {"prompt": "Explain this bash code:\n\nkubectl drain nodes/$nodeName --ignore-daemonsets --grace-period=0", "completion": "This bash code Handling 
retriable and non-retriable pod failures with Pod failure policy | Inspect the .status.failed to check the counter for the Job is not incremented:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Inspect the .status.failed to check the counter for the Job is not incremented:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8662"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-ignore -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | Uncordon the node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Uncordon the node:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8663"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:", "completion": " restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8664"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n 
backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8665"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-config-issue.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | Note that the image is misconfigured, as it does not exist.", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the image is misconfigured, as it does not exist.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8666"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l job-name=job-pod-failure-policy-config-issue -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8667"} {"prompt": "Complete this text code:\n\ncontainerStatuses:\n- image: non-existing-repo/non-existing-image:example\n ...\n state:", "completion": " waiting:\n message: Back-off pulling image \"non-existing-repo/non-existing-image:example\"\n reason: ImagePullBackOff\n ...\nphase: Pending", "language": "text",
"type": "code-example", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. However, in this case, the im", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8668"} {"prompt": "Explain this text code:\n\ncontainerStatuses:\n- image: non-existing-repo/non-existing-image:example\n ...\n state:\n waiting:\n message: Back-off pulling image \"non-existing-repo/non-existing-image:example\"\n reason: ImagePullBackOff\n ...\nphase: Pending", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. However, in this case, the im", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. 
However, in this case, the im", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8669"} {"prompt": "Complete this text code:\n\ncat <<EOF > patch.yaml\nstatus:\n conditions:\n - type: ConfigIssue", "completion": " status: \"True\"\n reason: \"NonExistingImage\"\n lastTransitionTime: \"$(date -u +\"%Y-%m-%dT%H:%M:%SZ\")\"\nEOF", "language": "text", "type": "code-example", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8670"} {"prompt": "Explain this text code:\n\ncat <<EOF > patch.yaml\nstatus:\n conditions:\n - type: ConfigIssue\n status: \"True\"\n reason: \"NonExistingImage\"\n lastTransitionTime: \"$(date -u +\"%Y-%m-%dT%H:%M:%SZ\")\"\nEOF", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8671"} {"prompt": "Explain this text code:\n\npodName=$(kubectl get pods -l job-name=job-pod-failure-policy-config-issue -o jsonpath='{.items[0].metadata.name}')", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Then, apply the patch on one of the pods by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Then, apply the patch on one of the pods by running the following command:", "source": 
"https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8672"} {"prompt": "Explain this bash code:\n\nkubectl patch pod $podName --subresource=status --patch-file=patch.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | If applied successfully, you will get a notification like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | If applied successfully, you will get a notification like this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8673"} {"prompt": "Explain this text code:\n\npod/job-pod-failure-policy-config-issue-k6pvp patched", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Delete the pod to transition it to Failed phase, by running the command:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Delete the pod to transition it to Failed phase, by running the command:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8674"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-config-issue -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see a job Failed condition with the field reason equal PodFailurePolicy. Additionally, the message field contains more detailed information about the Job termination, such as: Pod", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see a job Failed condition with the field reason equal PodFailurePolicy. 
Additionally, the message field contains more detailed information about the Job termination, such as: Pod", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8675"} {"prompt": "Explain this bash code:\n\nkubectl delete jobs/job-pod-failure-policy-config-issue", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8676"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results\n # in failing the index without retry.", "completion": " # - succeeds Pods with any other index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8677"} {"prompt": "Explain this yaml 
code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results\n # in failing the index without retry.\n # - succeeds Pods with any other index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8678"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-backoff-limit-per-index-failindex.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | After around 15 seconds, inspect the status of the Pods for the Job. You can do that by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | After around 15 seconds, inspect the status of the Pods for the Job. 
You can do that by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8679"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l job-name=job-backoff-limit-per-index-failindex -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8680"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\njob-backoff-limit-per-index-failindex-0-4g4cm 0/1 Error 0 4s\njob-backoff-limit-per-index-failindex-0-fkdzq 0/1 Error 0 15s", "completion": "job-backoff-limit-per-index-failindex-1-2bgdj 0/1 Error 0 15s\njob-backoff-limit-per-index-failindex-2-vs6lt 0/1 Completed 0 11s\njob-backoff-limit-per-index-failindex-3-s7s47 0/1 Completed 0 6s", "language": "text", "type": "code-example", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the output shows the following:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8681"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\njob-backoff-limit-per-index-failindex-0-4g4cm 0/1 Error 0 4s\njob-backoff-limit-per-index-failindex-0-fkdzq 0/1 Error 0 15s\njob-backoff-limit-per-index-failindex-1-2bgdj 0/1 Error 0 15s\njob-backoff-limit-per-index-failindex-2-vs6lt 0/1 Completed 0 11s\njob-backoff-limit-per-index-failindex-3-s7s47 0/1 Completed 0 6s", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Note that the output shows the following:", "language": "text", "type": "code-example_explanation", "context": 
"Handling retriable and non-retriable pod failures with Pod failure policy | Note that the output shows the following:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8682"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-backoff-limit-per-index-failindex -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see that the failedIndexes field shows \"0,1\", because both indexes failed. Because the index 1 was not retried the number of failed Pods, indicated by the status field \"failed\" equa", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see that the failedIndexes field shows \"0,1\", because both indexes failed. Because the index 1 was not retried the number of failed Pods, indicated by the status field \"failed\" equa", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8683"} {"prompt": "Explain this bash code:\n\nkubectl delete jobs/job-backoff-limit-per-index-failindex", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8684"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main", "completion": " image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - 
echo \"Hello world! I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8685"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8686"} {"prompt": "Explain this sh code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-failjob.yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | After around 30 seconds the entire Job should be terminated. Inspect the status of the Job by running:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | After around 30 seconds the entire Job should be terminated. 
Inspect the status of the Job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8687"} {"prompt": "Explain this sh code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-failjob -o yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, the following conditions display:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, the following conditions display:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8688"} {"prompt": "Explain this text code:\n\nContainer main for pod default/job-pod-failure-policy-failjob-8ckj8 failed with exit code 42 matching FailJob rule at index 0", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | For comparison, if the Pod failure policy was disabled it would take 6 retries of the Pod, taking at least 2 minutes.", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | For comparison, if the Pod failure policy was disabled it would take 6 retries of the Pod, taking at least 2 minutes.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8689"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:", "completion": " - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! 
I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8690"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8691"} {"prompt": "Explain this sh code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-ignore.yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | Run this command to check the nodeName the Pod is scheduled to:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Run this command to check the nodeName the Pod is scheduled to:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8692"} {"prompt": "Explain this sh 
code:\n\nnodeName=$(kubectl get pods -l job-name=job-pod-failure-policy-ignore -o jsonpath='{.items[0].spec.nodeName}')", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | Drain the node to evict the Pod before it completes (within 90s):", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Drain the node to evict the Pod before it completes (within 90s):", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8693"} {"prompt": "Explain this sh code:\n\nkubectl drain nodes/$nodeName --ignore-daemonsets --grace-period=0", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | Inspect the .status.failed to check the counter for the Job is not incremented:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Inspect the .status.failed to check the counter for the Job is not incremented:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8694"} {"prompt": "Explain this sh code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-ignore -o yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | Uncordon the node:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Uncordon the node:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8695"} {"prompt": "Explain this text code:\n\n/controllers/job-pod-failure-policy-config-issue.yaml", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "text", "type": 
"code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8696"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:", "completion": " restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8697"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8698"} {"prompt": "Explain this sh code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-config-issue.yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | Note 
that the image is misconfigured, as it does not exist.", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the image is misconfigured, as it does not exist.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8699"} {"prompt": "Explain this sh code:\n\nkubectl get pods -l job-name=job-pod-failure-policy-config-issue -o yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8700"} {"prompt": "Complete this yaml code:\n\ncontainerStatuses:\n- image: non-existing-repo/non-existing-image:example\n ...\n state:", "completion": " waiting:\n message: Back-off pulling image \"non-existing-repo/non-existing-image:example\"\n reason: ImagePullBackOff\n ...\nphase: Pending", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. 
However, in this case, the im", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8701"} {"prompt": "Explain this yaml code:\n\ncontainerStatuses:\n- image: non-existing-repo/non-existing-image:example\n ...\n state:\n waiting:\n message: Back-off pulling image \"non-existing-repo/non-existing-image:example\"\n reason: ImagePullBackOff\n ...\nphase: Pending", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. However, in this case, the im", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. 
However, in this case, the im", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8702"} {"prompt": "Complete this sh code:\n\ncat <<EOF > patch.yaml\nstatus:\n conditions:\n - type: ConfigIssue", "completion": " status: \"True\"\n reason: \"NonExistingImage\"\n lastTransitionTime: \"$(date -u +\"%Y-%m-%dT%H:%M:%SZ\")\"\nEOF", "language": "sh", "type": "code-example", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8703"} {"prompt": "Explain this sh code:\n\ncat <<EOF > patch.yaml\nstatus:\n conditions:\n - type: ConfigIssue\n status: \"True\"\n reason: \"NonExistingImage\"\n lastTransitionTime: \"$(date -u +\"%Y-%m-%dT%H:%M:%SZ\")\"\nEOF", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8704"} {"prompt": "Explain this text code:\n\npodName=$(kubectl get pods -l job-name=job-pod-failure-policy-config-issue -o jsonpath='{.items[0].metadata.name}')", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Then, apply the patch on one of the pods by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Then, apply the patch on one of the pods by running the following command:", "source": 
"https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8705"} {"prompt": "Explain this sh code:\n\nkubectl patch pod $podName --subresource=status --patch-file=patch.yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | If applied successfully, you will get a notification like this:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | If applied successfully, you will get a notification like this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8706"} {"prompt": "Explain this sh code:\n\npod/job-pod-failure-policy-config-issue-k6pvp patched", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | Delete the pod to transition it to Failed phase, by running the command:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Delete the pod to transition it to Failed phase, by running the command:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8707"} {"prompt": "Explain this sh code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-config-issue -o yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see a job Failed condition with the field reason equal PodFailurePolicy. Additionally, the message field contains a more detailed information about the Job termination, such as: Pod", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see a job Failed condition with the field reason equal PodFailurePolicy. 
Additionally, the message field contains a more detailed information about the Job termination, such as: Pod", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8708"} {"prompt": "Explain this text code:\n\nPod default/job-pod-failure-policy-config-issue-k6pvp has condition ConfigIssue matching FailJob rule at index 0", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Delete the Job you created:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Delete the Job you created:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8709"} {"prompt": "Explain this sh code:\n\nkubectl delete jobs/job-pod-failure-policy-config-issue", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8710"} {"prompt": "Explain this text code:\n\n/controllers/job-backoff-limit-per-index-failindex.yaml", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8711"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 
2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results\n # in failing the index without retry.", "completion": " # - succeeds Pods with any other index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8712"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results\n # in failing the index without retry.\n # - succeeds Pods with any other index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable 
pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8713"} {"prompt": "Explain this sh code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-backoff-limit-per-index-failindex.yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | After around 15 seconds, inspect the status of the Pods for the Job. You can do that by running:", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | After around 15 seconds, inspect the status of the Pods for the Job. You can do that by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8714"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l job-name=job-backoff-limit-per-index-failindex -o yaml", "completion": "This shell code Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8715"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE\njob-backoff-limit-per-index-failindex-0-4g4cm 0/1 Error 0 4s\njob-backoff-limit-per-index-failindex-0-fkdzq 0/1 Error 0 15s", "completion": "job-backoff-limit-per-index-failindex-1-2bgdj 0/1 Error 0 15s\njob-backoff-limit-per-index-failindex-2-vs6lt 0/1 Completed 0 11s\njob-backoff-limit-per-index-failindex-3-s7s47 0/1 Completed 0 6s", 
"language": "none", "type": "code-example", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the output shows the following:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8716"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\njob-backoff-limit-per-index-failindex-0-4g4cm 0/1 Error 0 4s\njob-backoff-limit-per-index-failindex-0-fkdzq 0/1 Error 0 15s\njob-backoff-limit-per-index-failindex-1-2bgdj 0/1 Error 0 15s\njob-backoff-limit-per-index-failindex-2-vs6lt 0/1 Completed 0 11s\njob-backoff-limit-per-index-failindex-3-s7s47 0/1 Completed 0 6s", "completion": "This none code Handling retriable and non-retriable pod failures with Pod failure policy | Note that the output shows the following:", "language": "none", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the output shows the following:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8717"} {"prompt": "Explain this sh code:\n\nkubectl get jobs -l job-name=job-backoff-limit-per-index-failindex -o yaml", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see that the failedIndexes field shows \"0,1\", because both indexes failed. Because the index 1 was not retried the number of failed Pods, indicated by the status field \"failed\" equa", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see that the failedIndexes field shows \"0,1\", because both indexes failed. 
Because the index 1 was not retried the number of failed Pods, indicated by the status field \"failed\" equa", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8718"} {"prompt": "Explain this sh code:\n\nkubectl delete jobs/job-backoff-limit-per-index-failindex", "completion": "This sh code Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "language": "sh", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8719"} {"prompt": "Complete this yaml code:\n\n/controllers/job-pod-failure-policy-failjob.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main", "completion": " image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! 
I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8720"} {"prompt": "Explain this yaml code:\n\n/controllers/job-pod-failure-policy-failjob.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8721"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main", "completion": " image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! 
I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8722"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-failjob\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 42 to simulate a software bug.\" && sleep 30 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8723"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-failjob.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | After around 30 seconds the entire Job should be terminated. Inspect the status of the Job by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | After around 30 seconds the entire Job should be terminated. 
Inspect the status of the Job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8724"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-failjob -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, the following conditions display:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, the following conditions display:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8725"} {"prompt": "Complete this yaml code:\n\n/controllers/job-pod-failure-policy-ignore.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:", "completion": " - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8726"} {"prompt": "Explain this yaml code:\n\n/controllers/job-pod-failure-policy-ignore.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! 
I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8727"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:", "completion": " - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8728"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-ignore\nspec:\n completions: 4\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"]\n args:\n - -c\n - echo \"Hello world! 
I'm going to exit with 0 (success).\" && sleep 90 && exit 0\n backoffLimit: 0\n podFailurePolicy:\n rules:\n - action: Ignore\n onPodConditions:\n - type: DisruptionTarget", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8729"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-ignore.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | Run this command to check the nodeName the Pod is scheduled to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Run this command to check the nodeName the Pod is scheduled to:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8730"} {"prompt": "Explain this text code:\n\nnodeName=$(kubectl get pods -l job-name=job-pod-failure-policy-ignore -o jsonpath='{.items[0].spec.nodeName}')", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Drain the node to evict the Pod before it completes (within 90s):", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Drain the node to evict the Pod before it completes (within 90s):", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8731"} {"prompt": "Explain this bash code:\n\nkubectl drain nodes/$nodeName --ignore-daemonsets --grace-period=0", "completion": "This bash code Handling 
retriable and non-retriable pod failures with Pod failure policy | Inspect the .status.failed to check the counter for the Job is not incremented:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Inspect the .status.failed to check the counter for the Job is not incremented:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8732"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-ignore -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | Uncordon the node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Uncordon the node:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8733"} {"prompt": "Complete this yaml code:\n\n/controllers/job-pod-failure-policy-config-issue.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:", "completion": " restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8734"} {"prompt": "Explain this yaml code:\n\n/controllers/job-pod-failure-policy-config-issue.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n 
restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8735"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:", "completion": " restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8736"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-config-issue\nspec:\n completions: 8\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: \"non-existing-repo/non-existing-image:example\"\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onPodConditions:\n - type: ConfigIssue", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure 
policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8737"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/controllers/job-pod-failure-policy-config-issue.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | Note that, the image is misconfigured, as it does not exist.", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that, the image is misconfigured, as it does not exist.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8738"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l job-name=job-pod-failure-policy-config-issue -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8739"} {"prompt": "Complete this text code:\n\ncontainerStatuses:\n- image: non-existing-repo/non-existing-image:example\n ...\n state:", "completion": " waiting:\n message: Back-off pulling image \"non-existing-repo/non-existing-image:example\"\n reason: ImagePullBackOff\n ...\nphase: Pending", "language": "text", "type": "code-example", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. 
However, in this case, the im", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8740"} {"prompt": "Explain this text code:\n\ncontainerStatuses:\n- image: non-existing-repo/non-existing-image:example\n ...\n state:\n waiting:\n message: Back-off pulling image \"non-existing-repo/non-existing-image:example\"\n reason: ImagePullBackOff\n ...\nphase: Pending", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. However, in this case, the im", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Note that the pod remains in the Pending phase as it fails to pull the misconfigured image. This, in principle, could be a transient issue and the image could get pulled. 
However, in this case, the im", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8741"} {"prompt": "Complete this text code:\n\ncat <<EOF > patch.yaml\nstatus:\n conditions:\n - type: ConfigIssue", "completion": " status: \"True\"\n reason: \"NonExistingImage\"\n lastTransitionTime: \"$(date -u +\"%Y-%m-%dT%H:%M:%SZ\")\"\nEOF", "language": "text", "type": "code-example", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8742"} {"prompt": "Explain this text code:\n\ncat <<EOF > patch.yaml\nstatus:\n conditions:\n - type: ConfigIssue\n status: \"True\"\n reason: \"NonExistingImage\"\n lastTransitionTime: \"$(date -u +\"%Y-%m-%dT%H:%M:%SZ\")\"\nEOF", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Second, select one of the pods created by the job by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8743"} {"prompt": "Explain this bash code:\n\nkubectl patch pod $podName --subresource=status --patch-file=patch.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | If applied successfully, you will get a notification like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | If applied successfully, you will get a notification like this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": 
"k8s_00338_code_explain_8744"} {"prompt": "Explain this text code:\n\npod/job-pod-failure-policy-config-issue-k6pvp patched", "completion": "This text code Handling retriable and non-retriable pod failures with Pod failure policy | Delete the pod to transition it to Failed phase, by running the command:", "language": "text", "type": "code-example_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Delete the pod to transition it to Failed phase, by running the command:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8745"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-pod-failure-policy-config-issue -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see a job Failed condition with the field reason equal PodFailurePolicy. Additionally, the message field contains a more detailed information about the Job termination, such as: Pod", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see a job Failed condition with the field reason equal PodFailurePolicy. 
Additionally, the message field contains a more detailed information about the Job termination, such as: Pod", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8746"} {"prompt": "Explain this bash code:\n\nkubectl delete jobs/job-pod-failure-policy-config-issue", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8747"} {"prompt": "Complete this yaml code:\n\n/controllers/job-backoff-limit-per-index-failindex.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results", "completion": " # in failing the index without retry.\n # - succeeds Pods with any other index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": 
"k8s_00338_code_completion_8748"} {"prompt": "Explain this yaml code:\n\n/controllers/job-backoff-limit-per-index-failindex.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results\n # in failing the index without retry.\n # - succeeds Pods with any other index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8749"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results\n # in failing the index without retry.", "completion": " # - succeeds Pods with any other 
index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "language": "yaml", "type": "kubernetes-manifest", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_completion_8750"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-failindex\nspec:\n completions: 4\n parallelism: 2\n completionMode: Indexed\n backoffLimitPerIndex: 1\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/python:3\n command:\n # The script:\n # - fails the Pod with index 0 with exit code 1, which results in one retry;\n # - fails the Pod with index 1 with exit code 42 which results\n # in failing the index without retry.\n # - succeeds Pods with any other index.\n - python3\n - -c\n - |\n import os, sys\n index = int(os.environ.get(\"JOB_COMPLETION_INDEX\"))\n if index == 0:\n sys.exit(1)\n elif index == 1:\n sys.exit(42)\n else:\n sys.exit(0) \n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailIndex\n onExitCodes:\n containerName: main\n operator: In\n values: [42]", "completion": "This yaml code Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | Apply the manifest:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8751"} {"prompt": "Explain this bash code:\n\nkubectl create -f 
https://k8s.io/examples/controllers/job-backoff-limit-per-index-failindex.yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | After around 15 seconds, inspect the status of the Pods for the Job. You can do that by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | After around 15 seconds, inspect the status of the Pods for the Job. You can do that by running:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8752"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l job-name=job-backoff-limit-per-index-failindex -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | You will see output similar to this:", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8753"} {"prompt": "Explain this bash code:\n\nkubectl get jobs -l job-name=job-backoff-limit-per-index-failindex -o yaml", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see that the failedIndexes field shows \"0,1\", because both indexes failed. Because the index 1 was not retried the number of failed Pods, indicated by the status field \"failed\" equa", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | In the Job status, see that the failedIndexes field shows \"0,1\", because both indexes failed. 
Because the index 1 was not retried the number of failed Pods, indicated by the status field \"failed\" equa", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8754"} {"prompt": "Explain this bash code:\n\nkubectl delete jobs/job-backoff-limit-per-index-failindex", "completion": "This bash code Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Handling retriable and non-retriable pod failures with Pod failure policy | The cluster automatically cleans up the Pods.", "source": "https://kubernetes.io/docs/tasks/job/pod-failure-policy/", "id": "k8s_00338_code_explain_8755"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Name MUST be of form \"bootstrap-token-<token id>\"\n name: bootstrap-token-07401b\n namespace: kube-system\n\n# Type MUST be 'bootstrap.kubernetes.io/token'\ntype: bootstrap.kubernetes.io/token\nstringData:\n # Human readable description. Optional.\n description: \"The default bootstrap token generated by 'kubeadm init'.\"\n", "completion": " # Token ID and secret. Required.\n token-id: 07401b\n token-secret: f395accd246ae52d\n\n # Expiration. Optional.\n expiration: 2017-03-10T03:22:11Z\n\n # Allowed usages.\n usage-bootstrap-authentication: \"true\"\n usage-bootstrap-signing: \"true\"\n\n # Extra groups to authenticate the token as. Must start with \"system:bootstrappers:\"\n auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating with Bootstrap Tokens | The type of the secret must be bootstrap.kubernetes.io/token and the name must be bootstrap-token-<token id>. 
It must also exist in the kube-system namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/", "id": "k8s_00339_code_completion_8756"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Name MUST be of form \"bootstrap-token-<token id>\"\n name: bootstrap-token-07401b\n namespace: kube-system\n\n# Type MUST be 'bootstrap.kubernetes.io/token'\ntype: bootstrap.kubernetes.io/token\nstringData:\n # Human readable description. Optional.\n description: \"The default bootstrap token generated by 'kubeadm init'.\"\n\n # Token ID and secret. Required.\n token-id: 07401b\n token-secret: f395accd246ae52d\n\n # Expiration. Optional.\n expiration: 2017-03-10T03:22:11Z\n\n # Allowed usages.\n usage-bootstrap-authentication: \"true\"\n usage-bootstrap-signing: \"true\"\n\n # Extra groups to authenticate the token as. Must start with \"system:bootstrappers:\"\n auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress", "completion": "This yaml code Authenticating with Bootstrap Tokens | The type of the secret must be bootstrap.kubernetes.io/token and the name must be bootstrap-token-<token id>. It must also exist in the kube-system namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating with Bootstrap Tokens | The type of the secret must be bootstrap.kubernetes.io/token and the name must be bootstrap-token-<token id>. 
It must also exist in the kube-system namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/", "id": "k8s_00339_code_explain_8757"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: cluster-info\n namespace: kube-public\ndata:\n jws-kubeconfig-07401b: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U\n kubeconfig: |\n apiVersion: v1", "completion": " clusters:\n - cluster:\n certificate-authority-data: <really long certificate data>\n server: https://10.138.0.2:6443\n name: \"\"\n contexts: []\n current-context: \"\"\n kind: Config\n preferences: {}\n users: []", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating with Bootstrap Tokens | The kubeconfig member of the ConfigMap is a config file with only the cluster information filled out. The key thing being communicated here is the certificate-authority-data. This may be expanded in t", "source": "https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/", "id": "k8s_00339_code_completion_8758"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: cluster-info\n namespace: kube-public\ndata:\n jws-kubeconfig-07401b: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U\n kubeconfig: |\n apiVersion: v1\n clusters:\n - cluster:\n certificate-authority-data: <really long certificate data>\n server: https://10.138.0.2:6443\n name: \"\"\n contexts: []\n current-context: \"\"\n kind: Config\n preferences: {}\n users: []", "completion": "This yaml code Authenticating with Bootstrap Tokens | The kubeconfig member of the ConfigMap is a config file with only the cluster information filled out. The key thing being communicated here is the certificate-authority-data. 
This may be expanded in t", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating with Bootstrap Tokens | The kubeconfig member of the ConfigMap is a config file with only the cluster information filled out. The key thing being communicated here is the certificate-authority-data. This may be expanded in t", "source": "https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/", "id": "k8s_00339_code_explain_8759"} {"prompt": "Complete this text code:\n\ntarget prot opt source destination\nRETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL", "completion": "RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "language": "text", "type": "code-example", "context": "IP Masquerade Agent User Guide | By default, in GCE/Google Kubernetes Engine, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. 
If you are running in", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_completion_8768"} {"prompt": "Explain this text code:\n\ntarget prot opt source destination\nRETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "completion": "This text code IP Masquerade Agent User Guide | By default, in GCE/Google Kubernetes Engine, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in", "language": "text", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | By default, in GCE/Google Kubernetes Engine, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. 
If you are running in", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8769"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml", "completion": "This bash code IP Masquerade Agent User Guide | You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.", "language": "bash", "type": "kubectl-command_explanation", "context": "IP Masquerade Agent User Guide | You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8770"} {"prompt": "Explain this bash code:\n\nkubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true", "completion": "This bash code IP Masquerade Agent User Guide | More information can be found in the ip-masq-agent documentation here.", "language": "bash", "type": "kubectl-command_explanation", "context": "IP Masquerade Agent User Guide | More information can be found in the ip-masq-agent documentation here.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8771"} {"prompt": "Explain this text code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8\nresyncInterval: 60s", "completion": "This text code IP Masquerade Agent User Guide | Run the following command to add the configmap to your cluster:", "language": "text", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | Run the following command to add the configmap to your cluster:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8772"} {"prompt": "Explain this bash code:\n\nkubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system", "completion": "This bash 
code IP Masquerade Agent User Guide | This will update a file located at /etc/config/ip-masq-agent which is periodically checked every resyncInterval and applied to the cluster node. After the resync interval has expired, you should see t", "language": "bash", "type": "kubectl-command_explanation", "context": "IP Masquerade Agent User Guide | This will update a file located at /etc/config/ip-masq-agent which is periodically checked every resyncInterval and applied to the cluster node. After the resync interval has expired, you should see t", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8773"} {"prompt": "Complete this text code:\n\nChain IP-MASQ-AGENT (1 references)\ntarget prot opt source destination", "completion": "RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "language": "text", "type": "code-example", "context": "IP Masquerade Agent User Guide | By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. 
To have the ip-masq-agent ignore link local, you can set masqLinkL", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_completion_8774"} {"prompt": "Explain this text code:\n\nChain IP-MASQ-AGENT (1 references)\ntarget prot opt source destination\nRETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "completion": "This text code IP Masquerade Agent User Guide | By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set masqLinkL", "language": "text", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. 
To have the ip-masq-agent ignore link local, you can set masqLinkL", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8775"} {"prompt": "Complete this text code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8", "completion": "resyncInterval: 60s\nmasqLinkLocal: true", "language": "text", "type": "code-example", "context": "IP Masquerade Agent User Guide |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_completion_8776"} {"prompt": "Explain this text code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8\nresyncInterval: 60s\nmasqLinkLocal: true", "completion": "This text code IP Masquerade Agent User Guide |", "language": "text", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8777"} {"prompt": "Complete this none code:\n\ntarget prot opt source destination\nRETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL", "completion": "RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "language": "none", "type": "code-example", "context": "IP Masquerade Agent User Guide | By default, in GCE/Google Kubernetes Engine, if network policy is enabled or you are using a 
cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_completion_8778"} {"prompt": "Explain this none code:\n\ntarget prot opt source destination\nRETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "completion": "This none code IP Masquerade Agent User Guide | By default, in GCE/Google Kubernetes Engine, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in", "language": "none", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | By default, in GCE/Google Kubernetes Engine, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. 
If you are running in", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8779"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml", "completion": "This shell code IP Masquerade Agent User Guide | You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.", "language": "shell", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8780"} {"prompt": "Explain this shell code:\n\nkubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true", "completion": "This shell code IP Masquerade Agent User Guide | More information can be found in the ip-masq-agent documentation here.", "language": "shell", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | More information can be found in the ip-masq-agent documentation here.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8781"} {"prompt": "Explain this yaml code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8\nresyncInterval: 60s", "completion": "This yaml code IP Masquerade Agent User Guide | Run the following command to add the configmap to your cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "IP Masquerade Agent User Guide | Run the following command to add the configmap to your cluster:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8782"} {"prompt": "Explain this shell code:\n\nkubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system", "completion": 
"This shell code IP Masquerade Agent User Guide | This will update a file located at /etc/config/ip-masq-agent which is periodically checked every resyncInterval and applied to the cluster node. After the resync interval has expired, you should see t", "language": "shell", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | This will update a file located at /etc/config/ip-masq-agent which is periodically checked every resyncInterval and applied to the cluster node. After the resync interval has expired, you should see t", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8783"} {"prompt": "Complete this none code:\n\nChain IP-MASQ-AGENT (1 references)\ntarget prot opt source destination", "completion": "RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "language": "none", "type": "code-example", "context": "IP Masquerade Agent User Guide | By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. 
To have the ip-masq-agent ignore link local, you can set masqLinkL", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_completion_8784"} {"prompt": "Explain this none code:\n\nChain IP-MASQ-AGENT (1 references)\ntarget prot opt source destination\nRETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL\nRETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local\nMASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL", "completion": "This none code IP Masquerade Agent User Guide | By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set masqLinkL", "language": "none", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. 
To have the ip-masq-agent ignore link local, you can set masqLinkL", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8785"} {"prompt": "Complete this yaml code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8", "completion": "resyncInterval: 60s\nmasqLinkLocal: true", "language": "yaml", "type": "kubernetes-manifest", "context": "IP Masquerade Agent User Guide |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_completion_8786"} {"prompt": "Explain this yaml code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8\nresyncInterval: 60s\nmasqLinkLocal: true", "completion": "This yaml code IP Masquerade Agent User Guide |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "IP Masquerade Agent User Guide |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8787"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml", "completion": "This bash code IP Masquerade Agent User Guide | You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.", "language": "bash", "type": "kubectl-command_explanation", "context": "IP Masquerade Agent User Guide | You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8788"} {"prompt": "Explain this bash code:\n\nkubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true", "completion": "This bash code IP Masquerade Agent User Guide | More information can be found in the ip-masq-agent documentation here.", "language": "bash", "type": "kubectl-command_explanation", "context": "IP Masquerade Agent User Guide | More information can be found in the 
ip-masq-agent documentation here.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8789"} {"prompt": "Explain this text code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8\nresyncInterval: 60s", "completion": "This text code IP Masquerade Agent User Guide | Run the following command to add the configmap to your cluster:", "language": "text", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide | Run the following command to add the configmap to your cluster:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8790"} {"prompt": "Explain this bash code:\n\nkubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system", "completion": "This bash code IP Masquerade Agent User Guide | This will update a file located at /etc/config/ip-masq-agent which is periodically checked every resyncInterval and applied to the cluster node. After the resync interval has expired, you should see t", "language": "bash", "type": "kubectl-command_explanation", "context": "IP Masquerade Agent User Guide | This will update a file located at /etc/config/ip-masq-agent which is periodically checked every resyncInterval and applied to the cluster node. 
After the resync interval has expired, you should see t", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8791"} {"prompt": "Complete this text code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8", "completion": "resyncInterval: 60s\nmasqLinkLocal: true", "language": "text", "type": "code-example", "context": "IP Masquerade Agent User Guide |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_completion_8792"} {"prompt": "Explain this text code:\n\nnonMasqueradeCIDRs:\n - 10.0.0.0/8\nresyncInterval: 60s\nmasqLinkLocal: true", "completion": "This text code IP Masquerade Agent User Guide |", "language": "text", "type": "code-example_explanation", "context": "IP Masquerade Agent User Guide |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/", "id": "k8s_00342_code_explain_8793"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicy | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicy | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. 
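The ip-masq-agent iptables chain in the records above implements a simple decision: traffic to any CIDR in nonMasqueradeCIDRs (plus link-local 169.254.0.0/16, unless masqLinkLocal is enabled) hits a RETURN rule and is not SNATed, while everything else falls through to the final MASQUERADE rule. A minimal Python sketch of that decision logic, using only the standard ipaddress module; the function name and the default CIDR list are illustrative assumptions taken from the example rules, not the agent's built-in defaults:

```python
import ipaddress

# Non-masquerade CIDRs mirroring the example iptables rules above;
# an illustrative assumption, not the agent's compiled-in defaults.
NON_MASQ_CIDRS = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

def should_masquerade(dst_ip, non_masq_cidrs=NON_MASQ_CIDRS, masq_link_local=False):
    """Return True if traffic to dst_ip would reach the final MASQUERADE rule."""
    addr = ipaddress.ip_address(dst_ip)
    if not masq_link_local and addr in LINK_LOCAL:
        return False  # RETURN rule for 169.254.0.0/16 (skipped when masqLinkLocal: true)
    for cidr in non_masq_cidrs:
        if addr in ipaddress.ip_network(cidr):
            return False  # cluster-local destination: RETURN, no SNAT
    return True  # outbound traffic: MASQUERADE
```

Note the rule ordering matters in the real chain for the same reason the loop above checks the RETURN cases first: the catch-all MASQUERADE match must come after the cluster-local CIDR matches.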
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/", "id": "k8s_00343_code_explain_8794"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicy | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicy | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/", "id": "k8s_00343_code_explain_8795"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicy | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicy | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/", "id": "k8s_00343_code_explain_8796"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code ValidatingAdmissionPolicy | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicy | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/", "id": "k8s_00343_code_explain_8797"} {"prompt": "Explain this text code:\n\nvalidation.policy.admission.k8s.io/validation_failure", "completion": "This text code ValidatingAdmissionPolicy | Clients should expect to handle additional values by ignoring any values not recognized.", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicy | Clients should expect to handle additional values by ignoring any values not recognized.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/", "id": "k8s_00343_code_explain_8798"} {"prompt": "Explain this text code:\n\n\"validation.policy.admission.k8s.io/validation_failure\": \"[{\\\"message\\\": \\\"Invalid value\\\", \\\"policy\\\": \\\"policy.example.com\\\", \\\"binding\\\": \\\"policybinding.example.com\\\", \\\"expressionIndex\\\": \\\"1\\\", \\\"validationActions\\\": [\\\"Audit\\\"]}]\"", "completion": "This text code ValidatingAdmissionPolicy | Clients should expect to handle additional values by ignoring any values not recognized.", "language": "text", "type": "code-example_explanation", "context": "ValidatingAdmissionPolicy | Clients should expect to handle additional values by ignoring any values not recognized.", "source":
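The Exact/Equivalent distinction in a ValidatingAdmissionPolicy's matchPolicy can be sketched as follows: under Exact only the literal group/version named in the rule matches, while under Equivalent a request also matches when it modifies the same resource via another API group or version (e.g. deployments via apps/v1, apps/v1beta1, or extensions/v1beta1). A hedged illustration; the equivalence table below is hand-written example data, not something the API server exposes in this form:

```python
# Hypothetical equivalence map: resource -> all group/versions that can modify it.
# In a real cluster the API server derives this from resource registration.
EQUIVALENTS = {
    "deployments": {("apps", "v1"), ("apps", "v1beta1"), ("extensions", "v1beta1")},
}

def rule_matches(rule, request, match_policy="Equivalent"):
    """rule and request are (group, version, resource) tuples."""
    group, version, resource = rule
    req_group, req_version, req_resource = request
    if resource != req_resource:
        return False
    if match_policy == "Exact":
        return (req_group, req_version) == (group, version)
    # Equivalent: match if both group/versions can modify the same resource.
    equiv = EQUIVALENTS.get(resource, {(group, version)})
    return (group, version) in equiv and (req_group, req_version) in equiv
```

For an unknown resource the sketch falls back to Exact-style matching, which is a simplification chosen here, not documented API-server behavior.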
"https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/validating-admission-policy-v1/", "id": "k8s_00343_code_explain_8799"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: my-baseline-namespace\n labels:\n pod-security.kubernetes.io/enforce: baseline", "completion": " pod-security.kubernetes.io/enforce-version: v1.34\n\n # We are setting these to our _desired_ `enforce` level.\n pod-security.kubernetes.io/audit: restricted\n pod-security.kubernetes.io/audit-version: v1.34\n pod-security.kubernetes.io/warn: restricted\n pod-security.kubernetes.io/warn-version: v1.34", "language": "yaml", "type": "kubernetes-manifest", "context": "Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. The Pod Security Standard checks will still be run in dry run mode, giving you information", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_completion_8800"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: my-baseline-namespace\n labels:\n pod-security.kubernetes.io/enforce: baseline\n pod-security.kubernetes.io/enforce-version: v1.34\n\n # We are setting these to our _desired_ `enforce` level.\n pod-security.kubernetes.io/audit: restricted\n pod-security.kubernetes.io/audit-version: v1.34\n pod-security.kubernetes.io/warn: restricted\n pod-security.kubernetes.io/warn-version: v1.34", "completion": "This yaml code Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. 
The Pod Security Standard checks will still be run in dry run mode, giving you information", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. The Pod Security Standard checks will still be run in dry run mode, giving you information", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8801"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\n pod-security.kubernetes.io/enforce=baseline", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels | If you're just getting started with the Pod Security Standards, a suitable first step would be to configure all namespaces with audit annotations for a stricter level such as baseline:", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | If you're just getting started with the Pod Security Standards, a suitable first step would be to configure all namespaces with audit annotations for a stricter level such as baseline:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8802"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns --all \\\n pod-security.kubernetes.io/audit=baseline \\\n pod-security.kubernetes.io/warn=baseline", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels | Note that this is not setting an enforce level, so that namespaces that haven't been explicitly evaluated can be distinguished. 
You can list namespaces without an explicitly set enforce level using th", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | Note that this is not setting an enforce level, so that namespaces that haven't been explicitly evaluated can be distinguished. You can list namespaces without an explicitly set enforce level using th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8803"} {"prompt": "Explain this bash code:\n\nkubectl get namespaces --selector='!pod-security.kubernetes.io/enforce'", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels | You can update a specific namespace as well. This command adds the enforce=restricted policy to my-existing-namespace, pinning the restricted policy version to v1.34.", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | You can update a specific namespace as well. 
This command adds the enforce=restricted policy to my-existing-namespace, pinning the restricted policy version to v1.34.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8804"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns my-existing-namespace \\\n pod-security.kubernetes.io/enforce=restricted \\\n pod-security.kubernetes.io/enforce-version=v1.34", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels |", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8805"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: my-baseline-namespace\n labels:\n pod-security.kubernetes.io/enforce: baseline", "completion": " pod-security.kubernetes.io/enforce-version: v1.34\n\n # We are setting these to our _desired_ `enforce` level.\n pod-security.kubernetes.io/audit: restricted\n pod-security.kubernetes.io/audit-version: v1.34\n pod-security.kubernetes.io/warn: restricted\n pod-security.kubernetes.io/warn-version: v1.34", "language": "yaml", "type": "kubernetes-manifest", "context": "Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. 
The Pod Security Standard checks will still be run in dry run mode, giving you information", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_completion_8806"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: my-baseline-namespace\n labels:\n pod-security.kubernetes.io/enforce: baseline\n pod-security.kubernetes.io/enforce-version: v1.34\n\n # We are setting these to our _desired_ `enforce` level.\n pod-security.kubernetes.io/audit: restricted\n pod-security.kubernetes.io/audit-version: v1.34\n pod-security.kubernetes.io/warn: restricted\n pod-security.kubernetes.io/warn-version: v1.34", "completion": "This yaml code Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. The Pod Security Standard checks will still be run in dry run mode, giving you information", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. 
The Pod Security Standard checks will still be run in dry run mode, giving you information", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8807"} {"prompt": "Explain this shell code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\n pod-security.kubernetes.io/enforce=baseline", "completion": "This shell code Enforce Pod Security Standards with Namespace Labels | If you're just getting started with the Pod Security Standards, a suitable first step would be to configure all namespaces with audit annotations for a stricter level such as baseline:", "language": "shell", "type": "code-example_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | If you're just getting started with the Pod Security Standards, a suitable first step would be to configure all namespaces with audit annotations for a stricter level such as baseline:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8808"} {"prompt": "Explain this shell code:\n\nkubectl label --overwrite ns --all \\\n pod-security.kubernetes.io/audit=baseline \\\n pod-security.kubernetes.io/warn=baseline", "completion": "This shell code Enforce Pod Security Standards with Namespace Labels | Note that this is not setting an enforce level, so that namespaces that haven't been explicitly evaluated can be distinguished. You can list namespaces without an explicitly set enforce level using th", "language": "shell", "type": "code-example_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | Note that this is not setting an enforce level, so that namespaces that haven't been explicitly evaluated can be distinguished. 
You can list namespaces without an explicitly set enforce level using th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8809"} {"prompt": "Explain this shell code:\n\nkubectl get namespaces --selector='!pod-security.kubernetes.io/enforce'", "completion": "This shell code Enforce Pod Security Standards with Namespace Labels | You can update a specific namespace as well. This command adds the enforce=restricted policy to my-existing-namespace, pinning the restricted policy version to v1.34.", "language": "shell", "type": "code-example_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | You can update a specific namespace as well. This command adds the enforce=restricted policy to my-existing-namespace, pinning the restricted policy version to v1.34.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8810"} {"prompt": "Explain this shell code:\n\nkubectl label --overwrite ns my-existing-namespace \\\n pod-security.kubernetes.io/enforce=restricted \\\n pod-security.kubernetes.io/enforce-version=v1.34", "completion": "This shell code Enforce Pod Security Standards with Namespace Labels |", "language": "shell", "type": "code-example_explanation", "context": "Enforce Pod Security Standards with Namespace Labels |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8811"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: my-baseline-namespace\n labels:\n pod-security.kubernetes.io/enforce: baseline", "completion": " pod-security.kubernetes.io/enforce-version: v1.34\n\n # We are setting these to our _desired_ `enforce` level.\n pod-security.kubernetes.io/audit: restricted\n pod-security.kubernetes.io/audit-version: v1.34\n 
pod-security.kubernetes.io/warn: restricted\n pod-security.kubernetes.io/warn-version: v1.34", "language": "yaml", "type": "kubernetes-manifest", "context": "Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. The Pod Security Standard checks will still be run in dry run mode, giving you information", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_completion_8812"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: my-baseline-namespace\n labels:\n pod-security.kubernetes.io/enforce: baseline\n pod-security.kubernetes.io/enforce-version: v1.34\n\n # We are setting these to our _desired_ `enforce` level.\n pod-security.kubernetes.io/audit: restricted\n pod-security.kubernetes.io/audit-version: v1.34\n pod-security.kubernetes.io/warn: restricted\n pod-security.kubernetes.io/warn-version: v1.34", "completion": "This yaml code Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. The Pod Security Standard checks will still be run in dry run mode, giving you information", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. 
The Pod Security Standard checks will still be run in dry run mode, giving you information", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8813"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\n pod-security.kubernetes.io/enforce=baseline", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels | If you're just getting started with the Pod Security Standards, a suitable first step would be to configure all namespaces with audit annotations for a stricter level such as baseline:", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | If you're just getting started with the Pod Security Standards, a suitable first step would be to configure all namespaces with audit annotations for a stricter level such as baseline:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8814"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns --all \\\n pod-security.kubernetes.io/audit=baseline \\\n pod-security.kubernetes.io/warn=baseline", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels | Note that this is not setting an enforce level, so that namespaces that haven't been explicitly evaluated can be distinguished. You can list namespaces without an explicitly set enforce level using th", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | Note that this is not setting an enforce level, so that namespaces that haven't been explicitly evaluated can be distinguished. 
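The pod-security namespace labels in the manifests above resolve independently per mode (enforce, audit, warn), each with an optional `-version` pin that defaults to `latest`; an unset mode applies no restriction, which the Pod Security documentation treats as the `privileged` level. A small sketch of that resolution; the helper name is made up for illustration:

```python
PREFIX = "pod-security.kubernetes.io/"

def effective_pod_security(labels):
    """Resolve (level, version) for each Pod Security Admission mode from namespace labels."""
    result = {}
    for mode in ("enforce", "audit", "warn"):
        # Unset mode => no restriction, i.e. the privileged level.
        level = labels.get(PREFIX + mode, "privileged")
        version = labels.get(PREFIX + mode + "-version", "latest")
        result[mode] = (level, version)
    return result
```

Applied to the my-baseline-namespace example above, this yields baseline enforcement pinned to v1.34 alongside restricted audit/warn, which is exactly the staged-rollout pattern the guide recommends.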
You can list namespaces without an explicitly set enforce level using th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8815"} {"prompt": "Explain this bash code:\n\nkubectl get namespaces --selector='!pod-security.kubernetes.io/enforce'", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels | You can update a specific namespace as well. This command adds the enforce=restricted policy to my-existing-namespace, pinning the restricted policy version to v1.34.", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels | You can update a specific namespace as well. This command adds the enforce=restricted policy to my-existing-namespace, pinning the restricted policy version to v1.34.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8816"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite ns my-existing-namespace \\\n pod-security.kubernetes.io/enforce=restricted \\\n pod-security.kubernetes.io/enforce-version=v1.34", "completion": "This bash code Enforce Pod Security Standards with Namespace Labels |", "language": "bash", "type": "kubectl-command_explanation", "context": "Enforce Pod Security Standards with Namespace Labels |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/", "id": "k8s_00345_code_explain_8817"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This bash code kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/", "id": "k8s_00347_code_explain_8818"} {"prompt": "Explain this text code:\n\n# Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role\n kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This text code kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/", "id": "k8s_00347_code_explain_8819"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This bash code kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. 
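The repeatable --user, --group, and --serviceaccount flags of `kubectl create clusterrolebinding` each contribute one entry to the subjects list of the resulting ClusterRoleBinding, with --clusterrole becoming the roleRef. A simplified reconstruction of that mapping for illustration, not kubectl's actual implementation:

```python
def build_cluster_role_binding(name, clusterrole, users=(), groups=(), service_accounts=()):
    """Mirror kubectl create clusterrolebinding: one subject per --user/--group/--serviceaccount."""
    subjects = []
    for user in users:
        subjects.append({"apiGroup": "rbac.authorization.k8s.io", "kind": "User", "name": user})
    for group in groups:
        subjects.append({"apiGroup": "rbac.authorization.k8s.io", "kind": "Group", "name": group})
    for sa in service_accounts:  # given as "namespace:serviceaccountname", as on the CLI
        namespace, sa_name = sa.split(":", 1)
        subjects.append({"kind": "ServiceAccount", "namespace": namespace, "name": sa_name})
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRoleBinding",
        "metadata": {"name": name},
        "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                    "kind": "ClusterRole", "name": clusterrole},
        "subjects": subjects,
    }
```

The worked example from the records above, `--clusterrole=cluster-admin --user=user1 --user=user2 --group=group1`, therefore produces three subjects bound to the cluster-admin ClusterRole.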
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/", "id": "k8s_00347_code_explain_8820"} {"prompt": "Explain this text code:\n\n# Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role\n kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This text code kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create clusterrolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrolebinding/", "id": "k8s_00347_code_explain_8821"} {"prompt": "Complete this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "text", "type": "code-example", "context": "Mixed Version 
Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/mixed-version-proxy/", "id": "k8s_00348_code_completion_8822"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This text code Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "text", "type": "code-example_explanation", "context": "Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/mixed-version-proxy/", "id": "k8s_00348_code_explain_8823"} {"prompt": "Complete this shell code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this 
feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "shell", "type": "code-example", "context": "Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/mixed-version-proxy/", "id": "k8s_00348_code_completion_8824"} {"prompt": "Explain this shell code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This shell code Mixed Version Proxy | The source kube-apiserver 
reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "shell", "type": "code-example_explanation", "context": "Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/mixed-version-proxy/", "id": "k8s_00348_code_explain_8825"} {"prompt": "Complete this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "text", "type": "code-example", "context": "Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/mixed-version-proxy/", "id": "k8s_00348_code_completion_8826"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this 
feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This text code Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "text", "type": "code-example_explanation", "context": "Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/mixed-version-proxy/", "id": "k8s_00348_code_explain_8827"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default", "completion": "spec:\n containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "language": "yaml", "type": "pod-definition", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8828"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default\nspec:\n 
containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "completion": "This yaml code Debugging DNS Resolution | Use that manifest to create a Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8829"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml", "completion": "This bash code Debugging DNS Resolution | …and verify its status:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | …and verify its status:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8830"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndnsutils 1/1 Running 0 <some-time>", "completion": "This text code Debugging DNS Resolution | Once that Pod is running, you can exec nslookup in that environment. If you see something like the following, DNS is working correctly.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Once that Pod is running, you can exec nslookup in that environment. 
If you see something like the following, DNS is working correctly.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8831"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This bash code Debugging DNS Resolution | If the nslookup command fails, check the following:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | If the nslookup command fails, check the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8832"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10", "completion": "\nName: kubernetes.default\nAddress 1: 10.0.0.1", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | If the nslookup command fails, check the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8833"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10\n\nName: kubernetes.default\nAddress 1: 10.0.0.1", "completion": "This text code Debugging DNS Resolution | If the nslookup command fails, check the following:", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If the nslookup command fails, check the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8834"} {"prompt": "Explain this text code:\n\nsearch default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal\nnameserver 10.0.0.10\noptions ndots:5", "completion": "This text code Debugging DNS Resolution | Errors such as the following indicate a problem with the CoreDNS (or kube-dns) add-on or with associated Services:", "language": 
"text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Errors such as the following indicate a problem with the CoreDNS (or kube-dns) add-on or with associated Services:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8835"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This bash code Debugging DNS Resolution | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8836"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10", "completion": "\nnslookup: can't resolve 'kubernetes.default'", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8837"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10\n\nnslookup: can't resolve 'kubernetes.default'", "completion": "This text code Debugging DNS Resolution | or", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8838"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This bash code Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", 
"id": "k8s_00349_code_explain_8839"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local", "completion": "\nnslookup: can't resolve 'kubernetes.default'", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8840"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nnslookup: can't resolve 'kubernetes.default'", "completion": "This text code Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8841"} {"prompt": "Explain this bash code:\n\nkubectl get pods --namespace=kube-system -l k8s-app=kube-dns", "completion": "This bash code Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8842"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\n...", "completion": 
"coredns-7b96bf9f76-5hsxb 1/1 Running 0 1h\ncoredns-7b96bf9f76-mvmmt 1/1 Running 0 1h\n...", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8843"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\n...\ncoredns-7b96bf9f76-5hsxb 1/1 Running 0 1h\ncoredns-7b96bf9f76-mvmmt 1/1 Running 0 1h\n...", "completion": "This text code Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8844"} {"prompt": "Explain this bash code:\n\nkubectl logs --namespace=kube-system -l k8s-app=kube-dns", "completion": "This bash code Debugging DNS Resolution | Here is an example of a healthy CoreDNS log:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | Here is an example of a healthy CoreDNS log:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8845"} {"prompt": "Complete this text code:\n\n.:53\n2018/08/15 14:37:17 [INFO] CoreDNS-1.2.2\n2018/08/15 14:37:17 [INFO] linux/amd64, go1.10.3, 2e322f6", "completion": 
"CoreDNS-1.2.2\nlinux/amd64, go1.10.3, 2e322f6\n2018/08/15 14:37:17 [INFO] plugin/reload: Running configuration MD5 = 24e6c59e83ce706f07bcc82c31b1ea1c", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | See if there are any suspicious or unexpected messages in the logs.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8846"} {"prompt": "Explain this text code:\n\n.:53\n2018/08/15 14:37:17 [INFO] CoreDNS-1.2.2\n2018/08/15 14:37:17 [INFO] linux/amd64, go1.10.3, 2e322f6\nCoreDNS-1.2.2\nlinux/amd64, go1.10.3, 2e322f6\n2018/08/15 14:37:17 [INFO] plugin/reload: Running configuration MD5 = 24e6c59e83ce706f07bcc82c31b1ea1c", "completion": "This text code Debugging DNS Resolution | See if there are any suspicious or unexpected messages in the logs.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | See if there are any suspicious or unexpected messages in the logs.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8847"} {"prompt": "Complete this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n...", "completion": "kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 1h\n...", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | If you have created the Service, or if it should have been created by default but does not appear, see debugging Services for more information.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8848"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n...\nkube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 1h\n...", "completion": "This text code Debugging DNS Resolution | If you have created the Service, or if it should have been created by default but does not appear, see 
debugging Services for more information.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you have created the Service, or if it should have been created by default but does not appear, see debugging Services for more information.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8849"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l k8s.io/service-name=kube-dns --namespace=kube-system", "completion": "This bash code Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8850"} {"prompt": "Explain this text code:\n\nNAME ADDRESSTYPE PORTS ENDPOINTS AGE\nkube-dns-zxoja IPv4 53 10.180.3.17,10.180.3.17 1h", "completion": "This text code Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8851"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n log\n errors\n health", "completion": " kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8852"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n log\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "completion": "This yaml code Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8853"} {"prompt": "Complete this text code:\n\n.:53\n2018/08/15 14:37:15 [INFO] CoreDNS-1.2.0\n2018/08/15 14:37:15 [INFO] linux/amd64, go1.10.3, 2e322f6\nCoreDNS-1.2.0", "completion": "linux/amd64, go1.10.3, 2e322f6\n2018/09/07 15:29:04 [INFO] plugin/reload: Running configuration MD5 = 162475cdf272d8aa601e6fe67a6ad42f\n2018/09/07 15:29:04 [INFO] Reloading complete\n172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 \"A IN kubernetes.default.svc.cluster.local. 
udp 54 false 512\" NOERROR qr,aa,rd,ra 106 0.000066649s", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | CoreDNS must be able to list service and endpointslice related resources to properly resolve service names.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8854"} {"prompt": "Explain this text code:\n\n.:53\n2018/08/15 14:37:15 [INFO] CoreDNS-1.2.0\n2018/08/15 14:37:15 [INFO] linux/amd64, go1.10.3, 2e322f6\nCoreDNS-1.2.0\nlinux/amd64, go1.10.3, 2e322f6\n2018/09/07 15:29:04 [INFO] plugin/reload: Running configuration MD5 = 162475cdf272d8aa601e6fe67a6ad42f\n2018/09/07 15:29:04 [INFO] Reloading complete\n172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 \"A IN kubernetes.default.svc.cluster.local. udp 54 false 512\" NOERROR qr,aa,rd,ra 106 0.000066649s", "completion": "This text code Debugging DNS Resolution | CoreDNS must be able to list service and endpointslice related resources to properly resolve service names.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | CoreDNS must be able to list service and endpointslice related resources to properly resolve service names.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8855"} {"prompt": "Explain this text code:\n\n2022-03-18T07:12:15.699431183Z [INFO] 10.96.144.227:52299 - 3686 \"A IN serverproxy.contoso.net.cluster.local. 
udp 52 false 512\" SERVFAIL qr,aa,rd 145 0.000091221s", "completion": "This text code Debugging DNS Resolution | First, get the current ClusterRole of system:coredns:", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | First, get the current ClusterRole of system:coredns:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8856"} {"prompt": "Explain this bash code:\n\nkubectl describe clusterrole system:coredns -n kube-system", "completion": "This bash code Debugging DNS Resolution | Expected output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | Expected output:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8857"} {"prompt": "Complete this text code:\n\nPolicyRule:\n Resources Non-Resource URLs Resource Names Verbs\n --------- ----------------- -------------- -----\n endpoints [] [] [list watch]", "completion": " namespaces [] [] [list watch]\n pods [] [] [list watch]\n services [] [] [list watch]\n endpointslices.discovery.k8s.io [] [] [list watch]", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | If any permissions are missing, edit the ClusterRole to add them:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8858"} {"prompt": "Explain this text code:\n\nPolicyRule:\n Resources Non-Resource URLs Resource Names Verbs\n --------- ----------------- -------------- -----\n endpoints [] [] [list watch]\n namespaces [] [] [list watch]\n pods [] [] [list watch]\n services [] [] [list watch]\n endpointslices.discovery.k8s.io [] [] [list watch]", "completion": "This text code Debugging DNS Resolution | If any permissions are missing, edit the ClusterRole to add them:", "language": "text", "type": 
"code-example_explanation", "context": "Debugging DNS Resolution | If any permissions are missing, edit the ClusterRole to add them:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8859"} {"prompt": "Explain this bash code:\n\nkubectl edit clusterrole system:coredns -n kube-system", "completion": "This bash code Debugging DNS Resolution | Example insertion of EndpointSlices permissions:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | Example insertion of EndpointSlices permissions:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8860"} {"prompt": "Complete this text code:\n\n...\n- apiGroups:\n - discovery.k8s.io\n resources:", "completion": " - endpointslices\n verbs:\n - list\n - watch\n...", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | DNS queries that don't specify a namespace are limited to the pod's namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8861"} {"prompt": "Explain this text code:\n\n...\n- apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - list\n - watch\n...", "completion": "This text code Debugging DNS Resolution | DNS queries that don't specify a namespace are limited to the pod's namespace.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | DNS queries that don't specify a namespace are limited to the pod's namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8862"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup <service-name>", "completion": "This bash code Debugging DNS Resolution | This query specifies the namespace:", "language": "bash", 
"type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | This query specifies the namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8863"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace>", "completion": "This bash code Debugging DNS Resolution | To learn more about name resolution, see DNS for Services and Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | To learn more about name resolution, see DNS for Services and Pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8864"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default", "completion": "spec:\n containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "language": "yaml", "type": "pod-definition", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8865"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default\nspec:\n containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "completion": "This yaml code Debugging DNS Resolution | Use that manifest to create a Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8866"} {"prompt": "Explain this shell code:\n\nkubectl apply -f 
https://k8s.io/examples/admin/dns/dnsutils.yaml", "completion": "This shell code Debugging DNS Resolution | …and verify its status:", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | …and verify its status:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8867"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndnsutils 1/1 Running 0 <some-time>", "completion": "This text code Debugging DNS Resolution | Once that Pod is running, you can exec nslookup in that environment. If you see something like the following, DNS is working correctly.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Once that Pod is running, you can exec nslookup in that environment. If you see something like the following, DNS is working correctly.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8868"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This shell code Debugging DNS Resolution | If the nslookup command fails, check the following:", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If the nslookup command fails, check the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8869"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10", "completion": "\nName: kubernetes.default\nAddress 1: 10.0.0.1", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | If the nslookup command fails, check the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8870"} {"prompt": "Explain this text 
code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10\n\nName: kubernetes.default\nAddress 1: 10.0.0.1", "completion": "This text code Debugging DNS Resolution | If the nslookup command fails, check the following:", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If the nslookup command fails, check the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8871"} {"prompt": "Explain this text code:\n\nsearch default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal\nnameserver 10.0.0.10\noptions ndots:5", "completion": "This text code Debugging DNS Resolution | Errors such as the following indicate a problem with the CoreDNS (or kube-dns) add-on or with associated Services:", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Errors such as the following indicate a problem with the CoreDNS (or kube-dns) add-on or with associated Services:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8872"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This shell code Debugging DNS Resolution | or", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8873"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10", "completion": "\nnslookup: can't resolve 'kubernetes.default'", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8874"} {"prompt": "Explain this text code:\n\nServer: 
10.0.0.10\nAddress 1: 10.0.0.10\n\nnslookup: can't resolve 'kubernetes.default'", "completion": "This text code Debugging DNS Resolution | or", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8875"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This shell code Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8876"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local", "completion": "\nnslookup: can't resolve 'kubernetes.default'", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8877"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nnslookup: can't resolve 'kubernetes.default'", "completion": "This text code Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8878"} {"prompt": "Explain this 
shell code:\n\nkubectl get pods --namespace=kube-system -l k8s-app=kube-dns", "completion": "This shell code Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8879"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\n...", "completion": "coredns-7b96bf9f76-5hsxb 1/1 Running 0 1h\ncoredns-7b96bf9f76-mvmmt 1/1 Running 0 1h\n...", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8880"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\n...\ncoredns-7b96bf9f76-5hsxb 1/1 Running 0 1h\ncoredns-7b96bf9f76-mvmmt 1/1 Running 0 1h\n...", "completion": "This text code Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in 
your current environment and you will have to deploy it manually.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8881"} {"prompt": "Explain this shell code:\n\nkubectl logs --namespace=kube-system -l k8s-app=kube-dns", "completion": "This shell code Debugging DNS Resolution | Here is an example of a healthy CoreDNS log:", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Here is an example of a healthy CoreDNS log:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8882"} {"prompt": "Complete this text code:\n\n.:53\n2018/08/15 14:37:17 [INFO] CoreDNS-1.2.2\n2018/08/15 14:37:17 [INFO] linux/amd64, go1.10.3, 2e322f6", "completion": "CoreDNS-1.2.2\nlinux/amd64, go1.10.3, 2e322f6\n2018/08/15 14:37:17 [INFO] plugin/reload: Running configuration MD5 = 24e6c59e83ce706f07bcc82c31b1ea1c", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | See if there are any suspicious or unexpected messages in the logs.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8883"} {"prompt": "Explain this text code:\n\n.:53\n2018/08/15 14:37:17 [INFO] CoreDNS-1.2.2\n2018/08/15 14:37:17 [INFO] linux/amd64, go1.10.3, 2e322f6\nCoreDNS-1.2.2\nlinux/amd64, go1.10.3, 2e322f6\n2018/08/15 14:37:17 [INFO] plugin/reload: Running configuration MD5 = 24e6c59e83ce706f07bcc82c31b1ea1c", "completion": "This text code Debugging DNS Resolution | See if there are any suspicious or unexpected messages in the logs.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | See if there are any suspicious or unexpected messages in the logs.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8884"} {"prompt": 
"Complete this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n...", "completion": "kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 1h\n...", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | If you have created the Service or in the case it should be created by default but it does not appear, see debugging Services for more information.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8885"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n...\nkube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 1h\n...", "completion": "This text code Debugging DNS Resolution | If you have created the Service or in the case it should be created by default but it does not appear, see debugging Services for more information.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you have created the Service or in the case it should be created by default but it does not appear, see debugging Services for more information.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8886"} {"prompt": "Explain this shell code:\n\nkubectl get endpointslices -l k8s.io/service-name=kube-dns --namespace=kube-system", "completion": "This shell code Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8887"} {"prompt": "Explain this text code:\n\nNAME ADDRESSTYPE PORTS ENDPOINTS AGE\nkube-dns-zxoja IPv4 53 10.180.3.17,10.180.3.17 1h", 
"completion": "This text code Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8888"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n log\n errors\n health", "completion": " kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Debugging DNS Resolution | After saving the changes, it may take up to minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8889"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n log\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "completion": "This yaml code Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8890"} {"prompt": "Complete this text code:\n\n.:53\n2018/08/15 14:37:15 [INFO] CoreDNS-1.2.0\n2018/08/15 14:37:15 [INFO] linux/amd64, go1.10.3, 2e322f6\nCoreDNS-1.2.0", "completion": "linux/amd64, go1.10.3, 2e322f6\n2018/09/07 15:29:04 [INFO] plugin/reload: Running configuration MD5 = 162475cdf272d8aa601e6fe67a6ad42f\n2018/09/07 15:29:04 [INFO] Reloading complete\n172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 \"A IN kubernetes.default.svc.cluster.local. udp 54 false 512\" NOERROR qr,aa,rd,ra 106 0.000066649s", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | CoreDNS must be able to list service and endpointslice related resources to properly resolve service names.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8891"} {"prompt": "Explain this text code:\n\n.:53\n2018/08/15 14:37:15 [INFO] CoreDNS-1.2.0\n2018/08/15 14:37:15 [INFO] linux/amd64, go1.10.3, 2e322f6\nCoreDNS-1.2.0\nlinux/amd64, go1.10.3, 2e322f6\n2018/09/07 15:29:04 [INFO] plugin/reload: Running configuration MD5 = 162475cdf272d8aa601e6fe67a6ad42f\n2018/09/07 15:29:04 [INFO] Reloading complete\n172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 \"A IN kubernetes.default.svc.cluster.local. 
udp 54 false 512\" NOERROR qr,aa,rd,ra 106 0.000066649s", "completion": "This text code Debugging DNS Resolution | CoreDNS must be able to list service and endpointslice related resources to properly resolve service names.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | CoreDNS must be able to list service and endpointslice related resources to properly resolve service names.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8892"} {"prompt": "Explain this text code:\n\n2022-03-18T07:12:15.699431183Z [INFO] 10.96.144.227:52299 - 3686 \"A IN serverproxy.contoso.net.cluster.local. udp 52 false 512\" SERVFAIL qr,aa,rd 145 0.000091221s", "completion": "This text code Debugging DNS Resolution | First, get the current ClusterRole of system:coredns:", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | First, get the current ClusterRole of system:coredns:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8893"} {"prompt": "Explain this shell code:\n\nkubectl describe clusterrole system:coredns -n kube-system", "completion": "This shell code Debugging DNS Resolution | Expected output:", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Expected output:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8894"} {"prompt": "Complete this text code:\n\nPolicyRule:\n Resources Non-Resource URLs Resource Names Verbs\n --------- ----------------- -------------- -----\n endpoints [] [] [list watch]", "completion": " namespaces [] [] [list watch]\n pods [] [] [list watch]\n services [] [] [list watch]\n endpointslices.discovery.k8s.io [] [] [list watch]", "language": "text", "type": "code-example", "context": "Debugging DNS 
Resolution | If any permissions are missing, edit the ClusterRole to add them:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8895"} {"prompt": "Explain this text code:\n\nPolicyRule:\n Resources Non-Resource URLs Resource Names Verbs\n --------- ----------------- -------------- -----\n endpoints [] [] [list watch]\n namespaces [] [] [list watch]\n pods [] [] [list watch]\n services [] [] [list watch]\n endpointslices.discovery.k8s.io [] [] [list watch]", "completion": "This text code Debugging DNS Resolution | If any permissions are missing, edit the ClusterRole to add them:", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | If any permissions are missing, edit the ClusterRole to add them:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8896"} {"prompt": "Explain this shell code:\n\nkubectl edit clusterrole system:coredns -n kube-system", "completion": "This shell code Debugging DNS Resolution | Example insertion of EndpointSlices permissions:", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | Example insertion of EndpointSlices permissions:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8897"} {"prompt": "Complete this text code:\n\n...\n- apiGroups:\n - discovery.k8s.io\n resources:", "completion": " - endpointslices\n verbs:\n - list\n - watch\n...", "language": "text", "type": "code-example", "context": "Debugging DNS Resolution | DNS queries that don't specify a namespace are limited to the pod's namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8898"} {"prompt": "Explain this text code:\n\n...\n- apiGroups:\n - discovery.k8s.io\n resources:\n - 
endpointslices\n verbs:\n - list\n - watch\n...", "completion": "This text code Debugging DNS Resolution | DNS queries that don't specify a namespace are limited to the pod's namespace.", "language": "text", "type": "code-example_explanation", "context": "Debugging DNS Resolution | DNS queries that don't specify a namespace are limited to the pod's namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8899"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t dnsutils -- nslookup <service-name>", "completion": "This shell code Debugging DNS Resolution | This query specifies the namespace:", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | This query specifies the namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8900"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace>", "completion": "This shell code Debugging DNS Resolution | To learn more about name resolution, see DNS for Services and Pods.", "language": "shell", "type": "code-example_explanation", "context": "Debugging DNS Resolution | To learn more about name resolution, see DNS for Services and Pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8901"} {"prompt": "Complete this yaml code:\n\nadmin/dns/dnsutils.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default", "completion": "spec:\n containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "language": "yaml", "type": "pod-definition", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", 
"id": "k8s_00349_code_completion_8902"} {"prompt": "Explain this yaml code:\n\nadmin/dns/dnsutils.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default\nspec:\n containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "completion": "This yaml code Debugging DNS Resolution | Use that manifest to create a Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8903"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default", "completion": "spec:\n containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "language": "yaml", "type": "pod-definition", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8904"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dnsutils\n namespace: default\nspec:\n containers:\n - name: dnsutils\n image: registry.k8s.io/e2e-test-images/agnhost:2.39\n imagePullPolicy: IfNotPresent\n restartPolicy: Always", "completion": "This yaml code Debugging DNS Resolution | Use that manifest to create a Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Debugging DNS Resolution | Use that manifest to create a Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8905"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml", "completion": "This bash code Debugging DNS 
Resolution | …and verify its status:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | …and verify its status:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8906"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This bash code Debugging DNS Resolution | If the nslookup command fails, check the following:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | If the nslookup command fails, check the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8907"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This bash code Debugging DNS Resolution | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8908"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup kubernetes.default", "completion": "This bash code Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | Use the kubectl get pods command to verify that the DNS pod is running.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8909"} {"prompt": "Explain this bash code:\n\nkubectl get pods --namespace=kube-system -l k8s-app=kube-dns", "completion": "This bash code Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your 
current environment and you will have to deploy it manually.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8910"} {"prompt": "Explain this bash code:\n\nkubectl logs --namespace=kube-system -l k8s-app=kube-dns", "completion": "This bash code Debugging DNS Resolution | Here is an example of a healthy CoreDNS log:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | Here is an example of a healthy CoreDNS log:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8911"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l k8s.io/service-name=kube-dns --namespace=kube-system", "completion": "This bash code Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | If you do not see the endpoints, see the endpoints section in the debugging Services documentation.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8912"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n log\n errors\n health", "completion": " kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_completion_8913"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n log\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "completion": "This yaml code Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Debugging DNS Resolution | After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8914"} {"prompt": "Explain this bash code:\n\nkubectl describe clusterrole system:coredns -n kube-system", "completion": "This bash code Debugging DNS Resolution | Expected output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | Expected output:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8915"} {"prompt": "Explain this bash code:\n\nkubectl edit clusterrole system:coredns -n kube-system", "completion": "This bash code Debugging DNS Resolution | Example insertion of EndpointSlices permissions:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Debugging DNS Resolution | Example insertion of EndpointSlices permissions:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8916"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup <service-name>", "completion": "This bash code Debugging DNS Resolution | This query specifies the namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | This query specifies the namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8917"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace>", "completion": "This bash code Debugging DNS Resolution | To learn more about name resolution, see DNS for Services and Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debugging DNS Resolution | To learn more about name resolution, see DNS for Services and Pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/", "id": "k8s_00349_code_explain_8918"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Adding entries to Pod /etc/hosts with HostAliases | The hosts file content would look like this:", "language": "text", "type": "code-example_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | The hosts file content would look like this:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8919"} {"prompt": "Complete this text code:\n\n# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost ip6-loopback\nfe00::0\tip6-localnet", "completion": 
"fe00::0\tip6-mcastprefix\nfe00::1\tip6-allnodes\nfe00::2\tip6-allrouters\n10.200.0.4\tnginx", "language": "text", "type": "code-example", "context": "Adding entries to Pod /etc/hosts with HostAliases | By default, the hosts file only includes IPv4 and IPv6 boilerplates like localhost and its own hostname.", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_completion_8920"} {"prompt": "Explain this text code:\n\n# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost ip6-loopback\nfe00::0\tip6-localnet\nfe00::0\tip6-mcastprefix\nfe00::1\tip6-allnodes\nfe00::2\tip6-allrouters\n10.200.0.4\tnginx", "completion": "This text code Adding entries to Pod /etc/hosts with HostAliases | By default, the hosts file only includes IPv4 and IPv6 boilerplates like localhost and its own hostname.", "language": "text", "type": "code-example_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | By default, the hosts file only includes IPv4 and IPv6 boilerplates like localhost and its own hostname.", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8921"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostaliases-pod\nspec:\n restartPolicy: Never\n hostAliases:\n - ip: \"127.0.0.1\"\n hostnames:\n - \"foo.local\"\n - \"bar.local\"", "completion": " - ip: \"10.1.2.3\"\n hostnames:\n - \"foo.remote\"\n - \"bar.remote\"\n containers:\n - name: cat-hosts\n image: busybox:1.28\n command:\n - cat\n args:\n - \"/etc/hosts\"", "language": "yaml", "type": "pod-definition", "context": "Adding entries to Pod /etc/hosts with HostAliases | You can start a Pod with that configuration by running:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_completion_8922"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: hostaliases-pod\nspec:\n restartPolicy: Never\n hostAliases:\n - ip: \"127.0.0.1\"\n hostnames:\n - \"foo.local\"\n - \"bar.local\"\n - ip: \"10.1.2.3\"\n hostnames:\n - \"foo.remote\"\n - \"bar.remote\"\n containers:\n - name: cat-hosts\n image: busybox:1.28\n command:\n - cat\n args:\n - \"/etc/hosts\"", "completion": "This yaml code Adding entries to Pod /etc/hosts with HostAliases | You can start a Pod with that configuration by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | You can start a Pod with that configuration by running:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8923"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/hostaliases-pod.yaml", "completion": "This bash code Adding entries to Pod /etc/hosts with HostAliases | Examine a Pod's details to see its IPv4 address and its status:", "language": "bash", "type": "kubectl-command_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | Examine a Pod's details to see its IPv4 address and its status:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8924"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nhostaliases-pod 0/1 Completed 0 6s 10.200.0.5 worker0", "completion": "This text code Adding entries to Pod /etc/hosts with HostAliases | The hosts file content looks like this:", "language": "text", "type": "code-example_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | The hosts file content looks like this:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8925"} {"prompt": "Complete this text code:\n\n# Kubernetes-managed hosts 
file.\n127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost ip6-loopback\nfe00::0\tip6-localnet\nfe00::0\tip6-mcastprefix\nfe00::1\tip6-allnodes", "completion": "fe00::2\tip6-allrouters\n10.200.0.5\thostaliases-pod\n\n# Entries added by HostAliases.\n127.0.0.1\tfoo.local\tbar.local\n10.1.2.3\tfoo.remote\tbar.remote", "language": "text", "type": "code-example", "context": "Adding entries to Pod /etc/hosts with HostAliases | with the additional entries specified at the bottom.", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_completion_8926"}
{"prompt": "Explain this text code:\n\n# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n::1\tlocalhost ip6-localhost ip6-loopback\nfe00::0\tip6-localnet\nfe00::0\tip6-mcastprefix\nfe00::1\tip6-allnodes\nfe00::2\tip6-allrouters\n10.200.0.5\thostaliases-pod\n\n# Entries added by HostAliases.\n127.0.0.1\tfoo.local\tbar.local\n10.1.2.3\tfoo.remote\tbar.remote", "completion": "This text code Adding entries to Pod /etc/hosts with HostAliases | with the additional entries specified at the bottom.", "language": "text", "type": "code-example_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | with the additional entries specified at the bottom.", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8927"}
{"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/service/networking/hostaliases-pod.yaml", "completion": "This shell code Adding entries to Pod /etc/hosts with HostAliases | Examine a Pod's details to see its IPv4 address and its status:", "language": "shell", "type": "code-example_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | Examine a Pod's details to see its IPv4 address and its status:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8933"}
{"prompt": "Complete this yaml code:\n\nservice/networking/hostaliases-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostaliases-pod\nspec:\n restartPolicy: Never\n hostAliases:\n - ip: \"127.0.0.1\"\n hostnames:\n - \"foo.local\"", "completion": " - \"bar.local\"\n - ip: \"10.1.2.3\"\n hostnames:\n - \"foo.remote\"\n - \"bar.remote\"\n containers:\n - name: cat-hosts\n image: busybox:1.28\n command:\n - cat\n args:\n - \"/etc/hosts\"", "language": "yaml", "type": "pod-definition", "context": "Adding entries to Pod /etc/hosts with HostAliases | You can start a Pod with that configuration by running:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_completion_8937"}
{"prompt": "Explain this yaml code:\n\nservice/networking/hostaliases-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostaliases-pod\nspec:\n restartPolicy: Never\n hostAliases:\n - ip: \"127.0.0.1\"\n hostnames:\n - \"foo.local\"\n - \"bar.local\"\n - ip: \"10.1.2.3\"\n hostnames:\n - \"foo.remote\"\n - \"bar.remote\"\n containers:\n - name: cat-hosts\n image: busybox:1.28\n command:\n - cat\n args:\n - \"/etc/hosts\"", "completion": "This yaml code Adding entries to Pod /etc/hosts with HostAliases | You can start a Pod with that configuration by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Adding entries to Pod /etc/hosts with HostAliases | You can start a Pod with that configuration by running:", "source": "https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/", "id": "k8s_00350_code_explain_8938"}
{"prompt": "Explain this bash code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This bash code Kubernetes Object Management | Advantages compared to imperative object configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/", "id": "k8s_00351_code_explain_8942"} {"prompt": "Explain this sh code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This sh code Kubernetes Object Management | 
Advantages compared to imperative object configuration:", "language": "sh", "type": "code-example_explanation", "context": "Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/", "id": "k8s_00351_code_explain_8943"}
{"prompt": "Explain this text code:\n\n# Print the supported API versions\n kubectl api-versions", "completion": "This text code kubectl api-versions | help for api-versions", "language": "text", "type": "code-example_explanation", "context": "kubectl api-versions | help for api-versions", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_api-versions/", "id": "k8s_00352_code_explain_8945"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:", "completion": " app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "language": "yaml", "type": "service-definition", "context": "Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/", "id": "k8s_00353_code_completion_8947"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "completion": "This yaml code Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "language": "yaml", "type": "service-definition_explanation", "context": "Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/", "id": "k8s_00353_code_explain_8948"}
{"prompt": "Explain this bash code:\n\nkubectl annotate [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--resource-version=version]", "completion": "This bash code kubectl annotate | Select all resources, in the namespace of the specified resource types.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl annotate | Select all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_annotate/", "id": "k8s_00354_code_explain_8953"} {"prompt": "Complete this text code:\n\n# Update pod 'foo' with the annotation 'description' and the value 'my frontend'\n # If the same annotation is set multiple times, only the last value will be applied\n kubectl annotate pods foo description='my frontend'\n \n # Update a pod identified by type and name in \"pod.json\"\n kubectl annotate -f pod.json description='my frontend'\n \n # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value\n kubectl annotate --overwrite pods foo description='my frontend running nginx'", "completion": " \n # Update all pods in the namespace\n kubectl annotate pods --all description='my frontend running nginx'\n \n # Update pod 'foo' only if the resource is unchanged from version 1\n kubectl annotate pods foo description='my frontend running nginx' --resource-version=1\n \n # Update pod 'foo' by removing an annotation named 'description' if it exists\n # Does not require the --overwrite flag\n kubectl annotate pods foo description-", "language": "text", "type": "code-example", "context": "kubectl annotate | Select all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_annotate/", "id": "k8s_00354_code_completion_8954"} {"prompt": "Explain this text code:\n\n# Update pod 'foo' with the annotation 'description' and the value 'my frontend'\n # If the same annotation is set multiple times, only the last value will be applied\n kubectl 
annotate pods foo description='my frontend'\n \n # Update a pod identified by type and name in \"pod.json\"\n kubectl annotate -f pod.json description='my frontend'\n \n # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value\n kubectl annotate --overwrite pods foo description='my frontend running nginx'\n \n # Update all pods in the namespace\n kubectl annotate pods --all description='my frontend running nginx'\n \n # Update pod 'foo' only if the resource is unchanged from version 1\n kubectl annotate pods foo description='my frontend running nginx' --resource-version=1\n \n # Update pod 'foo' by removing an annotation named 'description' if it exists\n # Does not require the --overwrite flag\n kubectl annotate pods foo description-", "completion": "This text code kubectl annotate | Select all resources, in the namespace of the specified resource types.", "language": "text", "type": "code-example_explanation", "context": "kubectl annotate | Select all resources, in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_annotate/", "id": "k8s_00354_code_explain_8955"}
{"prompt": "Explain this text code:\n\nNAME PROVISIONER AGE\nstandard (default) kubernetes.io/gce-pd 1d\ngold kubernetes.io/gce-pd 1d", "completion": "This text code Change the default StorageClass | The default StorageClass is marked by (default).", "language": "text", "type": "code-example_explanation", "context": "Change the default StorageClass | The default StorageClass is marked by (default).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8959"} {"prompt": "Explain this bash code:\n\nkubectl patch storageclass standard -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}'", "completion": "This bash code Change the default StorageClass | 
where standard is the name of your chosen StorageClass.", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the default StorageClass | where standard is the name of your chosen StorageClass.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8960"} {"prompt": "Explain this bash code:\n\nkubectl patch storageclass gold -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'", "completion": "This bash code Change the default StorageClass | Please note you can have multiple StorageClass marked as default. If more than one StorageClass is marked as default, a PersistentVolumeClaim without an explicitly defined storageClassName will be cre", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the default StorageClass | Please note you can have multiple StorageClass marked as default. If more than one StorageClass is marked as default, a PersistentVolumeClaim without an explicitly defined storageClassName will be cre", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8961"} {"prompt": "Explain this text code:\n\nNAME PROVISIONER AGE\nstandard kubernetes.io/gce-pd 1d\ngold (default) kubernetes.io/gce-pd 1d", "completion": "This text code Change the default StorageClass |", "language": "text", "type": "code-example_explanation", "context": "Change the default StorageClass |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8962"} {"prompt": "Explain this bash code:\n\nNAME PROVISIONER AGE\nstandard (default) kubernetes.io/gce-pd 1d\ngold kubernetes.io/gce-pd 1d", "completion": "This bash code Change the default StorageClass | The default StorageClass is marked by (default).", "language": "bash", "type": "shell-command_explanation", "context": "Change the 
default StorageClass | The default StorageClass is marked by (default).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8963"} {"prompt": "Explain this bash code:\n\nkubectl patch storageclass standard -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}'", "completion": "This bash code Change the default StorageClass | where standard is the name of your chosen StorageClass.", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the default StorageClass | where standard is the name of your chosen StorageClass.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8964"} {"prompt": "Explain this bash code:\n\nkubectl patch storageclass gold -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'", "completion": "This bash code Change the default StorageClass | Please note you can have multiple StorageClass marked as default. If more than one StorageClass is marked as default, a PersistentVolumeClaim without an explicitly defined storageClassName will be cre", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the default StorageClass | Please note you can have multiple StorageClass marked as default. 
If more than one StorageClass is marked as default, a PersistentVolumeClaim without an explicitly defined storageClassName will be cre", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8965"} {"prompt": "Explain this bash code:\n\nNAME PROVISIONER AGE\nstandard kubernetes.io/gce-pd 1d\ngold (default) kubernetes.io/gce-pd 1d", "completion": "This bash code Change the default StorageClass |", "language": "bash", "type": "shell-command_explanation", "context": "Change the default StorageClass |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8966"} {"prompt": "Explain this text code:\n\nNAME PROVISIONER AGE\nstandard (default) kubernetes.io/gce-pd 1d\ngold kubernetes.io/gce-pd 1d", "completion": "This text code Change the default StorageClass | The default StorageClass is marked by (default).", "language": "text", "type": "code-example_explanation", "context": "Change the default StorageClass | The default StorageClass is marked by (default).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8967"} {"prompt": "Explain this bash code:\n\nkubectl patch storageclass standard -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}'", "completion": "This bash code Change the default StorageClass | where standard is the name of your chosen StorageClass.", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the default StorageClass | where standard is the name of your chosen StorageClass.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8968"} {"prompt": "Explain this bash code:\n\nkubectl patch storageclass gold -p '{\"metadata\": 
{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'", "completion": "This bash code Change the default StorageClass | Please note you can have multiple StorageClass marked as default. If more than one StorageClass is marked as default, a PersistentVolumeClaim without an explicitly defined storageClassName will be cre", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the default StorageClass | Please note you can have multiple StorageClass marked as default. If more than one StorageClass is marked as default, a PersistentVolumeClaim without an explicitly defined storageClassName will be cre", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8969"} {"prompt": "Explain this text code:\n\nNAME PROVISIONER AGE\nstandard kubernetes.io/gce-pd 1d\ngold (default) kubernetes.io/gce-pd 1d", "completion": "This text code Change the default StorageClass |", "language": "text", "type": "code-example_explanation", "context": "Change the default StorageClass |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/", "id": "k8s_00356_code_explain_8970"} {"prompt": "Explain this bash code:\n\nkubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=server|client|none]", "completion": "This bash code kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. 
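The records above toggle the `storageclass.kubernetes.io/is-default-class` annotation with `kubectl patch`. As a minimal sketch (not kubectl's actual implementation), the strategic-merge patch payload those commands send can be built like this; note the annotation value is the string `"true"`/`"false"`, not a JSON boolean:

```python
import json

def default_class_patch(is_default: bool) -> str:
    """Build the strategic-merge patch body used to flip the
    is-default-class annotation on a StorageClass (sketch only)."""
    return json.dumps({
        "metadata": {
            "annotations": {
                # Annotation values are strings, so serialize the
                # boolean as "true"/"false" rather than true/false.
                "storageclass.kubernetes.io/is-default-class": str(is_default).lower()
            }
        }
    })

# Demote the old default, then promote the new one, mirroring:
#   kubectl patch storageclass standard -p '<demote>'
#   kubectl patch storageclass gold     -p '<promote>'
demote = default_class_patch(False)
promote = default_class_patch(True)
```

Applying the demote patch first avoids a window with two default classes, which (as the records note) leaves a class-less PersistentVolumeClaim ambiguous.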
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/", "id": "k8s_00357_code_explain_8971"} {"prompt": "Complete this text code:\n\n# Create a new resource quota named my-quota\n kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": " \n # Create a new resource quota named best-effort\n kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "language": "text", "type": "code-example", "context": "kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/", "id": "k8s_00357_code_completion_8972"} {"prompt": "Explain this text code:\n\n# Create a new resource quota named my-quota\n kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10\n \n # Create a new resource quota named best-effort\n kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "completion": "This text code kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/", "id": "k8s_00357_code_explain_8973"} {"prompt": "Explain this bash code:\n\nkubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=server|client|none]", "completion": "This bash code kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/", "id": "k8s_00357_code_explain_8974"} {"prompt": "Complete this text code:\n\n# Create a new resource quota named my-quota\n kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": " \n # Create a new resource quota named best-effort\n kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "language": "text", "type": "code-example", "context": "kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/", "id": "k8s_00357_code_completion_8975"} {"prompt": "Explain this text code:\n\n# Create a new resource quota named my-quota\n kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10\n \n # Create a new resource quota named best-effort\n kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "completion": "This text code kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create quota | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_quota/", "id": "k8s_00357_code_explain_8976"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_completion_8977"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | After the Pod's creation, you 
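The `kubectl create quota` records pass limits as a comma-separated `key=value` list via `--hard`. A small sketch of that client-side parsing (an illustrative helper, not kubectl's real code path):

```python
def parse_hard(spec: str) -> dict:
    """Parse the comma-separated key=value list given to
    `kubectl create quota --hard=...` into a dict (sketch)."""
    out = {}
    for pair in spec.split(","):
        key, _, value = pair.partition("=")
        out[key] = value  # values stay as strings ("1G", "100", ...)
    return out

# Mirrors: kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3
limits = parse_hard("cpu=1,memory=1G,pods=2,services=3")
```

The values are kept as strings because quota quantities ("1G", "100m") carry unit suffixes that the API server, not the client, interprets.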
can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8978"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ntest-pod 0/1 SchedulingGated 0 7s", "completion": "This text code Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "language": "text", "type": "code-example_explanation", "context": "Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8979"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code Pod Scheduling Readiness | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Scheduling Readiness | The output is:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8980"} {"prompt": "Explain this json code:\n\n[{\"name\":\"example.com/foo\"},{\"name\":\"example.com/bar\"}]", "completion": "This json code Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "language": "json", "type": "api-response_explanation", "context": "Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8981"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: 
test-pod", "completion": "spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_completion_8982"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8983"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code Pod Scheduling Readiness | The output is expected to be empty. And you can check its latest status by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Scheduling Readiness | The output is expected to be empty. 
And you can check its latest status by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8984"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\ntest-pod 1/1 Running 0 15s 10.0.0.4 node-2", "completion": "This text code Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "language": "text", "type": "code-example_explanation", "context": "Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8985"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_completion_8986"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Scheduling Readiness | After the Pod's 
creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8987"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\ntest-pod 0/1 SchedulingGated 0 7s", "completion": "This none code Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "language": "none", "type": "code-example_explanation", "context": "Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8988"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code Pod Scheduling Readiness | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Scheduling Readiness | The output is:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8989"} {"prompt": "Explain this none code:\n\n[{\"name\":\"example.com/foo\"},{\"name\":\"example.com/bar\"}]", "completion": "This none code Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "language": "none", "type": "code-example_explanation", "context": "Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8990"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod", "completion": "spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", 
"context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_completion_8991"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8992"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code Pod Scheduling Readiness | The output is expected to be empty. And you can check its latest status by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Scheduling Readiness | The output is expected to be empty. 
And you can check its latest status by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8993"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\ntest-pod 1/1 Running 0 15s 10.0.0.4 node-2", "completion": "This none code Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "language": "none", "type": "code-example_explanation", "context": "Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8994"} {"prompt": "Explain this text code:\n\n.requiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms", "completion": "This text code Pod Scheduling Readiness | For .preferredDuringSchedulingIgnoredDuringExecution, all updates are allowed. This is because preferred terms are not authoritative, and so policy controllers don't validate those terms.", "language": "text", "type": "code-example_explanation", "context": "Pod Scheduling Readiness | For .preferredDuringSchedulingIgnoredDuringExecution, all updates are allowed. 
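The scheduling-gates records show a Pod staying in `SchedulingGated` until `.spec.schedulingGates` is emptied. A toy model of the check the scheduler effectively applies, using the same manifest shapes as the records above:

```python
def is_scheduling_gated(pod: dict) -> bool:
    """A Pod with any entries under .spec.schedulingGates is held back
    from scheduling; an absent or empty list means it is ready.
    Simplified sketch of the scheduler's gate check."""
    return bool(pod.get("spec", {}).get("schedulingGates"))

# Matches the gated manifest: two gates, example.com/foo and example.com/bar.
gated = {"spec": {"schedulingGates": [{"name": "example.com/foo"},
                                      {"name": "example.com/bar"}],
                  "containers": [{"name": "pause"}]}}
# Matches the reapplied manifest with schedulingGates removed.
ungated = {"spec": {"containers": [{"name": "pause"}]}}
```

This is why `kubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'` returning empty output is the signal that the Pod can now be scheduled.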
This is because preferred terms are not authoritative, and so policy controllers don't validate those terms.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8995"} {"prompt": "Complete this yaml code:\n\npods/pod-with-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_completion_8996"} {"prompt": "Explain this yaml code:\n\npods/pod-with-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8997"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": 
"k8s_00359_code_completion_8998"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_8999"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code Pod Scheduling Readiness | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Scheduling Readiness | The output is:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_9000"} {"prompt": "Complete this yaml code:\n\npods/pod-without-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_completion_9001"} {"prompt": "Explain this yaml code:\n\npods/pod-without-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", 
"context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_9002"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod", "completion": "spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_completion_9003"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_9004"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code Pod Scheduling Readiness | The output is expected to be empty. And you can check its latest status by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Pod Scheduling Readiness | The output is expected to be empty. 
And you can check its latest status by running:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/", "id": "k8s_00359_code_explain_9005"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate", "completion": " template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9006"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This yaml code Run a Single-Instance 
Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9007"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce\n hostPath:", "completion": " path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9008"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "completion": "This yaml code Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": 
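The PV/PVC manifests above bind because the storage class, access modes, and capacity line up. A toy model of that matching (a simplified sketch, not the real volume binder, and `parse_quantity` handles only the binary suffixes used in these manifests):

```python
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_quantity(q: str) -> int:
    """Parse a binary-suffix Kubernetes quantity like '20Gi' into bytes
    (simplified: Ki/Mi/Gi/Ti only)."""
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)

def can_bind(pv: dict, pvc: dict) -> bool:
    """Toy matching rule: same storage class, at least one shared
    access mode, and PV capacity covering the claim's request."""
    if pv["storageClassName"] != pvc["storageClassName"]:
        return False
    if not set(pv["accessModes"]) & set(pvc["accessModes"]):
        return False
    return parse_quantity(pv["capacity"]) >= parse_quantity(pvc["request"])

# Shapes taken from mysql-pv-volume / mysql-pv-claim above.
pv = {"storageClassName": "manual", "accessModes": ["ReadWriteOnce"], "capacity": "20Gi"}
pvc = {"storageClassName": "manual", "accessModes": ["ReadWriteOnce"], "request": "20Gi"}
```

With matching `manual` classes and a 20Gi capacity covering the 20Gi request, the claim binds, which is the `Status: Bound` shown in the `kubectl describe pvc` output later in this section.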
"k8s_00361_code_explain_9009"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml", "completion": "This bash code Run a Single-Instance Stateful Application | Deploy the contents of the YAML file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the contents of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9010"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml", "completion": "This bash code Run a Single-Instance Stateful Application | Display information about the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | Display information about the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9011"} {"prompt": "Complete this text code:\n\nName: mysql\nNamespace: default\nCreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700\nLabels: app=mysql\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=mysql\nReplicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable\nStrategyType: Recreate\nMinReadySeconds: 0\nPod Template:\n Labels: app=mysql\n Containers:\n mysql:\n Image: mysql:9\n Port: 3306/TCP\n Environment:\n MYSQL_ROOT_PASSWORD: password", "completion": " Mounts:\n /var/lib/mysql from mysql-persistent-storage (rw)\n Volumes:\n mysql-persistent-storage:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: mysql-pv-claim\n ReadOnly: false\nConditions:\n Type Status Reason\n ---- ------ ------\n Available False MinimumReplicasUnavailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: 
<none>\nNewReplicaSet: mysql-63082529 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 33s 33s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set mysql-63082529 to 1", "language": "text", "type": "code-example", "context": "Run a Single-Instance Stateful Application | List the pods created by the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9012"} {"prompt": "Explain this text code:\n\nName: mysql\nNamespace: default\nCreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700\nLabels: app=mysql\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=mysql\nReplicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable\nStrategyType: Recreate\nMinReadySeconds: 0\nPod Template:\n Labels: app=mysql\n Containers:\n mysql:\n Image: mysql:9\n Port: 3306/TCP\n Environment:\n MYSQL_ROOT_PASSWORD: password\n Mounts:\n /var/lib/mysql from mysql-persistent-storage (rw)\n Volumes:\n mysql-persistent-storage:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: mysql-pv-claim\n ReadOnly: false\nConditions:\n Type Status Reason\n ---- ------ ------\n Available False MinimumReplicasUnavailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: <none>\nNewReplicaSet: mysql-63082529 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 33s 33s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set mysql-63082529 to 1", "completion": "This text code Run a Single-Instance Stateful Application | List the pods created by the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful 
Application | List the pods created by the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9013"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmysql-63082529-2z3ki 1/1 Running 0 3m", "completion": "This text code Run a Single-Instance Stateful Application | Inspect the PersistentVolumeClaim:", "language": "text", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | Inspect the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9014"} {"prompt": "Complete this text code:\n\nName: mysql-pv-claim\nNamespace: default\nStorageClass:\nStatus: Bound\nVolume: mysql-pv-volume", "completion": "Labels: <none>\nAnnotations: pv.kubernetes.io/bind-completed=yes\n pv.kubernetes.io/bound-by-controller=yes\nCapacity: 20Gi\nAccess Modes: RWO\nEvents: <none>", "language": "text", "type": "code-example", "context": "Run a Single-Instance Stateful Application | The preceding YAML file creates a service that allows other Pods in the cluster to access the database. The Service option clusterIP: None lets the Service DNS name resolve directly to the Pod's IP ad", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9015"} {"prompt": "Explain this text code:\n\nName: mysql-pv-claim\nNamespace: default\nStorageClass:\nStatus: Bound\nVolume: mysql-pv-volume\nLabels: <none>\nAnnotations: pv.kubernetes.io/bind-completed=yes\n pv.kubernetes.io/bound-by-controller=yes\nCapacity: 20Gi\nAccess Modes: RWO\nEvents: <none>", "completion": "This text code Run a Single-Instance Stateful Application | The preceding YAML file creates a service that allows other Pods in the cluster to access the database. 
The Service option clusterIP: None lets the Service DNS name resolve directly to the Pod's IP ad", "language": "text", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | The preceding YAML file creates a service that allows other Pods in the cluster to access the database. The Service option clusterIP: None lets the Service DNS name resolve directly to the Pod's IP ad", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9016"} {"prompt": "Explain this bash code:\n\nkubectl run -it --rm --image=mysql:9 --restart=Never mysql-client -- mysql -h mysql -ppassword", "completion": "This bash code Run a Single-Instance Stateful Application | This command creates a new Pod in the cluster running a MySQL client and connects it to the server through the Service. If it connects, you know your stateful MySQL database is up and running.", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | This command creates a new Pod in the cluster running a MySQL client and connects it to the server through the Service. If it connects, you know your stateful MySQL database is up and running.", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9017"} {"prompt": "Complete this text code:\n\nWaiting for pod default/mysql-client-274442439-zyp6i to be running, status is Pending, pod ready: false\nIf you don't see a command prompt, try pressing enter.", "completion": "\nmysql>", "language": "text", "type": "code-example", "context": "Run a Single-Instance Stateful Application | The image or any other part of the Deployment can be updated as usual with the kubectl apply command. 
Here are some precautions that are specific to stateful apps:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9018"} {"prompt": "Explain this text code:\n\nWaiting for pod default/mysql-client-274442439-zyp6i to be running, status is Pending, pod ready: false\nIf you don't see a command prompt, try pressing enter.\n\nmysql>", "completion": "This text code Run a Single-Instance Stateful Application | The image or any other part of the Deployment can be updated as usual with the kubectl apply command. Here are some precautions that are specific to stateful apps:", "language": "text", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | The image or any other part of the Deployment can be updated as usual with the kubectl apply command. Here are some precautions that are specific to stateful apps:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9019"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment,svc mysql\nkubectl delete pvc mysql-pv-claim\nkubectl delete pv mysql-pv-volume", "completion": "This bash code Run a Single-Instance Stateful Application | If you manually provisioned a PersistentVolume, you also need to manually delete it, as well as release the underlying resource. If you used a dynamic provisioner, it automatically deletes the Persist", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | If you manually provisioned a PersistentVolume, you also need to manually delete it, as well as release the underlying resource. 
If you used a dynamic provisioner, it automatically deletes the Persist", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9020"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate", "completion": " template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9021"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This 
yaml code Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9022"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce\n hostPath:", "completion": " path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9023"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "completion": "This yaml code Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": 
"https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9024"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml", "completion": "This shell code Run a Single-Instance Stateful Application | Deploy the contents of the YAML file:", "language": "shell", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the contents of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9025"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml", "completion": "This shell code Run a Single-Instance Stateful Application | Display information about the Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | Display information about the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9026"} {"prompt": "Complete this text code:\n\nName: mysql\nNamespace: default\nCreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700\nLabels: app=mysql\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=mysql\nReplicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable\nStrategyType: Recreate\nMinReadySeconds: 0\nPod Template:\n Labels: app=mysql\n Containers:\n mysql:\n Image: mysql:9\n Port: 3306/TCP\n Environment:\n MYSQL_ROOT_PASSWORD: password", "completion": " Mounts:\n /var/lib/mysql from mysql-persistent-storage (rw)\n Volumes:\n mysql-persistent-storage:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: mysql-pv-claim\n ReadOnly: false\nConditions:\n Type Status Reason\n ---- ------ ------\n Available False 
MinimumReplicasUnavailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: <none>\nNewReplicaSet: mysql-63082529 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 33s 33s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set mysql-63082529 to 1", "language": "text", "type": "code-example", "context": "Run a Single-Instance Stateful Application | List the pods created by the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9027"} {"prompt": "Explain this text code:\n\nName: mysql\nNamespace: default\nCreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700\nLabels: app=mysql\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=mysql\nReplicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable\nStrategyType: Recreate\nMinReadySeconds: 0\nPod Template:\n Labels: app=mysql\n Containers:\n mysql:\n Image: mysql:9\n Port: 3306/TCP\n Environment:\n MYSQL_ROOT_PASSWORD: password\n Mounts:\n /var/lib/mysql from mysql-persistent-storage (rw)\n Volumes:\n mysql-persistent-storage:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: mysql-pv-claim\n ReadOnly: false\nConditions:\n Type Status Reason\n ---- ------ ------\n Available False MinimumReplicasUnavailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: <none>\nNewReplicaSet: mysql-63082529 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 33s 33s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set mysql-63082529 to 1", "completion": "This text code Run a Single-Instance Stateful Application | List the pods created by the Deployment:", "language": "text", 
"type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | List the pods created by the Deployment:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9028"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmysql-63082529-2z3ki 1/1 Running 0 3m", "completion": "This text code Run a Single-Instance Stateful Application | Inspect the PersistentVolumeClaim:", "language": "text", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | Inspect the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9029"} {"prompt": "Complete this text code:\n\nName: mysql-pv-claim\nNamespace: default\nStorageClass:\nStatus: Bound\nVolume: mysql-pv-volume", "completion": "Labels: <none>\nAnnotations: pv.kubernetes.io/bind-completed=yes\n pv.kubernetes.io/bound-by-controller=yes\nCapacity: 20Gi\nAccess Modes: RWO\nEvents: <none>", "language": "text", "type": "code-example", "context": "Run a Single-Instance Stateful Application | The preceding YAML file creates a service that allows other Pods in the cluster to access the database. 
The Service option clusterIP: None lets the Service DNS name resolve directly to the Pod's IP ad", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9030"} {"prompt": "Explain this text code:\n\nName: mysql-pv-claim\nNamespace: default\nStorageClass:\nStatus: Bound\nVolume: mysql-pv-volume\nLabels: <none>\nAnnotations: pv.kubernetes.io/bind-completed=yes\n pv.kubernetes.io/bound-by-controller=yes\nCapacity: 20Gi\nAccess Modes: RWO\nEvents: <none>", "completion": "This text code Run a Single-Instance Stateful Application | The preceding YAML file creates a service that allows other Pods in the cluster to access the database. The Service option clusterIP: None lets the Service DNS name resolve directly to the Pod's IP ad", "language": "text", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | The preceding YAML file creates a service that allows other Pods in the cluster to access the database. The Service option clusterIP: None lets the Service DNS name resolve directly to the Pod's IP ad", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9031"} {"prompt": "Explain this shell code:\n\nkubectl run -it --rm --image=mysql:9 --restart=Never mysql-client -- mysql -h mysql -ppassword", "completion": "This shell code Run a Single-Instance Stateful Application | This command creates a new Pod in the cluster running a MySQL client and connects it to the server through the Service. If it connects, you know your stateful MySQL database is up and running.", "language": "shell", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | This command creates a new Pod in the cluster running a MySQL client and connects it to the server through the Service. 
If it connects, you know your stateful MySQL database is up and running.", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9032"} {"prompt": "Complete this text code:\n\nWaiting for pod default/mysql-client-274442439-zyp6i to be running, status is Pending, pod ready: false\nIf you don't see a command prompt, try pressing enter.", "completion": "\nmysql>", "language": "text", "type": "code-example", "context": "Run a Single-Instance Stateful Application | The image or any other part of the Deployment can be updated as usual with the kubectl apply command. Here are some precautions that are specific to stateful apps:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9033"} {"prompt": "Explain this text code:\n\nWaiting for pod default/mysql-client-274442439-zyp6i to be running, status is Pending, pod ready: false\nIf you don't see a command prompt, try pressing enter.\n\nmysql>", "completion": "This text code Run a Single-Instance Stateful Application | The image or any other part of the Deployment can be updated as usual with the kubectl apply command. Here are some precautions that are specific to stateful apps:", "language": "text", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | The image or any other part of the Deployment can be updated as usual with the kubectl apply command. 
Here are some precautions that are specific to stateful apps:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9034"} {"prompt": "Explain this shell code:\n\nkubectl delete deployment,svc mysql\nkubectl delete pvc mysql-pv-claim\nkubectl delete pv mysql-pv-volume", "completion": "This shell code Run a Single-Instance Stateful Application | If you manually provisioned a PersistentVolume, you also need to manually delete it, as well as release the underlying resource. If you used a dynamic provisioner, it automatically deletes the Persist", "language": "shell", "type": "code-example_explanation", "context": "Run a Single-Instance Stateful Application | If you manually provisioned a PersistentVolume, you also need to manually delete it, as well as release the underlying resource. If you used a dynamic provisioner, it automatically deletes the Persist", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9035"} {"prompt": "Complete this yaml code:\n\napplication/mysql/mysql-deployment.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate", "completion": " template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Run a Single-Instance Stateful Application | Deploy the PV 
and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9036"} {"prompt": "Explain this yaml code:\n\napplication/mysql/mysql-deployment.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This yaml code Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9037"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate", "completion": " template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: 
/var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "language": "yaml", "type": "deployment-definition", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9038"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\nspec:\n ports:\n - port: 3306\n selector:\n app: mysql\n clusterIP: None\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n strategy:\n type: Recreate\n template:\n metadata:\n labels:\n app: mysql\n spec:\n containers:\n - image: mysql:9\n name: mysql\n env:\n # Use secret in real usage\n - name: MYSQL_ROOT_PASSWORD\n value: password\n ports:\n - containerPort: 3306\n name: mysql\n volumeMounts:\n - name: mysql-persistent-storage\n mountPath: /var/lib/mysql\n volumes:\n - name: mysql-persistent-storage\n persistentVolumeClaim:\n claimName: mysql-pv-claim", "completion": "This yaml code Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9039"} {"prompt": "Complete this yaml code:\n\napplication/mysql/mysql-pv.yaml\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce", "completion": " hostPath:\n path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n 
accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9040"} {"prompt": "Explain this yaml code:\n\napplication/mysql/mysql-pv.yaml\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "completion": "This yaml code Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9041"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce\n hostPath:", "completion": " path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": 
"https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_completion_9042"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mysql-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: 20Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/mnt/data\"\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mysql-pv-claim\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi", "completion": "This yaml code Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the PV and PVC of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9043"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml", "completion": "This bash code Run a Single-Instance Stateful Application | Deploy the contents of the YAML file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | Deploy the contents of the YAML file:", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9044"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml", "completion": "This bash code Run a Single-Instance Stateful Application | Display information about the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | Display information about the Deployment:", "source": 
"https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9045"} {"prompt": "Explain this bash code:\n\nkubectl run -it --rm --image=mysql:9 --restart=Never mysql-client -- mysql -h mysql -ppassword", "completion": "This bash code Run a Single-Instance Stateful Application | This command creates a new Pod in the cluster running a MySQL client and connects it to the server through the Service. If it connects, you know your stateful MySQL database is up and running.", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | This command creates a new Pod in the cluster running a MySQL client and connects it to the server through the Service. If it connects, you know your stateful MySQL database is up and running.", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9046"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment,svc mysql\nkubectl delete pvc mysql-pv-claim\nkubectl delete pv mysql-pv-volume", "completion": "This bash code Run a Single-Instance Stateful Application | If you manually provisioned a PersistentVolume, you also need to manually delete it, as well as release the underlying resource. If you used a dynamic provisioner, it automatically deletes the Persist", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Single-Instance Stateful Application | If you manually provisioned a PersistentVolume, you also need to manually delete it, as well as release the underlying resource. 
If you used a dynamic provisioner, it automatically deletes the Persist", "source": "https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/", "id": "k8s_00361_code_explain_9047"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% ", "completion": "node2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "language": "text", "type": "code-example", "context": "Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_completion_9048"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% \nnode2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "completion": "This text code Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9049"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi", "completion": "kube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "language": "text", "type": "code-example", "context": "Swap memory management | A new node status field is 
now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_completion_9050"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi\nkube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "completion": "This text code Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9051"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}<unknown>{{end}}{{\"\\n\"}}{{end}}'", "completion": "This bash code Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9052"} {"prompt": "Explain this text code:\n\nnode1: 21474836480\nnode2: 42949664768\nnode3: <unknown>", "completion": "This text code Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. 
It can be utilized to discover which nodes are provisioned with swap.", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. It can be utilized to discover which nodes are provisioned with swap.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9053"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This bash code Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9054"} {"prompt": "Explain this text code:\n\nk8s-worker1: true\nk8s-worker2: true\nk8s-worker3: false", "completion": "This text code Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9055"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% ", "completion": "node2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "language": "text", "type": "code-example", "context": "Swap memory management | In order to 
receive information about swap usage by nodes, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_completion_9056"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% \nnode2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "completion": "This text code Swap memory management | In order to receive information about swap usage by nodes, kubectl top nodes --show-swap can be used:", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | In order to receive information about swap usage by nodes, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9057"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi", "completion": "kube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "language": "text", "type": "code-example", "context": "Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_completion_9058"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi\nkube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 
0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "completion": "This text code Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9059"} {"prompt": "Explain this shell code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}<unknown>{{end}}{{\"\\n\"}}{{end}}'", "completion": "This shell code Swap memory management | This will result in an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9060"} {"prompt": "Explain this text code:\n\nnode1: 21474836480\nnode2: 42949664768\nnode3: <unknown>", "completion": "This text code Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. It can be utilized to discover which nodes are provisioned with swap.", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. 
It can be utilized to discover which nodes are provisioned with swap.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9061"} {"prompt": "Explain this shell code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This shell code Swap memory management | This will result in an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9062"} {"prompt": "Explain this text code:\n\nk8s-worker1: true\nk8s-worker2: true\nk8s-worker3: false", "completion": "This text code Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "language": "text", "type": "code-example_explanation", "context": "Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9063"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}<unknown>{{end}}{{\"\\n\"}}{{end}}'", "completion": "This bash code Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": 
"k8s_00363_code_explain_9064"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This bash code Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/cluster-administration/swap-memory-management/", "id": "k8s_00363_code_explain_9065"} {"prompt": "Complete this text code:\n\n# List all the contexts in your kubeconfig file\n kubectl config get-contexts", "completion": " \n # Describe one context in your kubeconfig file\n kubectl config get-contexts my-context", "language": "text", "type": "code-example", "context": "kubectl config get-contexts | help for get-contexts", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/", "id": "k8s_00364_code_completion_9066"} {"prompt": "Explain this text code:\n\n# List all the contexts in your kubeconfig file\n kubectl config get-contexts\n \n # Describe one context in your kubeconfig file\n kubectl config get-contexts my-context", "completion": "This text code kubectl config get-contexts | help for get-contexts", "language": "text", "type": "code-example_explanation", "context": "kubectl config get-contexts | help for get-contexts", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/", "id": "k8s_00364_code_explain_9067"} {"prompt": "Complete this text code:\n\n# List all the contexts in your kubeconfig file\n kubectl config get-contexts", "completion": " \n # Describe one context in your kubeconfig file\n kubectl config get-contexts my-context", "language": "text", "type": "code-example", 
"context": "kubectl config get-contexts | help for get-contexts", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/", "id": "k8s_00364_code_completion_9068"} {"prompt": "Explain this text code:\n\n# List all the contexts in your kubeconfig file\n kubectl config get-contexts\n \n # Describe one context in your kubeconfig file\n kubectl config get-contexts my-context", "completion": "This text code kubectl config get-contexts | help for get-contexts", "language": "text", "type": "code-example_explanation", "context": "kubectl config get-contexts | help for get-contexts", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-contexts/", "id": "k8s_00364_code_explain_9069"} {"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied (RESOURCE/NAME | -f FILENAME)", "completion": "This bash code kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/", "id": "k8s_00365_code_explain_9070"} {"prompt": "Complete this text code:\n\n# Edit the last-applied-configuration annotations by type/name in YAML\n kubectl apply edit-last-applied deployment/nginx", "completion": " \n # Edit the last-applied-configuration annotations by file in JSON\n kubectl apply edit-last-applied -f deploy.yaml -o json", "language": "text", "type": "code-example", "context": "kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/", "id": "k8s_00365_code_completion_9071"} {"prompt": "Explain this text code:\n\n# Edit the last-applied-configuration annotations by type/name in YAML\n kubectl apply edit-last-applied deployment/nginx\n \n # Edit the last-applied-configuration annotations by file in JSON\n kubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This text code kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/", "id": "k8s_00365_code_explain_9072"} {"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied (RESOURCE/NAME | -f FILENAME)", "completion": "This bash code kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/", "id": "k8s_00365_code_explain_9073"} {"prompt": "Complete this text code:\n\n# Edit the last-applied-configuration annotations by type/name in YAML\n kubectl apply edit-last-applied deployment/nginx", "completion": " \n # Edit the last-applied-configuration annotations by file in JSON\n kubectl apply edit-last-applied -f deploy.yaml -o json", "language": "text", "type": "code-example", "context": "kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/", "id": "k8s_00365_code_completion_9074"} {"prompt": "Explain this text code:\n\n# Edit the last-applied-configuration annotations by type/name in YAML\n kubectl apply edit-last-applied deployment/nginx\n \n # Edit the last-applied-configuration annotations by file in JSON\n kubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This text code kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl apply edit-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_edit-last-applied/", "id": "k8s_00365_code_explain_9075"} {"prompt": "Explain this text code:\n\nmkdir -p $HOME/<workspace>\n\nexport GOPATH=$HOME/<workspace>", "completion": "This text code Generating Reference Documentation for kubectl Commands | Get a local clone of the following repositories:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Get a local clone of the following repositories:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9076"} {"prompt": "Complete this text code:\n\ngo get -u github.com/spf13/pflag\ngo get -u github.com/spf13/cobra", "completion": "go get -u gopkg.in/yaml.v2\ngo get -u github.com/kubernetes-sigs/reference-docs", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_completion_9077"} {"prompt": "Explain this text code:\n\ngo get -u github.com/spf13/pflag\ngo get -u github.com/spf13/cobra\ngo get -u gopkg.in/yaml.v2\ngo get -u github.com/kubernetes-sigs/reference-docs", "completion": "This text code Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9078"} {"prompt": "Explain this text code:\n\ngit clone 
https://github.com/<your-username>/website $GOPATH/src/github.com/<your-username>/website", "completion": "This text code Generating Reference Documentation for kubectl Commands | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9079"} {"prompt": "Explain this text code:\n\ngit clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes", "completion": "This text code Generating Reference Documentation for kubectl Commands | Remove the spf13 package from $GOPATH/src/k8s.io/kubernetes/vendor/github.com:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Remove the spf13 package from $GOPATH/src/k8s.io/kubernetes/vendor/github.com:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9080"} {"prompt": "Explain this text code:\n\nrm -rf $GOPATH/src/k8s.io/kubernetes/vendor/github.com/spf13", "completion": "This text code Generating Reference Documentation for kubectl Commands | The kubernetes/kubernetes repository provides the kubectl and kustomize source code.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The kubernetes/kubernetes repository provides the kubectl and kustomize source code.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9081"} {"prompt": "Explain this text code:\n\ncd <k8s-base>\ngit checkout v1.33.0\ngit pull https://github.com/kubernetes/kubernetes 1.33.0", "completion": "This text code Generating Reference Documentation for 
kubectl Commands | If you do not need to edit the kubectl source code, follow the instructions for Setting build variables.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | If you do not need to edit the kubectl source code, follow the instructions for Setting build variables.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9082"} {"prompt": "Explain this text code:\n\nexport K8S_WEBROOT=$GOPATH/src/github.com/<your-username>/website\nexport K8S_ROOT=$GOPATH/src/k8s.io/kubernetes\nexport K8S_RELEASE=1.33", "completion": "This text code Generating Reference Documentation for kubectl Commands | The createversiondirs build target creates a versioned directory and copies the kubectl reference configuration files to the versioned directory. The versioned directory name follows the pattern of v<", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The createversiondirs build target creates a versioned directory and copies the kubectl reference configuration files to the versioned directory. The versioned directory name follows the pattern of v<", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9083"} {"prompt": "Explain this text code:\n\ncd <k8s-base>\ngit checkout v1.33.0\ngit pull https://github.com/kubernetes/kubernetes v1.33.0", "completion": "This text code Generating Reference Documentation for kubectl Commands | In your local <rdocs-base>, run the copycli build target. The command runs as root:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | In your local <rdocs-base>, run the copycli build target. 
The command runs as root:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9084"} {"prompt": "Explain this shell code:\n\n[ -e \"<rdocs-base>/gen-kubectldocs/generators/build/index.html\" ] && echo \"index.html built\" || echo \"no index.html\"\n[ -e \"<rdocs-base>/gen-kubectldocs/generators/build/navData.js\" ] && echo \"navData.js built\" || echo \"no navData.js\"", "completion": "This shell code Generating Reference Documentation for kubectl Commands | Verify that all generated files have been copied to your <web-base>:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Verify that all generated files have been copied to your <web-base>:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9085"} {"prompt": "Explain this text code:\n\nstatic/docs/reference/generated/kubectl/kubectl-commands.html\nstatic/docs/reference/generated/kubectl/navData.js", "completion": "This text code Generating Reference Documentation for kubectl Commands | The output may also include:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The output may also include:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9086"} {"prompt": "Complete this text code:\n\nstatic/docs/reference/generated/kubectl/scroll.js\nstatic/docs/reference/generated/kubectl/stylesheet.css\nstatic/docs/reference/generated/kubectl/tabvisibility.js\nstatic/docs/reference/generated/kubectl/node_modules/bootstrap/dist/css/bootstrap.min.css", "completion": 
"static/docs/reference/generated/kubectl/node_modules/highlight.js/styles/default.css\nstatic/docs/reference/generated/kubectl/node_modules/jquery.scrollto/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubectl/node_modules/jquery/dist/jquery.min.js\nstatic/docs/reference/generated/kubectl/node_modules/font-awesome/css/font-awesome.min.css", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for kubectl Commands | Build the Kubernetes documentation in your local <web-base>.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_completion_9087"} {"prompt": "Explain this text code:\n\nstatic/docs/reference/generated/kubectl/scroll.js\nstatic/docs/reference/generated/kubectl/stylesheet.css\nstatic/docs/reference/generated/kubectl/tabvisibility.js\nstatic/docs/reference/generated/kubectl/node_modules/bootstrap/dist/css/bootstrap.min.css\nstatic/docs/reference/generated/kubectl/node_modules/highlight.js/styles/default.css\nstatic/docs/reference/generated/kubectl/node_modules/jquery.scrollto/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubectl/node_modules/jquery/dist/jquery.min.js\nstatic/docs/reference/generated/kubectl/node_modules/font-awesome/css/font-awesome.min.css", "completion": "This text code Generating Reference Documentation for kubectl Commands | Build the Kubernetes documentation in your local <web-base>.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Build the Kubernetes documentation in your local <web-base>.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9088"} {"prompt": "Explain this text code:\n\ncd <web-base>\ngit submodule update --init --recursive --depth 1 # if not already done\nmake container-serve", "completion": "This text code Generating Reference Documentation for kubectl Commands | View the 
local preview.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | View the local preview.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9089"} {"prompt": "Explain this shell code:\n\nmkdir -p $HOME/<workspace>\n\nexport GOPATH=$HOME/<workspace>", "completion": "This shell code Generating Reference Documentation for kubectl Commands | Get a local clone of the following repositories:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Get a local clone of the following repositories:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9090"} {"prompt": "Complete this shell code:\n\ngo get -u github.com/spf13/pflag\ngo get -u github.com/spf13/cobra", "completion": "go get -u gopkg.in/yaml.v2\ngo get -u github.com/kubernetes-sigs/reference-docs", "language": "shell", "type": "code-example", "context": "Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_completion_9091"} {"prompt": "Explain this shell code:\n\ngo get -u github.com/spf13/pflag\ngo get -u github.com/spf13/cobra\ngo get -u gopkg.in/yaml.v2\ngo get -u github.com/kubernetes-sigs/reference-docs", "completion": "This shell code Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": 
"k8s_00366_code_explain_9092"} {"prompt": "Explain this shell code:\n\ngit clone https://github.com/<your-username>/website $GOPATH/src/github.com/<your-username>/website", "completion": "This shell code Generating Reference Documentation for kubectl Commands | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9093"} {"prompt": "Explain this shell code:\n\ngit clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes", "completion": "This shell code Generating Reference Documentation for kubectl Commands | Remove the spf13 package from $GOPATH/src/k8s.io/kubernetes/vendor/github.com:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Remove the spf13 package from $GOPATH/src/k8s.io/kubernetes/vendor/github.com:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9094"} {"prompt": "Explain this shell code:\n\nrm -rf $GOPATH/src/k8s.io/kubernetes/vendor/github.com/spf13", "completion": "This shell code Generating Reference Documentation for kubectl Commands | The kubernetes/kubernetes repository provides the kubectl and kustomize source code.", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The kubernetes/kubernetes repository provides the kubectl and kustomize source code.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9095"} {"prompt": "Explain this text code:\n\n$GOPATH/src/github.com/kubernetes-sigs/reference-docs", "completion": "This text 
code Generating Reference Documentation for kubectl Commands | In your local k8s.io/kubernetes repository, check out the branch of interest, and make sure it is up to date. For example, if you want to generate docs for Kubernetes 1.33.0, you could use these commands", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | In your local k8s.io/kubernetes repository, check out the branch of interest, and make sure it is up to date. For example, if you want to generate docs for Kubernetes 1.33.0, you could use these commands", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9096"} {"prompt": "Explain this shell code:\n\ncd <k8s-base>\ngit checkout v1.33.0\ngit pull https://github.com/kubernetes/kubernetes 1.33.0", "completion": "This shell code Generating Reference Documentation for kubectl Commands | If you do not need to edit the kubectl source code, follow the instructions for Setting build variables.", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | If you do not need to edit the kubectl source code, follow the instructions for Setting build variables.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9097"} {"prompt": "Explain this shell code:\n\nexport K8S_WEBROOT=$GOPATH/src/github.com/<your-username>/website\nexport K8S_ROOT=$GOPATH/src/k8s.io/kubernetes\nexport K8S_RELEASE=1.33", "completion": "This shell code Generating Reference Documentation for kubectl Commands | The createversiondirs build target creates a versioned directory and copies the kubectl reference configuration files to the versioned directory. 
The versioned directory name follows the pattern of v<", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The createversiondirs build target creates a versioned directory and copies the kubectl reference configuration files to the versioned directory. The versioned directory name follows the pattern of v<", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9098"} {"prompt": "Explain this shell code:\n\ncd <k8s-base>\ngit checkout v1.33.0\ngit pull https://github.com/kubernetes/kubernetes v1.33.0", "completion": "This shell code Generating Reference Documentation for kubectl Commands | In your local <rdocs-base>, run the copycli build target. The command runs as root:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | In your local <rdocs-base>, run the copycli build target. The command runs as root:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9099"} {"prompt": "Explain this shell code:\n\n[ -e \"<rdocs-base>/gen-kubectldocs/generators/build/index.html\" ] && echo \"index.html built\" || echo \"no index.html\"\n[ -e \"<rdocs-base>/gen-kubectldocs/generators/build/navData.js\" ] && echo \"navData.js built\" || echo \"no navData.js\"", "completion": "This shell code Generating Reference Documentation for kubectl Commands | Verify that all generated files have been copied to your <web-base>:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Verify that all generated files have been copied to your <web-base>:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9100"} {"prompt": "Explain this text 
code:\n\nstatic/docs/reference/generated/kubectl/kubectl-commands.html\nstatic/docs/reference/generated/kubectl/navData.js", "completion": "This text code Generating Reference Documentation for kubectl Commands | The output may also include:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The output may also include:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9101"} {"prompt": "Complete this text code:\n\nstatic/docs/reference/generated/kubectl/scroll.js\nstatic/docs/reference/generated/kubectl/stylesheet.css\nstatic/docs/reference/generated/kubectl/tabvisibility.js\nstatic/docs/reference/generated/kubectl/node_modules/bootstrap/dist/css/bootstrap.min.css", "completion": "static/docs/reference/generated/kubectl/node_modules/highlight.js/styles/default.css\nstatic/docs/reference/generated/kubectl/node_modules/jquery.scrollto/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubectl/node_modules/jquery/dist/jquery.min.js\nstatic/docs/reference/generated/kubectl/node_modules/font-awesome/css/font-awesome.min.css", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for kubectl Commands | Build the Kubernetes documentation in your local <web-base>.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_completion_9102"} {"prompt": "Explain this text 
code:\n\nstatic/docs/reference/generated/kubectl/scroll.js\nstatic/docs/reference/generated/kubectl/stylesheet.css\nstatic/docs/reference/generated/kubectl/tabvisibility.js\nstatic/docs/reference/generated/kubectl/node_modules/bootstrap/dist/css/bootstrap.min.css\nstatic/docs/reference/generated/kubectl/node_modules/highlight.js/styles/default.css\nstatic/docs/reference/generated/kubectl/node_modules/jquery.scrollto/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubectl/node_modules/jquery/dist/jquery.min.js\nstatic/docs/reference/generated/kubectl/node_modules/font-awesome/css/font-awesome.min.css", "completion": "This text code Generating Reference Documentation for kubectl Commands | Build the Kubernetes documentation in your local <web-base>.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Build the Kubernetes documentation in your local <web-base>.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9103"} {"prompt": "Explain this shell code:\n\ncd <web-base>\ngit submodule update --init --recursive --depth 1 # if not already done\nmake container-serve", "completion": "This shell code Generating Reference Documentation for kubectl Commands | View the local preview.", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | View the local preview.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9104"} {"prompt": "Explain this text code:\n\nmkdir -p $HOME/<workspace>\n\nexport GOPATH=$HOME/<workspace>", "completion": "This text code Generating Reference Documentation for kubectl Commands | Get a local clone of the following repositories:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Get a local clone of the 
following repositories:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9105"} {"prompt": "Complete this text code:\n\ngo get -u github.com/spf13/pflag\ngo get -u github.com/spf13/cobra", "completion": "go get -u gopkg.in/yaml.v2\ngo get -u github.com/kubernetes-sigs/reference-docs", "language": "text", "type": "code-example", "context": "Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_completion_9106"} {"prompt": "Explain this text code:\n\ngo get -u github.com/spf13/pflag\ngo get -u github.com/spf13/cobra\ngo get -u gopkg.in/yaml.v2\ngo get -u github.com/kubernetes-sigs/reference-docs", "completion": "This text code Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | If you don't already have the kubernetes/website repository, get it now:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9107"} {"prompt": "Explain this text code:\n\ngit clone https://github.com/<your-username>/website $GOPATH/src/github.com/<your-username>/website", "completion": "This text code Generating Reference Documentation for kubectl Commands | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9108"} {"prompt": "Explain this text code:\n\ngit clone 
https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes", "completion": "This text code Generating Reference Documentation for kubectl Commands | Remove the spf13 package from $GOPATH/src/k8s.io/kubernetes/vendor/github.com:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Remove the spf13 package from $GOPATH/src/k8s.io/kubernetes/vendor/github.com:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9109"} {"prompt": "Explain this text code:\n\nrm -rf $GOPATH/src/k8s.io/kubernetes/vendor/github.com/spf13", "completion": "This text code Generating Reference Documentation for kubectl Commands | The kubernetes/kubernetes repository provides the kubectl and kustomize source code.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The kubernetes/kubernetes repository provides the kubectl and kustomize source code.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9110"} {"prompt": "Explain this text code:\n\ncd <k8s-base>\ngit checkout v1.33.0\ngit pull https://github.com/kubernetes/kubernetes 1.33.0", "completion": "This text code Generating Reference Documentation for kubectl Commands | If you do not need to edit the kubectl source code, follow the instructions for Setting build variables.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | If you do not need to edit the kubectl source code, follow the instructions for Setting build variables.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9111"} {"prompt": "Explain this text code:\n\nexport K8S_WEBROOT=$GOPATH/src/github.com/<your-username>/website\nexport 
K8S_ROOT=$GOPATH/src/k8s.io/kubernetes\nexport K8S_RELEASE=1.33", "completion": "This text code Generating Reference Documentation for kubectl Commands | The createversiondirs build target creates a versioned directory and copies the kubectl reference configuration files to the versioned directory. The versioned directory name follows the pattern of v<", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | The createversiondirs build target creates a versioned directory and copies the kubectl reference configuration files to the versioned directory. The versioned directory name follows the pattern of v<", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9112"} {"prompt": "Explain this text code:\n\ncd <k8s-base>\ngit checkout v1.33.0\ngit pull https://github.com/kubernetes/kubernetes v1.33.0", "completion": "This text code Generating Reference Documentation for kubectl Commands | In your local <rdocs-base>, run the copycli build target. The command runs as root:", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | In your local <rdocs-base>, run the copycli build target. 
The command runs as root:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9113"} {"prompt": "Explain this shell code:\n\n[ -e \"<rdocs-base>/gen-kubectldocs/generators/build/index.html\" ] && echo \"index.html built\" || echo \"no index.html\"\n[ -e \"<rdocs-base>/gen-kubectldocs/generators/build/navData.js\" ] && echo \"navData.js built\" || echo \"no navData.js\"", "completion": "This shell code Generating Reference Documentation for kubectl Commands | Verify that all generated files have been copied to your <web-base>:", "language": "shell", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | Verify that all generated files have been copied to your <web-base>:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9114"} {"prompt": "Explain this text code:\n\ncd <web-base>\ngit submodule update --init --recursive --depth 1 # if not already done\nmake container-serve", "completion": "This text code Generating Reference Documentation for kubectl Commands | View the local preview.", "language": "text", "type": "code-example_explanation", "context": "Generating Reference Documentation for kubectl Commands | View the local preview.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/kubectl/", "id": "k8s_00366_code_explain_9115"} {"prompt": "Complete this text code:\n\ncurl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz\ntar xzf easy-rsa.tar.gz", "completion": "cd easy-rsa-master/easyrsa3\n./easyrsa init-pki", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Generate a new certificate authority (CA). 
--batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9116"} {"prompt": "Explain this text code:\n\ncurl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz\ntar xzf easy-rsa.tar.gz\ncd easy-rsa-master/easyrsa3\n./easyrsa init-pki", "completion": "This text code Generate Certificates Manually | Generate a new certificate authority (CA). --batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate a new certificate authority (CA). --batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9117"} {"prompt": "Explain this text code:\n\n./easyrsa --batch \"--req-cn=${MASTER_IP}@`date +%s`\" build-ca nopass", "completion": "This text code Generate Certificates Manually | Generate server certificate and key.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate server certificate and key.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9118"} {"prompt": "Complete this text code:\n\n./easyrsa --subject-alt-name=\"IP:${MASTER_IP},\"\\\n\"IP:${MASTER_CLUSTER_IP},\"\\\n\"DNS:kubernetes,\"\\\n\"DNS:kubernetes.default,\"\\", "completion": "\"DNS:kubernetes.default.svc,\"\\\n\"DNS:kubernetes.default.svc.cluster,\"\\\n\"DNS:kubernetes.default.svc.cluster.local\" \\\n--days=10000 \\\nbuild-server-full server nopass", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9119"} {"prompt": "Explain this text code:\n\n./easyrsa --subject-alt-name=\"IP:${MASTER_IP},\"\\\n\"IP:${MASTER_CLUSTER_IP},\"\\\n\"DNS:kubernetes,\"\\\n\"DNS:kubernetes.default,\"\\\n\"DNS:kubernetes.default.svc,\"\\\n\"DNS:kubernetes.default.svc.cluster,\"\\\n\"DNS:kubernetes.default.svc.cluster.local\" \\\n--days=10000 \\\nbuild-server-full server nopass", "completion": "This text code Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9120"} {"prompt": "Explain this text code:\n\n--client-ca-file=/yourdirectory/ca.crt\n--tls-cert-file=/yourdirectory/server.crt\n--tls-private-key-file=/yourdirectory/server.key", "completion": "This text code Generate Certificates Manually | openssl can manually generate certificates for your cluster.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | openssl can manually generate certificates for your cluster.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9121"} {"prompt": "Explain this text code:\n\nopenssl req -x509 -new -nodes -key ca.key -subj \"/CN=${MASTER_IP}\" -days 10000 -out ca.crt", "completion": "This text code Generate Certificates Manually | Generate a server.key with 2048bit:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate a server.key with 2048bit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9122"} 
{"prompt": "Complete this ini code:\n\n[ req ]\ndefault_bits = 2048\nprompt = no\ndefault_md = sha256\nreq_extensions = req_ext\ndistinguished_name = dn\n\n[ dn ]\nC = <country>\nST = <state>\nL = <city>\nO = <organization>\nOU = <organization unit>\nCN = <MASTER_IP>\n\n[ req_ext ]", "completion": "subjectAltName = @alt_names\n\n[ alt_names ]\nDNS.1 = kubernetes\nDNS.2 = kubernetes.default\nDNS.3 = kubernetes.default.svc\nDNS.4 = kubernetes.default.svc.cluster\nDNS.5 = kubernetes.default.svc.cluster.local\nIP.1 = <MASTER_IP>\nIP.2 = <MASTER_CLUSTER_IP>\n\n[ v3_ext ]\nauthorityKeyIdentifier=keyid,issuer:always\nbasicConstraints=CA:FALSE\nkeyUsage=keyEncipherment,dataEncipherment\nextendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=@alt_names", "language": "ini", "type": "code-example", "context": "Generate Certificates Manually | Generate the certificate signing request based on the config file:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9123"} {"prompt": "Explain this ini code:\n\n[ req ]\ndefault_bits = 2048\nprompt = no\ndefault_md = sha256\nreq_extensions = req_ext\ndistinguished_name = dn\n\n[ dn ]\nC = <country>\nST = <state>\nL = <city>\nO = <organization>\nOU = <organization unit>\nCN = <MASTER_IP>\n\n[ req_ext ]\nsubjectAltName = @alt_names\n\n[ alt_names ]\nDNS.1 = kubernetes\nDNS.2 = kubernetes.default\nDNS.3 = kubernetes.default.svc\nDNS.4 = kubernetes.default.svc.cluster\nDNS.5 = kubernetes.default.svc.cluster.local\nIP.1 = <MASTER_IP>\nIP.2 = <MASTER_CLUSTER_IP>\n\n[ v3_ext ]\nauthorityKeyIdentifier=keyid,issuer:always\nbasicConstraints=CA:FALSE\nkeyUsage=keyEncipherment,dataEncipherment\nextendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=@alt_names", "completion": "This ini code Generate Certificates Manually | Generate the certificate signing request based on the config file:", "language": "ini", "type": "code-example_explanation", "context": "Generate Certificates 
Manually | Generate the certificate signing request based on the config file:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9124"} {"prompt": "Explain this text code:\n\nopenssl req -new -key server.key -out server.csr -config csr.conf", "completion": "This text code Generate Certificates Manually | Generate the server certificate using the ca.key, ca.crt and server.csr:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate the server certificate using the ca.key, ca.crt and server.csr:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9125"} {"prompt": "Explain this text code:\n\nopenssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \\\n -CAcreateserial -out server.crt -days 10000 \\\n -extensions v3_ext -extfile csr.conf -sha256", "completion": "This text code Generate Certificates Manually | View the certificate signing request:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | View the certificate signing request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9126"} {"prompt": "Complete this text code:\n\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl\nchmod +x cfssl\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson", "completion": "chmod +x cfssljson\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo\nchmod +x cfssl-certinfo", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": 
"k8s_00367_code_completion_9127"} {"prompt": "Explain this text code:\n\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl\nchmod +x cfssl\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson\nchmod +x cfssljson\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo\nchmod +x cfssl-certinfo", "completion": "This text code Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9128"} {"prompt": "Complete this text code:\n\nmkdir cert\ncd cert", "completion": "../cfssl print-defaults config > config.json\n../cfssl print-defaults csr > csr.json", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9129"} {"prompt": "Explain this text code:\n\nmkdir cert\ncd cert\n../cfssl print-defaults config > config.json\n../cfssl print-defaults csr > csr.json", "completion": "This text code Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9130"} {"prompt": "Complete this json code:\n\n{\n \"signing\": {\n 
\"default\": {\n \"expiry\": \"8760h\"\n },\n \"profiles\": {\n \"kubernetes\": {\n \"usages\": [\n \"signing\",", "completion": " \"key encipherment\",\n \"server auth\",\n \"client auth\"\n ],\n \"expiry\": \"8760h\"\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. Be sure to replace the values marked with angle brackets with real values you want to use.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9131"} {"prompt": "Explain this json code:\n\n{\n \"signing\": {\n \"default\": {\n \"expiry\": \"8760h\"\n },\n \"profiles\": {\n \"kubernetes\": {\n \"usages\": [\n \"signing\",\n \"key encipherment\",\n \"server auth\",\n \"client auth\"\n ],\n \"expiry\": \"8760h\"\n }\n }\n }\n}", "completion": "This json code Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. Be sure to replace the values marked with angle brackets with real values you want to use.", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. 
Be sure to replace the values marked with angle brackets with real values you want to use.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9132"} {"prompt": "Complete this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\":[{", "completion": " \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9133"} {"prompt": "Explain this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\":[{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "completion": "This json code Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9134"} {"prompt": "Explain this text code:\n\n../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca", "completion": "This text code Generate Certificates Manually | Create a JSON config file for generating keys and certificates for the API server, for example, server-csr.json. Be sure to replace the values in angle brackets with real values you want to use. 
The <", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a JSON config file for generating keys and certificates for the API server, for example, server-csr.json. Be sure to replace the values in angle brackets with real values you want to use. The <", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9135"} {"prompt": "Complete this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"hosts\": [\n \"127.0.0.1\",\n \"<MASTER_IP>\",\n \"<MASTER_CLUSTER_IP>\",\n \"kubernetes\",\n \"kubernetes.default\",\n \"kubernetes.default.svc\",\n \"kubernetes.default.svc.cluster\",\n \"kubernetes.default.svc.cluster.local\"\n ],", "completion": " \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\": [{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9136"} {"prompt": "Explain this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"hosts\": [\n \"127.0.0.1\",\n \"<MASTER_IP>\",\n \"<MASTER_CLUSTER_IP>\",\n \"kubernetes\",\n \"kubernetes.default\",\n \"kubernetes.default.svc\",\n \"kubernetes.default.svc.cluster\",\n \"kubernetes.default.svc.cluster.local\"\n ],\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\": [{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "completion": "This json code Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and 
server.pem respectively:", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9137"} {"prompt": "Explain this text code:\n\n../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \\\n --config=ca-config.json -profile=kubernetes \\\n server-csr.json | ../cfssljson -bare server", "completion": "This text code Generate Certificates Manually | A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | A client node may refuse to recognize a self-signed CA certificate as valid. 
For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9138"} {"prompt": "Explain this text code:\n\nsudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt\nsudo update-ca-certificates", "completion": "This text code Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9139"} {"prompt": "Complete this text code:\n\nUpdating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.", "completion": "Running hooks in /etc/ca-certificates/update.d....\ndone.", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9140"} {"prompt": "Explain this text code:\n\nUpdating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d....\ndone.", "completion": "This text code Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates 
Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9141"} {"prompt": "Complete this shell code:\n\ncurl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz\ntar xzf easy-rsa.tar.gz", "completion": "cd easy-rsa-master/easyrsa3\n./easyrsa init-pki", "language": "shell", "type": "code-example", "context": "Generate Certificates Manually | Generate a new certificate authority (CA). --batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9142"} {"prompt": "Explain this shell code:\n\ncurl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz\ntar xzf easy-rsa.tar.gz\ncd easy-rsa-master/easyrsa3\n./easyrsa init-pki", "completion": "This shell code Generate Certificates Manually | Generate a new certificate authority (CA). --batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate a new certificate authority (CA). 
--batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9143"} {"prompt": "Explain this shell code:\n\n./easyrsa --batch \"--req-cn=${MASTER_IP}@`date +%s`\" build-ca nopass", "completion": "This shell code Generate Certificates Manually | Generate server certificate and key.", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate server certificate and key.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9144"} {"prompt": "Complete this shell code:\n\n./easyrsa --subject-alt-name=\"IP:${MASTER_IP},\"\\\n\"IP:${MASTER_CLUSTER_IP},\"\\\n\"DNS:kubernetes,\"\\\n\"DNS:kubernetes.default,\"\\", "completion": "\"DNS:kubernetes.default.svc,\"\\\n\"DNS:kubernetes.default.svc.cluster,\"\\\n\"DNS:kubernetes.default.svc.cluster.local\" \\\n--days=10000 \\\nbuild-server-full server nopass", "language": "shell", "type": "code-example", "context": "Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9145"} {"prompt": "Explain this shell code:\n\n./easyrsa --subject-alt-name=\"IP:${MASTER_IP},\"\\\n\"IP:${MASTER_CLUSTER_IP},\"\\\n\"DNS:kubernetes,\"\\\n\"DNS:kubernetes.default,\"\\\n\"DNS:kubernetes.default.svc,\"\\\n\"DNS:kubernetes.default.svc.cluster,\"\\\n\"DNS:kubernetes.default.svc.cluster.local\" \\\n--days=10000 \\\nbuild-server-full server nopass", "completion": "This shell code Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Copy pki/ca.crt, 
pki/issued/server.crt, and pki/private/server.key to your directory.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9146"} {"prompt": "Explain this shell code:\n\n--client-ca-file=/yourdirectory/ca.crt\n--tls-cert-file=/yourdirectory/server.crt\n--tls-private-key-file=/yourdirectory/server.key", "completion": "This shell code Generate Certificates Manually | openssl can manually generate certificates for your cluster.", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | openssl can manually generate certificates for your cluster.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9147"} {"prompt": "Explain this shell code:\n\nopenssl req -x509 -new -nodes -key ca.key -subj \"/CN=${MASTER_IP}\" -days 10000 -out ca.crt", "completion": "This shell code Generate Certificates Manually | Generate a server.key with 2048bit:", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate a server.key with 2048bit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9148"} {"prompt": "Complete this ini code:\n\n[ req ]\ndefault_bits = 2048\nprompt = no\ndefault_md = sha256\nreq_extensions = req_ext\ndistinguished_name = dn\n\n[ dn ]\nC = <country>\nST = <state>\nL = <city>\nO = <organization>\nOU = <organization unit>\nCN = <MASTER_IP>\n\n[ req_ext ]", "completion": "subjectAltName = @alt_names\n\n[ alt_names ]\nDNS.1 = kubernetes\nDNS.2 = kubernetes.default\nDNS.3 = kubernetes.default.svc\nDNS.4 = kubernetes.default.svc.cluster\nDNS.5 = kubernetes.default.svc.cluster.local\nIP.1 = <MASTER_IP>\nIP.2 = <MASTER_CLUSTER_IP>\n\n[ v3_ext 
]\nauthorityKeyIdentifier=keyid,issuer:always\nbasicConstraints=CA:FALSE\nkeyUsage=keyEncipherment,dataEncipherment\nextendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=@alt_names", "language": "ini", "type": "code-example", "context": "Generate Certificates Manually | Generate the certificate signing request based on the config file:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9149"} {"prompt": "Explain this ini code:\n\n[ req ]\ndefault_bits = 2048\nprompt = no\ndefault_md = sha256\nreq_extensions = req_ext\ndistinguished_name = dn\n\n[ dn ]\nC = <country>\nST = <state>\nL = <city>\nO = <organization>\nOU = <organization unit>\nCN = <MASTER_IP>\n\n[ req_ext ]\nsubjectAltName = @alt_names\n\n[ alt_names ]\nDNS.1 = kubernetes\nDNS.2 = kubernetes.default\nDNS.3 = kubernetes.default.svc\nDNS.4 = kubernetes.default.svc.cluster\nDNS.5 = kubernetes.default.svc.cluster.local\nIP.1 = <MASTER_IP>\nIP.2 = <MASTER_CLUSTER_IP>\n\n[ v3_ext ]\nauthorityKeyIdentifier=keyid,issuer:always\nbasicConstraints=CA:FALSE\nkeyUsage=keyEncipherment,dataEncipherment\nextendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=@alt_names", "completion": "This ini code Generate Certificates Manually | Generate the certificate signing request based on the config file:", "language": "ini", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate the certificate signing request based on the config file:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9150"} {"prompt": "Explain this shell code:\n\nopenssl req -new -key server.key -out server.csr -config csr.conf", "completion": "This shell code Generate Certificates Manually | Generate the server certificate using the ca.key, ca.crt and server.csr:", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate the server certificate 
using the ca.key, ca.crt and server.csr:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9151"} {"prompt": "Explain this shell code:\n\nopenssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \\\n -CAcreateserial -out server.crt -days 10000 \\\n -extensions v3_ext -extfile csr.conf -sha256", "completion": "This shell code Generate Certificates Manually | View the certificate signing request:", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | View the certificate signing request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9152"} {"prompt": "Complete this shell code:\n\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl\nchmod +x cfssl\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson", "completion": "chmod +x cfssljson\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo\nchmod +x cfssl-certinfo", "language": "shell", "type": "code-example", "context": "Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9153"} {"prompt": "Explain this shell code:\n\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl\nchmod +x cfssl\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson\nchmod +x cfssljson\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo\nchmod +x cfssl-certinfo", "completion": "This shell code Generate Certificates Manually | Create a directory to hold the artifacts and 
initialize cfssl:", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9154"} {"prompt": "Complete this shell code:\n\nmkdir cert\ncd cert", "completion": "../cfssl print-defaults config > config.json\n../cfssl print-defaults csr > csr.json", "language": "shell", "type": "code-example", "context": "Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9155"} {"prompt": "Explain this shell code:\n\nmkdir cert\ncd cert\n../cfssl print-defaults config > config.json\n../cfssl print-defaults csr > csr.json", "completion": "This shell code Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9156"} {"prompt": "Complete this json code:\n\n{\n \"signing\": {\n \"default\": {\n \"expiry\": \"8760h\"\n },\n \"profiles\": {\n \"kubernetes\": {\n \"usages\": [\n \"signing\",", "completion": " \"key encipherment\",\n \"server auth\",\n \"client auth\"\n ],\n \"expiry\": \"8760h\"\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. 
Be sure to replace the values marked with angle brackets with real values you want to use.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9157"} {"prompt": "Explain this json code:\n\n{\n \"signing\": {\n \"default\": {\n \"expiry\": \"8760h\"\n },\n \"profiles\": {\n \"kubernetes\": {\n \"usages\": [\n \"signing\",\n \"key encipherment\",\n \"server auth\",\n \"client auth\"\n ],\n \"expiry\": \"8760h\"\n }\n }\n }\n}", "completion": "This json code Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. Be sure to replace the values marked with angle brackets with real values you want to use.", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. Be sure to replace the values marked with angle brackets with real values you want to use.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9158"} {"prompt": "Complete this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\":[{", "completion": " \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9159"} {"prompt": "Explain this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\":[{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "completion": 
"This json code Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9160"} {"prompt": "Explain this shell code:\n\n../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca", "completion": "This shell code Generate Certificates Manually | Create a JSON config file for generating keys and certificates for the API server, for example, server-csr.json. Be sure to replace the values in angle brackets with real values you want to use. The <", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a JSON config file for generating keys and certificates for the API server, for example, server-csr.json. Be sure to replace the values in angle brackets with real values you want to use. 
The <", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9161"} {"prompt": "Complete this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"hosts\": [\n \"127.0.0.1\",\n \"<MASTER_IP>\",\n \"<MASTER_CLUSTER_IP>\",\n \"kubernetes\",\n \"kubernetes.default\",\n \"kubernetes.default.svc\",\n \"kubernetes.default.svc.cluster\",\n \"kubernetes.default.svc.cluster.local\"\n ],", "completion": " \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\": [{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9162"} {"prompt": "Explain this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"hosts\": [\n \"127.0.0.1\",\n \"<MASTER_IP>\",\n \"<MASTER_CLUSTER_IP>\",\n \"kubernetes\",\n \"kubernetes.default\",\n \"kubernetes.default.svc\",\n \"kubernetes.default.svc.cluster\",\n \"kubernetes.default.svc.cluster.local\"\n ],\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\": [{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "completion": "This json code Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9163"} {"prompt": "Explain this shell code:\n\n../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \\\n --config=ca-config.json -profile=kubernetes \\\n server-csr.json | ../cfssljson -bare server", "completion": "This shell code Generate Certificates Manually | A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9164"} {"prompt": "Explain this shell code:\n\nsudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt\nsudo update-ca-certificates", "completion": "This shell code Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "language": "shell", "type": "code-example_explanation", "context": "Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9165"} {"prompt": "Complete this none code:\n\nUpdating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.", "completion": "Running hooks in /etc/ca-certificates/update.d....\ndone.", "language": "none", "type": "code-example", "context": "Generate Certificates Manually | You can use 
the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9166"} {"prompt": "Explain this none code:\n\nUpdating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d....\ndone.", "completion": "This none code Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "language": "none", "type": "code-example_explanation", "context": "Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9167"} {"prompt": "Complete this text code:\n\ncurl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz\ntar xzf easy-rsa.tar.gz", "completion": "cd easy-rsa-master/easyrsa3\n./easyrsa init-pki", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Generate a new certificate authority (CA). --batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9168"} {"prompt": "Explain this text code:\n\ncurl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz\ntar xzf easy-rsa.tar.gz\ncd easy-rsa-master/easyrsa3\n./easyrsa init-pki", "completion": "This text code Generate Certificates Manually | Generate a new certificate authority (CA). 
--batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate a new certificate authority (CA). --batch sets automatic mode; --req-cn specifies the Common Name (CN) for the CA's new root certificate.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9169"} {"prompt": "Explain this text code:\n\n./easyrsa --batch \"--req-cn=${MASTER_IP}@`date +%s`\" build-ca nopass", "completion": "This text code Generate Certificates Manually | Generate server certificate and key.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate server certificate and key.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9170"} {"prompt": "Complete this text code:\n\n./easyrsa --subject-alt-name=\"IP:${MASTER_IP},\"\\\n\"IP:${MASTER_CLUSTER_IP},\"\\\n\"DNS:kubernetes,\"\\\n\"DNS:kubernetes.default,\"\\", "completion": "\"DNS:kubernetes.default.svc,\"\\\n\"DNS:kubernetes.default.svc.cluster,\"\\\n\"DNS:kubernetes.default.svc.cluster.local\" \\\n--days=10000 \\\nbuild-server-full server nopass", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9171"} {"prompt": "Explain this text code:\n\n./easyrsa --subject-alt-name=\"IP:${MASTER_IP},\"\\\n\"IP:${MASTER_CLUSTER_IP},\"\\\n\"DNS:kubernetes,\"\\\n\"DNS:kubernetes.default,\"\\\n\"DNS:kubernetes.default.svc,\"\\\n\"DNS:kubernetes.default.svc.cluster,\"\\\n\"DNS:kubernetes.default.svc.cluster.local\" \\\n--days=10000 \\\nbuild-server-full server nopass", "completion": "This text code 
Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Copy pki/ca.crt, pki/issued/server.crt, and pki/private/server.key to your directory.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9172"} {"prompt": "Explain this text code:\n\n--client-ca-file=/yourdirectory/ca.crt\n--tls-cert-file=/yourdirectory/server.crt\n--tls-private-key-file=/yourdirectory/server.key", "completion": "This text code Generate Certificates Manually | openssl can manually generate certificates for your cluster.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | openssl can manually generate certificates for your cluster.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9173"} {"prompt": "Explain this text code:\n\nopenssl req -x509 -new -nodes -key ca.key -subj \"/CN=${MASTER_IP}\" -days 10000 -out ca.crt", "completion": "This text code Generate Certificates Manually | Generate a server.key with 2048bit:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Generate a server.key with 2048bit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9174"} {"prompt": "Complete this json code:\n\n[ req ]\ndefault_bits = 2048\nprompt = no\ndefault_md = sha256\nreq_extensions = req_ext\ndistinguished_name = dn\n\n[ dn ]\nC = <country>\nST = <state>\nL = <city>\nO = <organization>\nOU = <organization unit>\nCN = <MASTER_IP>\n\n[ req_ext ]", "completion": "subjectAltName = @alt_names\n\n[ alt_names ]\nDNS.1 = kubernetes\nDNS.2 = kubernetes.default\nDNS.3 = kubernetes.default.svc\nDNS.4 = kubernetes.default.svc.cluster\nDNS.5 = 
kubernetes.default.svc.cluster.local\nIP.1 = <MASTER_IP>\nIP.2 = <MASTER_CLUSTER_IP>\n\n[ v3_ext ]\nauthorityKeyIdentifier=keyid,issuer:always\nbasicConstraints=CA:FALSE\nkeyUsage=keyEncipherment,dataEncipherment\nextendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=@alt_names", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Generate the certificate signing request based on the config file:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9175"} {"prompt": "Explain this json code:\n\n[ req ]\ndefault_bits = 2048\nprompt = no\ndefault_md = sha256\nreq_extensions = req_ext\ndistinguished_name = dn\n\n[ dn ]\nC = <country>\nST = <state>\nL = <city>\nO = <organization>\nOU = <organization unit>\nCN = <MASTER_IP>\n\n[ req_ext ]\nsubjectAltName = @alt_names\n\n[ alt_names ]\nDNS.1 = kubernetes\nDNS.2 = kubernetes.default\nDNS.3 = kubernetes.default.svc\nDNS.4 = kubernetes.default.svc.cluster\nDNS.5 = kubernetes.default.svc.cluster.local\nIP.1 = <MASTER_IP>\nIP.2 = <MASTER_CLUSTER_IP>\n\n[ v3_ext ]\nauthorityKeyIdentifier=keyid,issuer:always\nbasicConstraints=CA:FALSE\nkeyUsage=keyEncipherment,dataEncipherment\nextendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=@alt_names", "completion": "This json code Generate Certificates Manually | Generate the certificate signing request based on the config file:", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Generate the certificate signing request based on the config file:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9176"} {"prompt": "Explain this text code:\n\nopenssl req -new -key server.key -out server.csr -config csr.conf", "completion": "This text code Generate Certificates Manually | Generate the server certificate using the ca.key, ca.crt and server.csr:", "language": "text", "type": 
"code-example_explanation", "context": "Generate Certificates Manually | Generate the server certificate using the ca.key, ca.crt and server.csr:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9177"} {"prompt": "Explain this text code:\n\nopenssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \\\n -CAcreateserial -out server.crt -days 10000 \\\n -extensions v3_ext -extfile csr.conf -sha256", "completion": "This text code Generate Certificates Manually | View the certificate signing request:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | View the certificate signing request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9178"} {"prompt": "Complete this text code:\n\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl\nchmod +x cfssl\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson", "completion": "chmod +x cfssljson\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo\nchmod +x cfssl-certinfo", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9179"} {"prompt": "Explain this text code:\n\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl\nchmod +x cfssl\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson\nchmod +x cfssljson\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo\nchmod +x cfssl-certinfo", "completion": 
"This text code Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a directory to hold the artifacts and initialize cfssl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9180"} {"prompt": "Complete this text code:\n\nmkdir cert\ncd cert", "completion": "../cfssl print-defaults config > config.json\n../cfssl print-defaults csr > csr.json", "language": "text", "type": "code-example", "context": "Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9181"} {"prompt": "Explain this text code:\n\nmkdir cert\ncd cert\n../cfssl print-defaults config > config.json\n../cfssl print-defaults csr > csr.json", "completion": "This text code Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a JSON config file for generating the CA file, for example, ca-config.json:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9182"} {"prompt": "Complete this json code:\n\n{\n \"signing\": {\n \"default\": {\n \"expiry\": \"8760h\"\n },\n \"profiles\": {\n \"kubernetes\": {\n \"usages\": [\n \"signing\",", "completion": " \"key encipherment\",\n \"server auth\",\n \"client auth\"\n ],\n \"expiry\": \"8760h\"\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. 
Be sure to replace the values marked with angle brackets with real values you want to use.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9183"} {"prompt": "Explain this json code:\n\n{\n \"signing\": {\n \"default\": {\n \"expiry\": \"8760h\"\n },\n \"profiles\": {\n \"kubernetes\": {\n \"usages\": [\n \"signing\",\n \"key encipherment\",\n \"server auth\",\n \"client auth\"\n ],\n \"expiry\": \"8760h\"\n }\n }\n }\n}", "completion": "This json code Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. Be sure to replace the values marked with angle brackets with real values you want to use.", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Create a JSON config file for CA certificate signing request (CSR), for example, ca-csr.json. Be sure to replace the values marked with angle brackets with real values you want to use.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9184"} {"prompt": "Complete this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\":[{", "completion": " \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9185"} {"prompt": "Explain this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\":[{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "completion": 
"This json code Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Generate CA key (ca-key.pem) and certificate (ca.pem):", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9186"} {"prompt": "Explain this text code:\n\n../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca", "completion": "This text code Generate Certificates Manually | Create a JSON config file for generating keys and certificates for the API server, for example, server-csr.json. Be sure to replace the values in angle brackets with real values you want to use. The <", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | Create a JSON config file for generating keys and certificates for the API server, for example, server-csr.json. Be sure to replace the values in angle brackets with real values you want to use. 
The <", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9187"} {"prompt": "Complete this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"hosts\": [\n \"127.0.0.1\",\n \"<MASTER_IP>\",\n \"<MASTER_CLUSTER_IP>\",\n \"kubernetes\",\n \"kubernetes.default\",\n \"kubernetes.default.svc\",\n \"kubernetes.default.svc.cluster\",\n \"kubernetes.default.svc.cluster.local\"\n ],", "completion": " \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\": [{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "language": "json", "type": "api-response", "context": "Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_completion_9188"} {"prompt": "Explain this json code:\n\n{\n \"CN\": \"kubernetes\",\n \"hosts\": [\n \"127.0.0.1\",\n \"<MASTER_IP>\",\n \"<MASTER_CLUSTER_IP>\",\n \"kubernetes\",\n \"kubernetes.default\",\n \"kubernetes.default.svc\",\n \"kubernetes.default.svc.cluster\",\n \"kubernetes.default.svc.cluster.local\"\n ],\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\": [{\n \"C\": \"<country>\",\n \"ST\": \"<state>\",\n \"L\": \"<city>\",\n \"O\": \"<organization>\",\n \"OU\": \"<organization unit>\"\n }]\n}", "completion": "This json code Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "language": "json", "type": "api-response_explanation", "context": "Generate Certificates Manually | Generate the key and certificate for the API server, which are by default saved into file server-key.pem and server.pem respectively:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9189"} {"prompt": "Explain this text code:\n\n../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \\\n --config=ca-config.json -profile=kubernetes \\\n server-csr.json | ../cfssljson -bare server", "completion": "This text code Generate Certificates Manually | A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9190"} {"prompt": "Explain this text code:\n\nsudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt\nsudo update-ca-certificates", "completion": "This text code Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "language": "text", "type": "code-example_explanation", "context": "Generate Certificates Manually | You can use the certificates.k8s.io API to provision x509 certificates to use for authentication as documented in the Managing TLS in a cluster task page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/certificates/", "id": "k8s_00367_code_explain_9191"} {"prompt": "Complete this yaml code:\n\n# Run this command on the node where kubelet is running\nmkdir -p /etc/kubernetes/manifests/\ncat <<EOF >/etc/kubernetes/manifests/static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole", "completion": "spec:\n 
containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "language": "yaml", "type": "pod-definition", "context": "Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file.See Set Kubelet Parameters Via A Configuration File for more information.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9192"} {"prompt": "Explain this yaml code:\n\n# Run this command on the node where kubelet is running\nmkdir -p /etc/kubernetes/manifests/\ncat <<EOF >/etc/kubernetes/manifests/static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "completion": "This yaml code Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file.See Set Kubelet Parameters Via A Configuration File for more information.", "language": "yaml", "type": "pod-definition_explanation", "context": "Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file.See Set Kubelet Parameters Via A Configuration File for more information.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9193"} {"prompt": "Explain this text code:\n\n# Run this command on the node where the kubelet is running\nsystemctl restart kubelet", "completion": "This text code Create static Pods | Kubelet periodically downloads a file specified by --manifest-url=<URL> argument and interprets it as a JSON/YAML file that contains Pod definitions. 
Similar to how filesystem-hosted manifests work, t", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Kubelet periodically downloads a file specified by --manifest-url=<URL> argument and interprets it as a JSON/YAML file that contains Pod definitions. Similar to how filesystem-hosted manifests work, t", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9194"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:", "completion": " containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP", "language": "yaml", "type": "pod-definition", "context": "Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. On Fedora, edit /etc/kubernetes/kubelet to include this line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9195"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP", "completion": "This yaml code Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. On Fedora, edit /etc/kubernetes/kubelet to include this line:", "language": "yaml", "type": "pod-definition_explanation", "context": "Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. 
On Fedora, edit /etc/kubernetes/kubelet to include this line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9196"} {"prompt": "Explain this text code:\n\nKUBELET_ARGS=\"--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=<manifest-url>\"", "completion": "This text code Create static Pods | Restart the kubelet. On Fedora, you would run:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Restart the kubelet. On Fedora, you would run:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9197"} {"prompt": "Explain this text code:\n\n# Run this command on the node where the kubelet is running\nsystemctl restart kubelet", "completion": "This text code Create static Pods | When the kubelet starts, it automatically starts all defined static Pods. As you have defined a static Pod and restarted the kubelet, the new static Pod should already be running.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | When the kubelet starts, it automatically starts all defined static Pods. 
As you have defined a static Pod and restarted the kubelet, the new static Pod should already be running.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9198"} {"prompt": "Explain this text code:\n\n# Run this command on the node where the kubelet is running\ncrictl ps", "completion": "This text code Create static Pods | The output might be something like:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | The output might be something like:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9199"} {"prompt": "Explain this text code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\n129fd7d382018 docker.io/library/nginx@sha256:... 11 minutes ago Running web 0 34533c6729106", "completion": "This text code Create static Pods | You can see the mirror Pod on the API server:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | You can see the mirror Pod on the API server:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9200"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nstatic-web-my-node1 1/1 Running 0 2m", "completion": "This text code Create static Pods | Labels from the static Pod are propagated into the mirror Pod. You can use those labels as normal via selectors, etc.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Labels from the static Pod are propagated into the mirror Pod. 
You can use those labels as normal via selectors, etc.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9201"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nstatic-web-my-node1 1/1 Running 0 4s", "completion": "This text code Create static Pods | Back on your node where the kubelet is running, you can try to stop the container manually. You'll see that, after a time, the kubelet will notice and will restart the Pod automatically:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Back on your node where the kubelet is running, you can try to stop the container manually. You'll see that, after a time, the kubelet will notice and will restart the Pod automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9202"} {"prompt": "Complete this text code:\n\n# Run these commands on the node where the kubelet is running\ncrictl stop 129fd7d382018 # replace with the ID of your container", "completion": "sleep 20\ncrictl ps", "language": "text", "type": "code-example", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9203"} {"prompt": "Explain this text code:\n\n# Run these commands on the node where the kubelet is running\ncrictl stop 129fd7d382018 # replace with the ID of your container\nsleep 20\ncrictl ps", "completion": "This text code Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9204"} {"prompt": "Explain this text code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\n89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106", "completion": "This text code Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9205"} {"prompt": "Explain this text code:\n\n# Run these commands on the node where the container is running\ncrictl logs <container_id>", "completion": "This text code Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9206"} {"prompt": "Explain this text code:\n\n10.240.0.48 - - [16/Nov/2022:12:45:49 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.48 - - [16/Nov/2022:12:45:50 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.48 - - [16/Nove/2022:12:45:51 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"", "completion": "This text code Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes 
nodes with crictl.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9207"} {"prompt": "Complete this text code:\n\n# This assumes you are using filesystem-hosted static Pod configuration\n# Run these commands on the node where the container is running\n#\nmv /etc/kubernetes/manifests/static-web.yaml /tmp\nsleep 20", "completion": "crictl ps\n# You see that no nginx container is running\nmv /tmp/static-web.yaml /etc/kubernetes/manifests/\nsleep 20\ncrictl ps", "language": "text", "type": "code-example", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9208"} {"prompt": "Explain this text code:\n\n# This assumes you are using filesystem-hosted static Pod configuration\n# Run these commands on the node where the container is running\n#\nmv /etc/kubernetes/manifests/static-web.yaml /tmp\nsleep 20\ncrictl ps\n# You see that no nginx container is running\nmv /tmp/static-web.yaml /etc/kubernetes/manifests/\nsleep 20\ncrictl ps", "completion": "This text code Create static Pods |", "language": "text", "type": "code-example_explanation", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9209"} {"prompt": "Explain this text code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\nf427638871c35 docker.io/library/nginx@sha256:... 
19 seconds ago Running web 1 34533c6729106", "completion": "This text code Create static Pods |", "language": "text", "type": "code-example_explanation", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9210"} {"prompt": "Complete this shell code:\n\n# Run this command on the node where kubelet is running\nmkdir -p /etc/kubernetes/manifests/\ncat <<EOF >/etc/kubernetes/manifests/static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole", "completion": "spec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "language": "shell", "type": "code-example", "context": "Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file. See Set Kubelet Parameters Via A Configuration File for more information.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9211"} {"prompt": "Explain this shell code:\n\n# Run this command on the node where kubelet is running\nmkdir -p /etc/kubernetes/manifests/\ncat <<EOF >/etc/kubernetes/manifests/static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "completion": "This shell code Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file. See Set Kubelet Parameters Via A Configuration File for more information.", "language": "shell", "type": "code-example_explanation", "context": "Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file. See Set Kubelet Parameters Via A Configuration File for more information.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9212"} {"prompt": "Explain this shell code:\n\n# Run this command on the node where the kubelet is running\nsystemctl restart kubelet", "completion": "This shell code Create static Pods | Kubelet periodically downloads a file specified by --manifest-url=<URL> argument and interprets it as a JSON/YAML file that contains Pod definitions. Similar to how filesystem-hosted manifests work, t", "language": "shell", "type": "code-example_explanation", "context": "Create static Pods | Kubelet periodically downloads a file specified by --manifest-url=<URL> argument and interprets it as a JSON/YAML file that contains Pod definitions. Similar to how filesystem-hosted manifests work, t", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9213"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:", "completion": " containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP", "language": "yaml", "type": "pod-definition", "context": "Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. On Fedora, edit /etc/kubernetes/kubelet to include this line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9214"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP", "completion": "This yaml code Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. 
On Fedora, edit /etc/kubernetes/kubelet to include this line:", "language": "yaml", "type": "pod-definition_explanation", "context": "Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. On Fedora, edit /etc/kubernetes/kubelet to include this line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9215"} {"prompt": "Explain this shell code:\n\nKUBELET_ARGS=\"--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=<manifest-url>\"", "completion": "This shell code Create static Pods | Restart the kubelet. On Fedora, you would run:", "language": "shell", "type": "code-example_explanation", "context": "Create static Pods | Restart the kubelet. On Fedora, you would run:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9216"} {"prompt": "Explain this shell code:\n\n# Run this command on the node where the kubelet is running\nsystemctl restart kubelet", "completion": "This shell code Create static Pods | When the kubelet starts, it automatically starts all defined static Pods. As you have defined a static Pod and restarted the kubelet, the new static Pod should already be running.", "language": "shell", "type": "code-example_explanation", "context": "Create static Pods | When the kubelet starts, it automatically starts all defined static Pods. 
As you have defined a static Pod and restarted the kubelet, the new static Pod should already be running.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9217"} {"prompt": "Explain this shell code:\n\n# Run this command on the node where the kubelet is running\ncrictl ps", "completion": "This shell code Create static Pods | The output might be something like:", "language": "shell", "type": "code-example_explanation", "context": "Create static Pods | The output might be something like:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9218"} {"prompt": "Explain this console code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\n129fd7d382018 docker.io/library/nginx@sha256:... 11 minutes ago Running web 0 34533c6729106", "completion": "This console code Create static Pods | You can see the mirror Pod on the API server:", "language": "console", "type": "code-example_explanation", "context": "Create static Pods | You can see the mirror Pod on the API server:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9219"} {"prompt": "Explain this bash code:\n\ndocker.io/library/nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31", "completion": "This bash code Create static Pods | You can see the mirror Pod on the API server:", "language": "bash", "type": "shell-command_explanation", "context": "Create static Pods | You can see the mirror Pod on the API server:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9220"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nstatic-web-my-node1 1/1 Running 0 2m", "completion": "This text code Create static Pods | Labels from the static Pod are propagated into the mirror Pod. 
You can use those labels as normal via selectors, etc.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Labels from the static Pod are propagated into the mirror Pod. You can use those labels as normal via selectors, etc.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9221"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nstatic-web-my-node1 1/1 Running 0 4s", "completion": "This text code Create static Pods | Back on your node where the kubelet is running, you can try to stop the container manually. You'll see that, after a time, the kubelet will notice and will restart the Pod automatically:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Back on your node where the kubelet is running, you can try to stop the container manually. You'll see that, after a time, the kubelet will notice and will restart the Pod automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9222"} {"prompt": "Complete this shell code:\n\n# Run these commands on the node where the kubelet is running\ncrictl stop 129fd7d382018 # replace with the ID of your container", "completion": "sleep 20\ncrictl ps", "language": "shell", "type": "code-example", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9223"} {"prompt": "Explain this shell code:\n\n# Run these commands on the node where the kubelet is running\ncrictl stop 129fd7d382018 # replace with the ID of your container\nsleep 20\ncrictl ps", "completion": "This shell code Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "language": "shell", 
"type": "code-example_explanation", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9224"} {"prompt": "Explain this console code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\n89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106", "completion": "This console code Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "language": "console", "type": "code-example_explanation", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9225"} {"prompt": "Explain this shell code:\n\n# Run these commands on the node where the container is running\ncrictl logs <container_id>", "completion": "This shell code Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "language": "shell", "type": "code-example_explanation", "context": "Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9226"} {"prompt": "Explain this console code:\n\n10.240.0.48 - - [16/Nov/2022:12:45:49 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.48 - - [16/Nov/2022:12:45:50 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.48 - - [16/Nove/2022:12:45:51 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"", "completion": "This console code Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes 
nodes with crictl.", "language": "console", "type": "code-example_explanation", "context": "Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9227"} {"prompt": "Complete this shell code:\n\n# This assumes you are using filesystem-hosted static Pod configuration\n# Run these commands on the node where the container is running\n#\nmv /etc/kubernetes/manifests/static-web.yaml /tmp\nsleep 20", "completion": "crictl ps\n# You see that no nginx container is running\nmv /tmp/static-web.yaml /etc/kubernetes/manifests/\nsleep 20\ncrictl ps", "language": "shell", "type": "code-example", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9228"} {"prompt": "Explain this shell code:\n\n# This assumes you are using filesystem-hosted static Pod configuration\n# Run these commands on the node where the container is running\n#\nmv /etc/kubernetes/manifests/static-web.yaml /tmp\nsleep 20\ncrictl ps\n# You see that no nginx container is running\nmv /tmp/static-web.yaml /etc/kubernetes/manifests/\nsleep 20\ncrictl ps", "completion": "This shell code Create static Pods |", "language": "shell", "type": "code-example_explanation", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9229"} {"prompt": "Explain this console code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\nf427638871c35 docker.io/library/nginx@sha256:... 
19 seconds ago Running web 1 34533c6729106", "completion": "This console code Create static Pods |", "language": "console", "type": "code-example_explanation", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9230"} {"prompt": "Complete this yaml code:\n\n# Run this command on the node where kubelet is running\nmkdir -p /etc/kubernetes/manifests/\ncat <<EOF >/etc/kubernetes/manifests/static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole", "completion": "spec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "language": "yaml", "type": "pod-definition", "context": "Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file. See Set Kubelet Parameters Via A Configuration File for more information.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9231"} {"prompt": "Explain this yaml code:\n\n# Run this command on the node where kubelet is running\nmkdir -p /etc/kubernetes/manifests/\ncat <<EOF >/etc/kubernetes/manifests/static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "completion": "This yaml code Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file. See Set Kubelet Parameters Via A Configuration File for more information.", "language": "yaml", "type": "pod-definition_explanation", "context": "Create static Pods | Configure the kubelet on that node to set a staticPodPath value in the kubelet configuration file. See Set Kubelet Parameters Via A Configuration File for more information.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9232"} {"prompt": "Explain this text code:\n\n# Run this command on the node where the kubelet is running\nsystemctl restart kubelet", "completion": "This text code Create static Pods | Kubelet periodically downloads a file specified by --manifest-url=<URL> argument and interprets it as a JSON/YAML file that contains Pod definitions. Similar to how filesystem-hosted manifests work, t", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Kubelet periodically downloads a file specified by --manifest-url=<URL> argument and interprets it as a JSON/YAML file that contains Pod definitions. Similar to how filesystem-hosted manifests work, t", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9233"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:", "completion": " containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP", "language": "yaml", "type": "pod-definition", "context": "Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. On Fedora, edit /etc/kubernetes/kubelet to include this line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9234"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\n labels:\n role: myrole\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP", "completion": "This yaml code Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. 
On Fedora, edit /etc/kubernetes/kubelet to include this line:", "language": "yaml", "type": "pod-definition_explanation", "context": "Create static Pods | Configure the kubelet on your selected node to use this web manifest by running it with --manifest-url=<manifest-url>. On Fedora, edit /etc/kubernetes/kubelet to include this line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9235"} {"prompt": "Explain this text code:\n\nKUBELET_ARGS=\"--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=<manifest-url>\"", "completion": "This text code Create static Pods | Restart the kubelet. On Fedora, you would run:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Restart the kubelet. On Fedora, you would run:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9236"} {"prompt": "Explain this text code:\n\n# Run this command on the node where the kubelet is running\nsystemctl restart kubelet", "completion": "This text code Create static Pods | When the kubelet starts, it automatically starts all defined static Pods. As you have defined a static Pod and restarted the kubelet, the new static Pod should already be running.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | When the kubelet starts, it automatically starts all defined static Pods. 
As you have defined a static Pod and restarted the kubelet, the new static Pod should already be running.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9237"} {"prompt": "Explain this text code:\n\n# Run this command on the node where the kubelet is running\ncrictl ps", "completion": "This text code Create static Pods | The output might be something like:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | The output might be something like:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9238"} {"prompt": "Explain this text code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\n129fd7d382018 docker.io/library/nginx@sha256:... 11 minutes ago Running web 0 34533c6729106", "completion": "This text code Create static Pods | You can see the mirror Pod on the API server:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | You can see the mirror Pod on the API server:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9239"} {"prompt": "Complete this text code:\n\n# Run these commands on the node where the kubelet is running\ncrictl stop 129fd7d382018 # replace with the ID of your container", "completion": "sleep 20\ncrictl ps", "language": "text", "type": "code-example", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9240"} {"prompt": "Explain this text code:\n\n# Run these commands on the node where the kubelet is running\ncrictl stop 129fd7d382018 # replace with the ID of your container\nsleep 20\ncrictl ps", "completion": "This text code Create static Pods | Once you identify the right container, you 
can get the logs for that container with crictl:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9241"} {"prompt": "Explain this text code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\n89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106", "completion": "This text code Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | Once you identify the right container, you can get the logs for that container with crictl:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9242"} {"prompt": "Explain this text code:\n\n# Run these commands on the node where the container is running\ncrictl logs <container_id>", "completion": "This text code Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9243"} {"prompt": "Explain this text code:\n\n10.240.0.48 - - [16/Nov/2022:12:45:49 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.48 - - [16/Nov/2022:12:45:50 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.48 - - [16/Nov/2022:12:45:51 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"", "completion": "This text code Create static Pods | To find more about how to 
debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "language": "text", "type": "code-example_explanation", "context": "Create static Pods | To find more about how to debug using crictl, please visit Debugging Kubernetes nodes with crictl.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9244"} {"prompt": "Complete this text code:\n\n# This assumes you are using filesystem-hosted static Pod configuration\n# Run these commands on the node where the container is running\n#\nmv /etc/kubernetes/manifests/static-web.yaml /tmp\nsleep 20", "completion": "crictl ps\n# You see that no nginx container is running\nmv /tmp/static-web.yaml /etc/kubernetes/manifests/\nsleep 20\ncrictl ps", "language": "text", "type": "code-example", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_completion_9245"} {"prompt": "Explain this text code:\n\n# This assumes you are using filesystem-hosted static Pod configuration\n# Run these commands on the node where the container is running\n#\nmv /etc/kubernetes/manifests/static-web.yaml /tmp\nsleep 20\ncrictl ps\n# You see that no nginx container is running\nmv /tmp/static-web.yaml /etc/kubernetes/manifests/\nsleep 20\ncrictl ps", "completion": "This text code Create static Pods |", "language": "text", "type": "code-example_explanation", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9246"} {"prompt": "Explain this text code:\n\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\nf427638871c35 docker.io/library/nginx@sha256:... 
19 seconds ago Running web 1 34533c6729106", "completion": "This text code Create static Pods |", "language": "text", "type": "code-example_explanation", "context": "Create static Pods |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/", "id": "k8s_00368_code_explain_9247"} {"prompt": "Explain this bash code:\n\nkubectl create configmap sport --from-literal=sport=football", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the ConfigMap sport mounted as a volume into the Pod's only container.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the ConfigMap sport mounted as a volume into the Pod's only container.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9248"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9249"} {"prompt": "Explain this yaml 
code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9250"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-volume.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9251"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-volume", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You 
should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9252"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-volume-6b976dfdcf-qxvbm 1/1 Running 0 72s", "completion": "configmap-volume-6b976dfdcf-skpvm 1/1 Running 0 72s\nconfigmap-volume-6b976dfdcf-tbc6r 1/1 Running 0 72s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | On each node where one of these Pods is running, the kubelet fetches the data for that ConfigMap and translates it to files in a local volume. The kubelet then mounts that volume into the container, a", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9253"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-volume-6b976dfdcf-qxvbm 1/1 Running 0 72s\nconfigmap-volume-6b976dfdcf-skpvm 1/1 Running 0 72s\nconfigmap-volume-6b976dfdcf-tbc6r 1/1 Running 0 72s", "completion": "This text code Updating Configuration via a ConfigMap | On each node where one of these Pods is running, the kubelet fetches the data for that ConfigMap and translates it to files in a local volume. The kubelet then mounts that volume into the container, a", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | On each node where one of these Pods is running, the kubelet fetches the data for that ConfigMap and translates it to files in a local volume. 
The kubelet then mounts that volume into the container, a", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9254"} {"prompt": "Explain this text code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployments/configmap-volume", "completion": "This text code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9255"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/configmap-volume-76d9c5678f-x5rgj\nThu Jan 4 14:06:46 UTC 2024 My preferred sport is football\nThu Jan 4 14:06:56 UTC 2024 My preferred sport is football", "completion": "Thu Jan 4 14:07:06 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:16 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:26 UTC 2024 My preferred sport is football", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9256"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/configmap-volume-76d9c5678f-x5rgj\nThu Jan 4 14:06:46 UTC 2024 My preferred sport is football\nThu Jan 4 14:06:56 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:06 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:16 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:26 UTC 2024 My preferred sport is football", "completion": "This text code Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating 
Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9257"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n sport: cricket\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-01-04T14:05:06Z\"\n name: sport\n namespace: default\n resourceVersion: \"1743935\"\n uid: 024ee001-fe72-487e-872e-34d6464a8a23", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9258"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n sport: cricket\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T14:05:06Z\"\n name: sport\n namespace: default\n resourceVersion: \"1743935\"\n uid: 024ee001-fe72-487e-872e-34d6464a8a23", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see the following output:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9259"} {"prompt": "Complete this text code:\n\nThu Jan 4 14:11:36 UTC 2024 My preferred sport is football\nThu Jan 4 14:11:46 UTC 2024 My preferred sport is football", "completion": "Thu Jan 4 14:11:56 UTC 2024 My preferred sport is football\nThu Jan 4 14:12:06 UTC 2024 My preferred sport is cricket\nThu Jan 4 14:12:16 UTC 2024 My preferred sport is cricket", 
"language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | When you have a ConfigMap that is mapped into a running Pod using either a configMap volume or a projected volume, and you update that ConfigMap, the running Pod sees the update almost immediately.How", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9260"} {"prompt": "Explain this text code:\n\nThu Jan 4 14:11:36 UTC 2024 My preferred sport is football\nThu Jan 4 14:11:46 UTC 2024 My preferred sport is football\nThu Jan 4 14:11:56 UTC 2024 My preferred sport is football\nThu Jan 4 14:12:06 UTC 2024 My preferred sport is cricket\nThu Jan 4 14:12:16 UTC 2024 My preferred sport is cricket", "completion": "This text code Updating Configuration via a ConfigMap | When you have a ConfigMap that is mapped into a running Pod using either a configMap volume or a projected volume, and you update that ConfigMap, the running Pod sees the update almost immediately.How", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | When you have a ConfigMap that is mapped into a running Pod using either a configMap volume or a projected volume, and you update that ConfigMap, the running Pod sees the update almost immediately.How", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9261"} {"prompt": "Explain this bash code:\n\nkubectl create configmap fruits --from-literal=fruits=apples", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with an environment variable configured via the ConfigMap fruits.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with an environment variable configured via the 
ConfigMap fruits.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9262"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9263"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var\n spec:\n containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9264"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-envvar.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9265"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-env-var", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9266"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-59cfc64f7d-74d7z 1/1 Running 0 46s", "completion": "configmap-env-var-59cfc64f7d-c4wmj 1/1 Running 0 46s\nconfigmap-env-var-59cfc64f7d-dpr98 1/1 Running 0 46s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | The key-value pair in the ConfigMap is configured as an environment variable in the container of the Pod. 
Check this by viewing the logs of one Pod that belongs to the Deployment.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9267"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-59cfc64f7d-74d7z 1/1 Running 0 46s\nconfigmap-env-var-59cfc64f7d-c4wmj 1/1 Running 0 46s\nconfigmap-env-var-59cfc64f7d-dpr98 1/1 Running 0 46s", "completion": "This text code Updating Configuration via a ConfigMap | The key-value pair in the ConfigMap is configured as an environment variable in the container of the Pod. Check this by viewing the logs of one Pod that belongs to the Deployment.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | The key-value pair in the ConfigMap is configured as an environment variable in the container of the Pod. Check this by viewing the logs of one Pod that belongs to the Deployment.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9268"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/configmap-env-var-7c994f7769-l74nq\nThu Jan 4 16:07:06 UTC 2024 The basket is full of apples", "completion": "Thu Jan 4 16:07:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:07:26 UTC 2024 The basket is full of apples", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9269"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/configmap-env-var-7c994f7769-l74nq\nThu Jan 4 16:07:06 UTC 2024 The basket is full of apples\nThu Jan 4 16:07:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:07:26 UTC 2024 The basket is full of apples", "completion": "This text code 
Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9270"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n fruits: mangoes\nkind: ConfigMap\n# You can leave the existing metadata as they are.", "completion": "# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T16:04:19Z\"\n name: fruits\n namespace: default\n resourceVersion: \"1749472\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9271"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n fruits: mangoes\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T16:04:19Z\"\n name: fruits\n namespace: default\n resourceVersion: \"1749472\"", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see the following output:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9272"} {"prompt": "Explain this text code:\n\n# As the text explains, the output does NOT change\nkubectl logs deployments/configmap-env-var --follow", "completion": "This text code Updating Configuration via a ConfigMap | Notice that the output remains unchanged, even though you edited the ConfigMap:", 
"language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Notice that the output remains unchanged, even though you edited the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9273"} {"prompt": "Complete this text code:\n\nThu Jan 4 16:12:56 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:06 UTC 2024 The basket is full of apples", "completion": "Thu Jan 4 16:13:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:26 UTC 2024 The basket is full of apples", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | You can trigger that replacement. Perform a rollout for the Deployment, using kubectl rollout:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9274"} {"prompt": "Explain this text code:\n\nThu Jan 4 16:12:56 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:06 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:26 UTC 2024 The basket is full of apples", "completion": "This text code Updating Configuration via a ConfigMap | You can trigger that replacement. Perform a rollout for the Deployment, using kubectl rollout:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You can trigger that replacement. 
Perform a rollout for the Deployment, using kubectl rollout:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9275"} {"prompt": "Complete this text code:\n\n# Trigger the rollout\nkubectl rollout restart deployment configmap-env-var", "completion": "\n# Wait for the rollout to complete\nkubectl rollout status deployment configmap-env-var --watch=true", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Next, check the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9276"} {"prompt": "Explain this text code:\n\n# Trigger the rollout\nkubectl rollout restart deployment configmap-env-var\n\n# Wait for the rollout to complete\nkubectl rollout status deployment configmap-env-var --watch=true", "completion": "This text code Updating Configuration via a ConfigMap | Next, check the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Next, check the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9277"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nconfigmap-env-var 3/3 3 3 12m", "completion": "This text code Updating Configuration via a ConfigMap | Check the Pods:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Check the Pods:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9278"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-env-var", "completion": "This bash code Updating Configuration via a ConfigMap | The rollout causes Kubernetes to make 
a new ReplicaSet for the Deployment; that means the existing Pods eventually terminate, and new ones are created. After a few seconds, you should see an output simi", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | The rollout causes Kubernetes to make a new ReplicaSet for the Deployment; that means the existing Pods eventually terminate, and new ones are created. After a few seconds, you should see an output simi", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9279"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-6d94d89bf5-2ph2l 1/1 Running 0 13s", "completion": "configmap-env-var-6d94d89bf5-74twx 1/1 Running 0 8s\nconfigmap-env-var-6d94d89bf5-d5vx8 1/1 Running 0 11s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9280"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-6d94d89bf5-2ph2l 1/1 Running 0 13s\nconfigmap-env-var-6d94d89bf5-74twx 1/1 Running 0 8s\nconfigmap-env-var-6d94d89bf5-d5vx8 1/1 Running 0 11s", "completion": "This text code Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9281"} {"prompt": "Explain this text code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployment/configmap-env-var", "completion": "This text code Updating
Configuration via a ConfigMap | You should see an output similar to the below:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to the below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9282"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/configmap-env-var-6d9ff89fb6-bzcf6\nThu Jan 4 16:30:35 UTC 2024 The basket is full of mangoes", "completion": "Thu Jan 4 16:30:45 UTC 2024 The basket is full of mangoes\nThu Jan 4 16:30:55 UTC 2024 The basket is full of mangoes", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | This demonstrates the scenario of updating environment variables in a Pod that are derived from a ConfigMap. Changes to the ConfigMap values are applied to the Pod during the subsequent rollout. If Po", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9283"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/configmap-env-var-6d9ff89fb6-bzcf6\nThu Jan 4 16:30:35 UTC 2024 The basket is full of mangoes\nThu Jan 4 16:30:45 UTC 2024 The basket is full of mangoes\nThu Jan 4 16:30:55 UTC 2024 The basket is full of mangoes", "completion": "This text code Updating Configuration via a ConfigMap | This demonstrates the scenario of updating environment variables in a Pod that are derived from a ConfigMap. Changes to the ConfigMap values are applied to the Pod during the subsequent rollout. If Po", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | This demonstrates the scenario of updating environment variables in a Pod that are derived from a ConfigMap. Changes to the ConfigMap values are applied to the Pod during the subsequent rollout. 
If Po", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9284"} {"prompt": "Explain this bash code:\n\nkubectl create configmap color --from-literal=color=red", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with two containers. The two containers share an emptyDir volume that they use to communicate. The first container runs a", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with two containers. The two containers share an emptyDir volume that they use to communicate. The first container runs a", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9285"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume", "completion": " configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create 
the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9286"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9287"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-two-containers.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9288"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-two-containers", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9289"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-two-containers-565fb6d4f4-2xhxf 2/2 Running 0 20s", "completion": "configmap-two-containers-565fb6d4f4-g5v4j 2/2 Running 0 20s\nconfigmap-two-containers-565fb6d4f4-mzsmf 2/2 Running 0 20s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9290"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-two-containers-565fb6d4f4-2xhxf 2/2 Running 0 20s\nconfigmap-two-containers-565fb6d4f4-g5v4j 2/2 Running 0 20s\nconfigmap-two-containers-565fb6d4f4-mzsmf 2/2 Running 0 20s", "completion": "This text code Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9291"} {"prompt": "Explain this bash code:\n\nkubectl 
expose deployment configmap-two-containers --name=configmap-service --port=8080 --target-port=80", "completion": "This bash code Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9292"} {"prompt": "Explain this text code:\n\n# this stays running in the background\nkubectl port-forward service/configmap-service 8080:8080 &", "completion": "This text code Updating Configuration via a ConfigMap | Access the service.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Access the service.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9293"} {"prompt": "Explain this text code:\n\nFri Jan 5 08:08:22 UTC 2024 My preferred color is red", "completion": "This text code Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9294"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n color: blue\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-01-05T08:12:05Z\"\n name: color\n namespace: configmap\n resourceVersion: \"1801272\"\n uid: 80d33e4a-cbb4-4bc9-ba8c-544c68e425d6", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for a few seconds.", 
"source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9295"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n color: blue\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-05T08:12:05Z\"\n name: color\n namespace: configmap\n resourceVersion: \"1801272\"\n uid: 80d33e4a-cbb4-4bc9-ba8c-544c68e425d6", "completion": "This yaml code Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9296"} {"prompt": "Explain this text code:\n\n# Cancel this when you're happy with it (Ctrl-C)\nwhile true; do curl --connect-timeout 7.5 http://localhost:8080; sleep 10; done", "completion": "This text code Updating Configuration via a ConfigMap | You should see the output change as follows:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the output change as follows:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9297"} {"prompt": "Complete this text code:\n\nFri Jan 5 08:14:00 UTC 2024 My preferred color is red\nFri Jan 5 08:14:02 UTC 2024 My preferred color is red\nFri Jan 5 08:14:20 UTC 2024 My preferred color is red", "completion": "Fri Jan 5 08:14:22 UTC 2024 My preferred color is red\nFri Jan 5 08:14:32 UTC 2024 My preferred color is blue\nFri Jan 5 08:14:43 UTC 2024 My preferred color is blue\nFri Jan 5 08:15:00 UTC 2024 My preferred color is blue", "language": "text", "type": 
"code-example", "context": "Updating Configuration via a ConfigMap | The above scenario can be replicated by using a Sidecar Container as a helper container to write the HTML file.As a Sidecar Container is conceptually an Init Container, it is guaranteed to start befor", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9298"} {"prompt": "Explain this text code:\n\nFri Jan 5 08:14:00 UTC 2024 My preferred color is red\nFri Jan 5 08:14:02 UTC 2024 My preferred color is red\nFri Jan 5 08:14:20 UTC 2024 My preferred color is red\nFri Jan 5 08:14:22 UTC 2024 My preferred color is red\nFri Jan 5 08:14:32 UTC 2024 My preferred color is blue\nFri Jan 5 08:14:43 UTC 2024 My preferred color is blue\nFri Jan 5 08:15:00 UTC 2024 My preferred color is blue", "completion": "This text code Updating Configuration via a ConfigMap | The above scenario can be replicated by using a Sidecar Container as a helper container to write the HTML file.As a Sidecar Container is conceptually an Init Container, it is guaranteed to start befor", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | The above scenario can be replicated by using a Sidecar Container as a helper container to write the HTML file.As a Sidecar Container is conceptually an Init Container, it is guaranteed to start befor", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9299"} {"prompt": "Explain this bash code:\n\nkubectl create configmap color --from-literal=color=blue", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with a main container and a sidecar container. 
The two containers share an emptyDir volume that they use to communicate.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with a main container and a sidecar container. The two containers share an emptyDir volume that they use to communicate.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9300"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:", "completion": " name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9301"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n 
matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9302"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-and-sidecar-container.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9303"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-sidecar-container", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", 
"type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9304"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-sidecar-container-5fb59f558b-87rp7 2/2 Running 0 94s", "completion": "configmap-sidecar-container-5fb59f558b-ccs7s 2/2 Running 0 94s\nconfigmap-sidecar-container-5fb59f558b-wnmgk 2/2 Running 0 94s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9305"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-sidecar-container-5fb59f558b-87rp7 2/2 Running 0 94s\nconfigmap-sidecar-container-5fb59f558b-ccs7s 2/2 Running 0 94s\nconfigmap-sidecar-container-5fb59f558b-wnmgk 2/2 Running 0 94s", "completion": "This text code Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9306"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment configmap-sidecar-container --name=configmap-sidecar-service --port=8081 --target-port=80", "completion": "This bash code Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Use kubectl to forward the port:", 
"source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9307"} {"prompt": "Explain this text code:\n\n# this stays running in the background\nkubectl port-forward service/configmap-sidecar-service 8081:8081 &", "completion": "This text code Updating Configuration via a ConfigMap | Access the service.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Access the service.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9308"} {"prompt": "Explain this text code:\n\nSat Feb 17 13:09:05 UTC 2024 My preferred color is blue", "completion": "This text code Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9309"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n color: green\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-02-17T12:20:30Z\"\n name: color\n namespace: default\n resourceVersion: \"1054\"\n uid: e40bb34c-58df-4280-8bea-6ed16edccfaa", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9310"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n color: green\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n 
creationTimestamp: \"2024-02-17T12:20:30Z\"\n name: color\n namespace: default\n resourceVersion: \"1054\"\n uid: e40bb34c-58df-4280-8bea-6ed16edccfaa", "completion": "This yaml code Updating Configuration via a ConfigMap | Loop over the service URL for a few seconds.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for a few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9311"} {"prompt": "Explain this text code:\n\n# Cancel this when you're happy with it (Ctrl-C)\nwhile true; do curl --connect-timeout 7.5 http://localhost:8081; sleep 10; done", "completion": "This text code Updating Configuration via a ConfigMap | You should see the output change as follows:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the output change as follows:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9312"} {"prompt": "Complete this text code:\n\nSat Feb 17 13:12:35 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:45 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:55 UTC 2024 My preferred color is blue", "completion": "Sat Feb 17 13:13:05 UTC 2024 My preferred color is blue\nSat Feb 17 13:13:15 UTC 2024 My preferred color is green\nSat Feb 17 13:13:25 UTC 2024 My preferred color is green\nSat Feb 17 13:13:35 UTC 2024 My preferred color is green", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Immutable ConfigMaps are especially useful for configuration that is constant and is not expected to change over time. 
Marking a ConfigMap as immutable allows a performance improvement where the kubelet", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9313"} {"prompt": "Explain this text code:\n\nSat Feb 17 13:12:35 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:45 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:55 UTC 2024 My preferred color is blue\nSat Feb 17 13:13:05 UTC 2024 My preferred color is blue\nSat Feb 17 13:13:15 UTC 2024 My preferred color is green\nSat Feb 17 13:13:25 UTC 2024 My preferred color is green\nSat Feb 17 13:13:35 UTC 2024 My preferred color is green", "completion": "This text code Updating Configuration via a ConfigMap | Immutable ConfigMaps are especially useful for configuration that is constant and is not expected to change over time. Marking a ConfigMap as immutable allows a performance improvement where the kubelet", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Immutable ConfigMaps are especially useful for configuration that is constant and is not expected to change over time. 
Marking a ConfigMap as immutable allows a performance improvement where the kubelet", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9314"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20150801", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9315"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9316"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/configmap/immutable-configmap.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the Immutable ConfigMap company-name-20150801 mounted as a volume into the Pod's only container.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the Immutable ConfigMap company-name-20150801 mounted as a volume into the Pod's only container.", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9317"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9318"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | 
Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9319"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-immutable-configmap-as-volume.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9320"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9321"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Running 0 62s", "completion": "immutable-configmap-volume-78b6fbff95-7vcj4 1/1 Running 0 62s\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Running 0 62s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | The Pod's container refers to the data defined in the ConfigMap and uses it to print a report to stdout. 
You can check this report by viewing the logs for one of the Pods in that Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9322"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Running 0 62s\nimmutable-configmap-volume-78b6fbff95-7vcj4 1/1 Running 0 62s\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Running 0 62s", "completion": "This text code Updating Configuration via a ConfigMap | The Pod's container refers to the data defined in the ConfigMap and uses it to print a report to stdout. You can check this report by viewing the logs for one of the Pods in that Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | The Pod's container refers to the data defined in the ConfigMap and uses it to print a report to stdout. You can check this report by viewing the logs for one of the Pods in that Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9323"} {"prompt": "Explain this text code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployments/immutable-configmap-volume", "completion": "This text code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9324"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-78b6fbff95-5gsfh\nWed Mar 20 03:52:34 UTC 2024 The name of the company is ACME, Inc.", "completion": "Wed Mar 20 03:52:44 UTC 2024 The name of the 
company is ACME, Inc.\nWed Mar 20 03:52:54 UTC 2024 The name of the company is ACME, Inc.", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Create a new immutable ConfigMap by using the manifest shown below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9325"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-78b6fbff95-5gsfh\nWed Mar 20 03:52:34 UTC 2024 The name of the company is ACME, Inc.\nWed Mar 20 03:52:44 UTC 2024 The name of the company is ACME, Inc.\nWed Mar 20 03:52:54 UTC 2024 The name of the company is ACME, Inc.", "completion": "This text code Updating Configuration via a ConfigMap | Create a new immutable ConfigMap by using the manifest shown below:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Create a new immutable ConfigMap by using the manifest shown below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9326"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9327"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see an 
output similar to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9328"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/configmap/new-immutable-configmap.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9329"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\ncompany-name-20150801 1 22m\ncompany-name-20240312 1 24s", "completion": "This text code Updating Configuration via a ConfigMap | Modify the Deployment to reference the new ConfigMap.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Modify the Deployment to reference the new ConfigMap.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9330"} {"prompt": "Complete this text code:\n\nvolumes:\n- configMap:", "completion": " defaultMode: 420\n name: company-name-20240312 # Update this field\n name: config-volume", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9331"} {"prompt": "Explain this text code:\n\nvolumes:\n- configMap:\n defaultMode: 420\n name: company-name-20240312 # Update this field\n name: config-volume", 
"completion": "This text code Updating Configuration via a ConfigMap | You should see the following output:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9332"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume", "completion": "This bash code Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9333"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 13s\nimmutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 14s", "completion": "immutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 15s\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-7vcj4 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Terminating 0 32m", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9334"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 13s\nimmutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 14s\nimmutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 
15s\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-7vcj4 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Terminating 0 32m", "completion": "This text code Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9335"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 43s", "completion": "immutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 44s\nimmutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 45s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9336"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 43s\nimmutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 44s\nimmutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 45s", "completion": "This text code Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9337"} {"prompt": "Explain this shell code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs 
deployment/immutable-configmap-volume", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to the below:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to the below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9338"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-5fdb88fcc8-n5jx4\nWed Mar 20 04:24:17 UTC 2024 The name of the company is Fiktivesunternehmen GmbH", "completion": "Wed Mar 20 04:24:27 UTC 2024 The name of the company is Fiktivesunternehmen GmbH\nWed Mar 20 04:24:37 UTC 2024 The name of the company is Fiktivesunternehmen GmbH", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Once all the deployments have migrated to use the new immutable ConfigMap, it is advised to delete the old one.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9339"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-5fdb88fcc8-n5jx4\nWed Mar 20 04:24:17 UTC 2024 The name of the company is Fiktivesunternehmen GmbH\nWed Mar 20 04:24:27 UTC 2024 The name of the company is Fiktivesunternehmen GmbH\nWed Mar 20 04:24:37 UTC 2024 The name of the company is Fiktivesunternehmen GmbH", "completion": "This text code Updating Configuration via a ConfigMap | Once all the deployments have migrated to use the new immutable ConfigMap, it is advised to delete the old one.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Once all the deployments have migrated to use the new immutable ConfigMap, it is advised to delete the old one.", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9340"} {"prompt": "Complete this bash code:\n\nkubectl delete deployment configmap-volume configmap-env-var configmap-two-containers configmap-sidecar-container immutable-configmap-volume\nkubectl delete service configmap-service configmap-sidecar-service", "completion": "kubectl delete configmap sport fruits color company-name-20240312\n\nkubectl delete configmap company-name-20150801 # In case it was not handled during the task execution", "language": "bash", "type": "kubectl-command", "context": "Updating Configuration via a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9341"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment configmap-volume configmap-env-var configmap-two-containers configmap-sidecar-container immutable-configmap-volume\nkubectl delete service configmap-service configmap-sidecar-service\nkubectl delete configmap sport fruits color company-name-20240312\n\nkubectl delete configmap company-name-20150801 # In case it was not handled during the task execution", "completion": "This bash code Updating Configuration via a ConfigMap |", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9342"} {"prompt": "Explain this shell code:\n\nkubectl create configmap sport --from-literal=sport=football", "completion": "This shell code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the ConfigMap sport mounted as a volume into the Pod's only container.", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a 
Deployment manifest with the ConfigMap sport mounted as a volume into the Pod's only container.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9343"} {"prompt": "Explain this text code:\n\ndeployments/deployment-with-configmap-as-volume.yaml", "completion": "This text code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9344"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9345"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: 
configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9346"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-volume.yaml", "completion": "This shell code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9347"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-volume", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9348"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-volume-6b976dfdcf-qxvbm 1/1 
Running 0 72s", "completion": "configmap-volume-6b976dfdcf-skpvm 1/1 Running 0 72s\nconfigmap-volume-6b976dfdcf-tbc6r 1/1 Running 0 72s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | On each node where one of these Pods is running, the kubelet fetches the data for that ConfigMap and translates it to files in a local volume. The kubelet then mounts that volume into the container, a", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9349"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-volume-6b976dfdcf-qxvbm 1/1 Running 0 72s\nconfigmap-volume-6b976dfdcf-skpvm 1/1 Running 0 72s\nconfigmap-volume-6b976dfdcf-tbc6r 1/1 Running 0 72s", "completion": "This text code Updating Configuration via a ConfigMap | On each node where one of these Pods is running, the kubelet fetches the data for that ConfigMap and translates it to files in a local volume. The kubelet then mounts that volume into the container, a", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | On each node where one of these Pods is running, the kubelet fetches the data for that ConfigMap and translates it to files in a local volume. 
The kubelet then mounts that volume into the container, a", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9350"} {"prompt": "Explain this shell code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployments/configmap-volume", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9351"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/configmap-volume-76d9c5678f-x5rgj\nThu Jan 4 14:06:46 UTC 2024 My preferred sport is football\nThu Jan 4 14:06:56 UTC 2024 My preferred sport is football", "completion": "Thu Jan 4 14:07:06 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:16 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:26 UTC 2024 My preferred sport is football", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9352"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/configmap-volume-76d9c5678f-x5rgj\nThu Jan 4 14:06:46 UTC 2024 My preferred sport is football\nThu Jan 4 14:06:56 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:06 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:16 UTC 2024 My preferred sport is football\nThu Jan 4 14:07:26 UTC 2024 My preferred sport is football", "completion": "This text code Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating 
Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9353"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n sport: cricket\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-01-04T14:05:06Z\"\n name: sport\n namespace: default\n resourceVersion: \"1743935\"\n uid: 024ee001-fe72-487e-872e-34d6464a8a23", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9354"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n sport: cricket\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T14:05:06Z\"\n name: sport\n namespace: default\n resourceVersion: \"1743935\"\n uid: 024ee001-fe72-487e-872e-34d6464a8a23", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see the following output:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9355"} {"prompt": "Complete this text code:\n\nThu Jan 4 14:11:36 UTC 2024 My preferred sport is football\nThu Jan 4 14:11:46 UTC 2024 My preferred sport is football", "completion": "Thu Jan 4 14:11:56 UTC 2024 My preferred sport is football\nThu Jan 4 14:12:06 UTC 2024 My preferred sport is cricket\nThu Jan 4 14:12:16 UTC 2024 My preferred sport is cricket", 
"language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | When you have a ConfigMap that is mapped into a running Pod using either a configMap volume or a projected volume, and you update that ConfigMap, the running Pod sees the update almost immediately.How", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9356"} {"prompt": "Explain this text code:\n\nThu Jan 4 14:11:36 UTC 2024 My preferred sport is football\nThu Jan 4 14:11:46 UTC 2024 My preferred sport is football\nThu Jan 4 14:11:56 UTC 2024 My preferred sport is football\nThu Jan 4 14:12:06 UTC 2024 My preferred sport is cricket\nThu Jan 4 14:12:16 UTC 2024 My preferred sport is cricket", "completion": "This text code Updating Configuration via a ConfigMap | When you have a ConfigMap that is mapped into a running Pod using either a configMap volume or a projected volume, and you update that ConfigMap, the running Pod sees the update almost immediately.How", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | When you have a ConfigMap that is mapped into a running Pod using either a configMap volume or a projected volume, and you update that ConfigMap, the running Pod sees the update almost immediately.How", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9357"} {"prompt": "Explain this shell code:\n\nkubectl create configmap fruits --from-literal=fruits=apples", "completion": "This shell code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with an environment variable configured via the ConfigMap fruits.", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with an environment variable configured via the 
ConfigMap fruits.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9358"} {"prompt": "Explain this text code:\n\ndeployments/deployment-with-configmap-as-envvar.yaml", "completion": "This text code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9359"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9360"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var\n spec:\n containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n 
key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9361"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-envvar.yaml", "completion": "This shell code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9362"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-env-var", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9363"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-59cfc64f7d-74d7z 1/1 Running 0 46s", "completion": "configmap-env-var-59cfc64f7d-c4wmj 1/1 Running 0 46s\nconfigmap-env-var-59cfc64f7d-dpr98 1/1 Running 0 46s", "language": "text", "type": "code-example", 
"context": "Updating Configuration via a ConfigMap | The key-value pair in the ConfigMap is configured as an environment variable in the container of the Pod. Check this by viewing the logs of one Pod that belongs to the Deployment.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9364"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-59cfc64f7d-74d7z 1/1 Running 0 46s\nconfigmap-env-var-59cfc64f7d-c4wmj 1/1 Running 0 46s\nconfigmap-env-var-59cfc64f7d-dpr98 1/1 Running 0 46s", "completion": "This text code Updating Configuration via a ConfigMap | The key-value pair in the ConfigMap is configured as an environment variable in the container of the Pod. Check this by viewing the logs of one Pod that belongs to the Deployment.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | The key-value pair in the ConfigMap is configured as an environment variable in the container of the Pod. 
Check this by viewing the logs of one Pod that belongs to the Deployment.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9365"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/configmap-env-var-7c994f7769-l74nq\nThu Jan 4 16:07:06 UTC 2024 The basket is full of apples", "completion": "Thu Jan 4 16:07:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:07:26 UTC 2024 The basket is full of apples", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9366"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/configmap-env-var-7c994f7769-l74nq\nThu Jan 4 16:07:06 UTC 2024 The basket is full of apples\nThu Jan 4 16:07:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:07:26 UTC 2024 The basket is full of apples", "completion": "This text code Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9367"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n fruits: mangoes\nkind: ConfigMap\n# You can leave the existing metadata as they are.", "completion": "# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T16:04:19Z\"\n name: fruits\n namespace: default\n resourceVersion: \"1749472\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", 
"id": "k8s_00369_code_completion_9368"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n fruits: mangoes\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T16:04:19Z\"\n name: fruits\n namespace: default\n resourceVersion: \"1749472\"", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see the following output:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9369"} {"prompt": "Explain this shell code:\n\n# As the text explains, the output does NOT change\nkubectl logs deployments/configmap-env-var --follow", "completion": "This shell code Updating Configuration via a ConfigMap | Notice that the output remains unchanged, even though you edited the ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Notice that the output remains unchanged, even though you edited the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9370"} {"prompt": "Complete this text code:\n\nThu Jan 4 16:12:56 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:06 UTC 2024 The basket is full of apples", "completion": "Thu Jan 4 16:13:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:26 UTC 2024 The basket is full of apples", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | You can trigger that replacement. 
Perform a rollout for the Deployment, using kubectl rollout:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9371"} {"prompt": "Explain this text code:\n\nThu Jan 4 16:12:56 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:06 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:16 UTC 2024 The basket is full of apples\nThu Jan 4 16:13:26 UTC 2024 The basket is full of apples", "completion": "This text code Updating Configuration via a ConfigMap | You can trigger that replacement. Perform a rollout for the Deployment, using kubectl rollout:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You can trigger that replacement. Perform a rollout for the Deployment, using kubectl rollout:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9372"} {"prompt": "Complete this shell code:\n\n# Trigger the rollout\nkubectl rollout restart deployment configmap-env-var", "completion": "\n# Wait for the rollout to complete\nkubectl rollout status deployment configmap-env-var --watch=true", "language": "shell", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Next, check the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9373"} {"prompt": "Explain this shell code:\n\n# Trigger the rollout\nkubectl rollout restart deployment configmap-env-var\n\n# Wait for the rollout to complete\nkubectl rollout status deployment configmap-env-var --watch=true", "completion": "This shell code Updating Configuration via a ConfigMap | Next, check the Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Next, check the Deployment:", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9374"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nconfigmap-env-var 3/3 3 3 12m", "completion": "This text code Updating Configuration via a ConfigMap | Check the Pods:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Check the Pods:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9375"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-env-var", "completion": "This shell code Updating Configuration via a ConfigMap | The rollout causes Kubernetes to make a new ReplicaSet for the Deployment; that means the existing Pods eventually terminate, and new ones are created. After a few seconds, you should see an output simi", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | The rollout causes Kubernetes to make a new ReplicaSet for the Deployment; that means the existing Pods eventually terminate, and new ones are created. 
After a few seconds, you should see an output simi", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9376"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-6d94d89bf5-2ph2l 1/1 Running 0 13s", "completion": "configmap-env-var-6d94d89bf5-74twx 1/1 Running 0 8s\nconfigmap-env-var-6d94d89bf5-d5vx8 1/1 Running 0 11s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9377"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-env-var-6d94d89bf5-2ph2l 1/1 Running 0 13s\nconfigmap-env-var-6d94d89bf5-74twx 1/1 Running 0 8s\nconfigmap-env-var-6d94d89bf5-d5vx8 1/1 Running 0 11s", "completion": "This text code Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9378"} {"prompt": "Explain this shell code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployment/configmap-env-var", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to the below:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to the below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9379"} {"prompt": "Complete this text code:\n\nFound 3 pods, 
using pod/configmap-env-var-6d9ff89fb6-bzcf6\nThu Jan 4 16:30:35 UTC 2024 The basket is full of mangoes", "completion": "Thu Jan 4 16:30:45 UTC 2024 The basket is full of mangoes\nThu Jan 4 16:30:55 UTC 2024 The basket is full of mangoes", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | This demonstrates the scenario of updating environment variables in a Pod that are derived from a ConfigMap. Changes to the ConfigMap values are applied to the Pod during the subsequent rollout. If Po", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9380"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/configmap-env-var-6d9ff89fb6-bzcf6\nThu Jan 4 16:30:35 UTC 2024 The basket is full of mangoes\nThu Jan 4 16:30:45 UTC 2024 The basket is full of mangoes\nThu Jan 4 16:30:55 UTC 2024 The basket is full of mangoes", "completion": "This text code Updating Configuration via a ConfigMap | This demonstrates the scenario of updating environment variables in a Pod that are derived from a ConfigMap. Changes to the ConfigMap values are applied to the Pod during the subsequent rollout. If Po", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | This demonstrates the scenario of updating environment variables in a Pod that are derived from a ConfigMap. Changes to the ConfigMap values are applied to the Pod during the subsequent rollout. If Po", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9381"} {"prompt": "Explain this shell code:\n\nkubectl create configmap color --from-literal=color=red", "completion": "This shell code Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with two containers. 
The two containers share an emptyDir volume that they use to communicate. The first container runs a", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with two containers. The two containers share an emptyDir volume that they use to communicate. The first container runs a", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9382"} {"prompt": "Explain this text code:\n\ndeployments/deployment-with-configmap-two-containers.yaml", "completion": "This text code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9383"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume", "completion": " configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": "deployment-definition", 
"context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9384"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9385"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-two-containers.yaml", "completion": "This shell code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9386"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-two-containers", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9387"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-two-containers-565fb6d4f4-2xhxf 2/2 Running 0 20s", "completion": "configmap-two-containers-565fb6d4f4-g5v4j 2/2 Running 0 20s\nconfigmap-two-containers-565fb6d4f4-mzsmf 2/2 Running 0 20s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9388"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-two-containers-565fb6d4f4-2xhxf 2/2 Running 0 20s\nconfigmap-two-containers-565fb6d4f4-g5v4j 2/2 Running 0 20s\nconfigmap-two-containers-565fb6d4f4-mzsmf 2/2 Running 0 20s", "completion": "This text code Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9389"} {"prompt": "Explain this shell code:\n\nkubectl 
expose deployment configmap-two-containers --name=configmap-service --port=8080 --target-port=80", "completion": "This shell code Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9390"} {"prompt": "Explain this shell code:\n\n# this stays running in the background\nkubectl port-forward service/configmap-service 8080:8080 &", "completion": "This shell code Updating Configuration via a ConfigMap | Access the service.", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Access the service.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9391"} {"prompt": "Explain this text code:\n\nFri Jan 5 08:08:22 UTC 2024 My preferred color is red", "completion": "This text code Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9392"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n color: blue\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-01-05T08:12:05Z\"\n name: color\n namespace: configmap\n resourceVersion: \"1801272\"\n uid: 80d33e4a-cbb4-4bc9-ba8c-544c68e425d6", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for a few seconds.",
"source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9393"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n color: blue\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-05T08:12:05Z\"\n name: color\n namespace: configmap\n resourceVersion: \"1801272\"\n uid: 80d33e4a-cbb4-4bc9-ba8c-544c68e425d6", "completion": "This yaml code Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9394"} {"prompt": "Explain this shell code:\n\n# Cancel this when you're happy with it (Ctrl-C)\nwhile true; do curl --connect-timeout 7.5 http://localhost:8080; sleep 10; done", "completion": "This shell code Updating Configuration via a ConfigMap | You should see the output change as follows:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the output change as follows:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9395"} {"prompt": "Complete this text code:\n\nFri Jan 5 08:14:00 UTC 2024 My preferred color is red\nFri Jan 5 08:14:02 UTC 2024 My preferred color is red\nFri Jan 5 08:14:20 UTC 2024 My preferred color is red", "completion": "Fri Jan 5 08:14:22 UTC 2024 My preferred color is red\nFri Jan 5 08:14:32 UTC 2024 My preferred color is blue\nFri Jan 5 08:14:43 UTC 2024 My preferred color is blue\nFri Jan 5 08:15:00 UTC 2024 My preferred color is blue", "language": "text", "type": 
"code-example", "context": "Updating Configuration via a ConfigMap | The above scenario can be replicated by using a Sidecar Container as a helper container to write the HTML file.As a Sidecar Container is conceptually an Init Container, it is guaranteed to start befor", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9396"} {"prompt": "Explain this text code:\n\nFri Jan 5 08:14:00 UTC 2024 My preferred color is red\nFri Jan 5 08:14:02 UTC 2024 My preferred color is red\nFri Jan 5 08:14:20 UTC 2024 My preferred color is red\nFri Jan 5 08:14:22 UTC 2024 My preferred color is red\nFri Jan 5 08:14:32 UTC 2024 My preferred color is blue\nFri Jan 5 08:14:43 UTC 2024 My preferred color is blue\nFri Jan 5 08:15:00 UTC 2024 My preferred color is blue", "completion": "This text code Updating Configuration via a ConfigMap | The above scenario can be replicated by using a Sidecar Container as a helper container to write the HTML file.As a Sidecar Container is conceptually an Init Container, it is guaranteed to start befor", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | The above scenario can be replicated by using a Sidecar Container as a helper container to write the HTML file.As a Sidecar Container is conceptually an Init Container, it is guaranteed to start befor", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9397"} {"prompt": "Explain this shell code:\n\nkubectl create configmap color --from-literal=color=blue", "completion": "This shell code Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with a main container and a sidecar container. 
The two containers share an emptyDir volume that they use to communicate.", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with a main container and a sidecar container. The two containers share an emptyDir volume that they use to communicate.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9398"} {"prompt": "Explain this text code:\n\ndeployments/deployment-with-configmap-and-sidecar-container.yaml", "completion": "This text code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9399"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:", "completion": " name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": 
"deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9400"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9401"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-and-sidecar-container.yaml", "completion": "This shell code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Check the 
pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9402"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-sidecar-container", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9403"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-sidecar-container-5fb59f558b-87rp7 2/2 Running 0 94s", "completion": "configmap-sidecar-container-5fb59f558b-ccs7s 2/2 Running 0 94s\nconfigmap-sidecar-container-5fb59f558b-wnmgk 2/2 Running 0 94s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9404"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nconfigmap-sidecar-container-5fb59f558b-87rp7 2/2 Running 0 94s\nconfigmap-sidecar-container-5fb59f558b-ccs7s 2/2 Running 0 94s\nconfigmap-sidecar-container-5fb59f558b-wnmgk 2/2 Running 0 94s", "completion": "This text code Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Expose the Deployment (the kubectl tool creates a Service for you):", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9405"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment configmap-sidecar-container --name=configmap-sidecar-service --port=8081 --target-port=80", "completion": "This shell code Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9406"} {"prompt": "Explain this shell code:\n\n# this stays running in the background\nkubectl port-forward service/configmap-sidecar-service 8081:8081 &", "completion": "This shell code Updating Configuration via a ConfigMap | Access the service.", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Access the service.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9407"} {"prompt": "Explain this text code:\n\nSat Feb 17 13:09:05 UTC 2024 My preferred color is blue", "completion": "This text code Updating Configuration via a ConfigMap | Edit the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Edit the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9408"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n color: green\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-02-17T12:20:30Z\"\n name: color\n namespace: default\n resourceVersion: \"1054\"\n 
uid: e40bb34c-58df-4280-8bea-6ed16edccfaa", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for a few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9409"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n color: green\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-02-17T12:20:30Z\"\n name: color\n namespace: default\n resourceVersion: \"1054\"\n uid: e40bb34c-58df-4280-8bea-6ed16edccfaa", "completion": "This yaml code Updating Configuration via a ConfigMap | Loop over the service URL for a few seconds.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for a few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9410"} {"prompt": "Explain this shell code:\n\n# Cancel this when you're happy with it (Ctrl-C)\nwhile true; do curl --connect-timeout 7.5 http://localhost:8081; sleep 10; done", "completion": "This shell code Updating Configuration via a ConfigMap | You should see the output change as follows:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the output change as follows:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9411"} {"prompt": "Complete this text code:\n\nSat Feb 17 13:12:35 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:45 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:55 UTC 2024 My preferred color is blue", "completion": "Sat Feb 17 13:13:05 UTC 2024 My preferred color is blue\nSat Feb
17 13:13:15 UTC 2024 My preferred color is green\nSat Feb 17 13:13:25 UTC 2024 My preferred color is green\nSat Feb 17 13:13:35 UTC 2024 My preferred color is green", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Immutable ConfigMaps are especially used for configuration that is constant and is not expected to change over time. Marking a ConfigMap as immutable allows a performance improvement where the kubelet", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9412"} {"prompt": "Explain this text code:\n\nSat Feb 17 13:12:35 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:45 UTC 2024 My preferred color is blue\nSat Feb 17 13:12:55 UTC 2024 My preferred color is blue\nSat Feb 17 13:13:05 UTC 2024 My preferred color is blue\nSat Feb 17 13:13:15 UTC 2024 My preferred color is green\nSat Feb 17 13:13:25 UTC 2024 My preferred color is green\nSat Feb 17 13:13:35 UTC 2024 My preferred color is green", "completion": "This text code Updating Configuration via a ConfigMap | Immutable ConfigMaps are especially used for configuration that is constant and is not expected to change over time. Marking a ConfigMap as immutable allows a performance improvement where the kubelet", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Immutable ConfigMaps are especially used for configuration that is constant and is not expected to change over time. 
Marking a ConfigMap as immutable allows a performance improvement where the kubelet", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9413"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20150801", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9414"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9415"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/configmap/immutable-configmap.yaml", "completion": "This shell code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the Immutable ConfigMap company-name-20150801 mounted as a volume into the Pod's only container.", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the Immutable ConfigMap company-name-20150801 mounted as a volume into the Pod's only container.", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9416"} {"prompt": "Explain this text code:\n\ndeployments/deployment-with-immutable-configmap-as-volume.yaml", "completion": "This text code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9417"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9418"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: 
immutable-configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9419"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-immutable-configmap-as-volume.yaml", "completion": "This shell code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9420"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9421"} {"prompt": "Complete this text code:\n\nNAME READY 
STATUS RESTARTS AGE\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Running 0 62s", "completion": "immutable-configmap-volume-78b6fbff95-7vcj4 1/1 Running 0 62s\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Running 0 62s", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | The Pod's container refers to the data defined in the ConfigMap and uses it to print a report to stdout. You can check this report by viewing the logs for one of the Pods in that Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9422"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Running 0 62s\nimmutable-configmap-volume-78b6fbff95-7vcj4 1/1 Running 0 62s\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Running 0 62s", "completion": "This text code Updating Configuration via a ConfigMap | The Pod's container refers to the data defined in the ConfigMap and uses it to print a report to stdout. You can check this report by viewing the logs for one of the Pods in that Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | The Pod's container refers to the data defined in the ConfigMap and uses it to print a report to stdout. 
You can check this report by viewing the logs for one of the Pods in that Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9423"} {"prompt": "Explain this shell code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployments/immutable-configmap-volume", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9424"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-78b6fbff95-5gsfh\nWed Mar 20 03:52:34 UTC 2024 The name of the company is ACME, Inc.", "completion": "Wed Mar 20 03:52:44 UTC 2024 The name of the company is ACME, Inc.\nWed Mar 20 03:52:54 UTC 2024 The name of the company is ACME, Inc.", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Create a new immutable ConfigMap by using the manifest shown below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9425"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-78b6fbff95-5gsfh\nWed Mar 20 03:52:34 UTC 2024 The name of the company is ACME, Inc.\nWed Mar 20 03:52:44 UTC 2024 The name of the company is ACME, Inc.\nWed Mar 20 03:52:54 UTC 2024 The name of the company is ACME, Inc.", "completion": "This text code Updating Configuration via a ConfigMap | Create a new immutable ConfigMap by using the manifest shown below:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | 
Create a new immutable ConfigMap by using the manifest shown below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9426"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9427"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9428"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/configmap/new-immutable-configmap.yaml", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9429"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\ncompany-name-20150801 1 22m\ncompany-name-20240312 1 24s", "completion": 
"This text code Updating Configuration via a ConfigMap | Modify the Deployment to reference the new ConfigMap.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Modify the Deployment to reference the new ConfigMap.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9430"} {"prompt": "Complete this yaml code:\n\nvolumes:\n- configMap:", "completion": " defaultMode: 420\n name: company-name-20240312 # Update this field\n name: config-volume", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9431"} {"prompt": "Explain this yaml code:\n\nvolumes:\n- configMap:\n defaultMode: 420\n name: company-name-20240312 # Update this field\n name: config-volume", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see the following output:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9432"} {"prompt": "Explain this shell code:\n\nkubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume", "completion": "This shell code Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9433"} {"prompt": 
"Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 13s\nimmutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 14s", "completion": "immutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 15s\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-7vcj4 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Terminating 0 32m", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9434"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 13s\nimmutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 14s\nimmutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 15s\nimmutable-configmap-volume-78b6fbff95-5gsfh 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-7vcj4 1/1 Terminating 0 32m\nimmutable-configmap-volume-78b6fbff95-vdslm 1/1 Terminating 0 32m", "completion": "This text code Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9435"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 43s", "completion": "immutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 44s\nimmutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 45s", "language": "text", "type": "code-example", "context": "Updating Configuration 
via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9436"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nimmutable-configmap-volume-5fdb88fcc8-29v8n 1/1 Running 0 43s\nimmutable-configmap-volume-5fdb88fcc8-52ddd 1/1 Running 0 44s\nimmutable-configmap-volume-5fdb88fcc8-n5jx4 1/1 Running 0 45s", "completion": "This text code Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | View the logs for a Pod in this Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9437"} {"prompt": "Explain this shell code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployment/immutable-configmap-volume", "completion": "This shell code Updating Configuration via a ConfigMap | You should see an output similar to the below:", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to the below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9438"} {"prompt": "Complete this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-5fdb88fcc8-n5jx4\nWed Mar 20 04:24:17 UTC 2024 The name of the company is Fiktivesunternehmen GmbH", "completion": "Wed Mar 20 04:24:27 UTC 2024 The name of the company is Fiktivesunternehmen GmbH\nWed Mar 20 04:24:37 UTC 2024 The name of the company is Fiktivesunternehmen GmbH", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Once all the deployments have migrated to use the new immutable 
ConfigMap, it is advised to delete the old one.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9439"} {"prompt": "Explain this text code:\n\nFound 3 pods, using pod/immutable-configmap-volume-5fdb88fcc8-n5jx4\nWed Mar 20 04:24:17 UTC 2024 The name of the company is Fiktivesunternehmen GmbH\nWed Mar 20 04:24:27 UTC 2024 The name of the company is Fiktivesunternehmen GmbH\nWed Mar 20 04:24:37 UTC 2024 The name of the company is Fiktivesunternehmen GmbH", "completion": "This text code Updating Configuration via a ConfigMap | Once all the deployments have migrated to use the new immutable ConfigMap, it is advised to delete the old one.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Once all the deployments have migrated to use the new immutable ConfigMap, it is advised to delete the old one.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9440"} {"prompt": "Complete this shell code:\n\nkubectl delete deployment configmap-volume configmap-env-var configmap-two-containers configmap-sidecar-container immutable-configmap-volume\nkubectl delete service configmap-service configmap-sidecar-service", "completion": "kubectl delete configmap sport fruits color company-name-20240312\n\nkubectl delete configmap company-name-20150801 # In case it was not handled during the task execution", "language": "shell", "type": "code-example", "context": "Updating Configuration via a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9441"} {"prompt": "Explain this shell code:\n\nkubectl delete deployment configmap-volume configmap-env-var configmap-two-containers configmap-sidecar-container immutable-configmap-volume\nkubectl delete service 
configmap-service configmap-sidecar-service\nkubectl delete configmap sport fruits color company-name-20240312\n\nkubectl delete configmap company-name-20150801 # In case it was not handled during the task execution", "completion": "This shell code Updating Configuration via a ConfigMap |", "language": "shell", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9442"} {"prompt": "Explain this bash code:\n\nkubectl create configmap sport --from-literal=sport=football", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the ConfigMap sport mounted as a volume into the Pod's only container.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the ConfigMap sport mounted as a volume into the Pod's only container.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9443"} {"prompt": "Complete this yaml code:\n\ndeployments/deployment-with-configmap-as-volume.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "language": "yaml", "type": 
"deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9444"} {"prompt": "Explain this yaml code:\n\ndeployments/deployment-with-configmap-as-volume.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9445"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: 
/etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9446"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-volume\n labels:\n app.kubernetes.io/name: configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred sport is $(cat /etc/config/sport)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: sport", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9447"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-volume.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9448"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-volume", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9449"} {"prompt": "Explain this text code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployments/configmap-volume", "completion": "This text code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9450"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n sport: cricket\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-01-04T14:05:06Z\"\n name: sport\n namespace: default\n resourceVersion: \"1743935\"\n uid: 024ee001-fe72-487e-872e-34d6464a8a23", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9451"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n sport: cricket\nkind: ConfigMap\n# You can leave the existing metadata as 
they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T14:05:06Z\"\n name: sport\n namespace: default\n resourceVersion: \"1743935\"\n uid: 024ee001-fe72-487e-872e-34d6464a8a23", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see the following output:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9452"} {"prompt": "Explain this bash code:\n\nkubectl create configmap fruits --from-literal=fruits=apples", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with an environment variable configured via the ConfigMap fruits.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with an environment variable configured via the ConfigMap fruits.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9453"} {"prompt": "Complete this yaml code:\n\ndeployments/deployment-with-configmap-as-envvar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var", "completion": " spec:\n containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "language": 
"yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9454"} {"prompt": "Explain this yaml code:\n\ndeployments/deployment-with-configmap-as-envvar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var\n spec:\n containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9455"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "language": "yaml", "type": 
"deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9456"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-env-var\n labels:\n app.kubernetes.io/name: configmap-env-var\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-env-var\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-env-var\n spec:\n containers:\n - name: alpine\n image: alpine:3\n env:\n - name: FRUITS\n valueFrom:\n configMapKeyRef:\n key: fruits\n name: fruits\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The basket is full of $FRUITS\";\n sleep 10; done;\n ports:\n - containerPort: 80", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9457"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-as-envvar.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9458"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-env-var", 
"completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9459"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n fruits: mangoes\nkind: ConfigMap\n# You can leave the existing metadata as they are.", "completion": "# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T16:04:19Z\"\n name: fruits\n namespace: default\n resourceVersion: \"1749472\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9460"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n fruits: mangoes\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-04T16:04:19Z\"\n name: fruits\n namespace: default\n resourceVersion: \"1749472\"", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see the following output:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9461"} {"prompt": "Explain this text code:\n\n# As the text explains, the output does NOT change\nkubectl logs deployments/configmap-env-var --follow", "completion": "This text code Updating Configuration via a ConfigMap | Notice that the 
output remains unchanged, even though you edited the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Notice that the output remains unchanged, even though you edited the ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9462"} {"prompt": "Complete this text code:\n\n# Trigger the rollout\nkubectl rollout restart deployment configmap-env-var", "completion": "\n# Wait for the rollout to complete\nkubectl rollout status deployment configmap-env-var --watch=true", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | Next, check the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9463"} {"prompt": "Explain this text code:\n\n# Trigger the rollout\nkubectl rollout restart deployment configmap-env-var\n\n# Wait for the rollout to complete\nkubectl rollout status deployment configmap-env-var --watch=true", "completion": "This text code Updating Configuration via a ConfigMap | Next, check the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Next, check the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9464"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-env-var", "completion": "This bash code Updating Configuration via a ConfigMap | The rollout causes Kubernetes to make a new ReplicaSet for the Deployment; that means the existing Pods eventually terminate, and new ones are created. 
After few seconds, you should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | The rollout causes Kubernetes to make a new ReplicaSet for the Deployment; that means the existing Pods eventually terminate, and new ones are created. After few seconds, you should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9465"} {"prompt": "Explain this text code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployment/configmap-env-var", "completion": "This text code Updating Configuration via a ConfigMap | You should see an output similar to the below:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to the below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9466"} {"prompt": "Explain this bash code:\n\nkubectl create configmap color --from-literal=color=red", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with two containers. The two containers share an emptyDir volume that they use to communicate. The first container runs a", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with two containers. The two containers share an emptyDir volume that they use to communicate. 
The first container runs a", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9467"} {"prompt": "Complete this yaml code:\n\ndeployments/deployment-with-configmap-two-containers.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}", "completion": " - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9468"} {"prompt": "Explain this yaml code:\n\ndeployments/deployment-with-configmap-two-containers.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n 
volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9469"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume", "completion": " configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9470"} {"prompt": "Explain this yaml code:\n\napiVersion: 
apps/v1\nkind: Deployment\nmetadata:\n name: configmap-two-containers\n labels:\n app.kubernetes.io/name: configmap-two-containers\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-two-containers\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-two-containers\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n - name: alpine\n image: alpine:3\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9471"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-two-containers.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9472"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-two-containers", "completion": "This bash 
code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9473"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment configmap-two-containers --name=configmap-service --port=8080 --target-port=80", "completion": "This bash code Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9474"} {"prompt": "Explain this text code:\n\n# this stays running in the background\nkubectl port-forward service/configmap-service 8080:8080 &", "completion": "This text code Updating Configuration via a ConfigMap | Access the service.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Access the service.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9475"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n color: blue\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-01-05T08:12:05Z\"\n name: color\n namespace: configmap\n resourceVersion: \"1801272\"\n uid: 80d33e4a-cbb4-4bc9-ba8c-544c68e425d6", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9476"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n color: blue\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-01-05T08:12:05Z\"\n name: color\n namespace: configmap\n resourceVersion: \"1801272\"\n uid: 80d33e4a-cbb4-4bc9-ba8c-544c68e425d6", "completion": "This yaml code Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9477"} {"prompt": "Explain this text code:\n\n# Cancel this when you're happy with it (Ctrl-C)\nwhile true; do curl --connect-timeout 7.5 http://localhost:8080; sleep 10; done", "completion": "This text code Updating Configuration via a ConfigMap | You should see the output change as follows:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the output change as follows:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9478"} {"prompt": "Explain this bash code:\n\nkubectl create configmap color --from-literal=color=blue", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with a main container and a sidecar container. 
The two containers share an emptyDir volume that they use to communicate.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example manifest for a Deployment that manages a set of Pods, each with a main container and a sidecar container. The two containers share an emptyDir volume that they use to communicate.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9479"} {"prompt": "Complete this yaml code:\n\ndeployments/deployment-with-configmap-and-sidecar-container.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume", "completion": " configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9480"} {"prompt": "Explain this yaml code:\n\ndeployments/deployment-with-configmap-and-sidecar-container.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: 
configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9481"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:", "completion": " name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: 
config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9482"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: configmap-sidecar-container\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: configmap-sidecar-container\n template:\n metadata:\n labels:\n app.kubernetes.io/name: configmap-sidecar-container\n spec:\n volumes:\n - name: shared-data\n emptyDir: {}\n - name: config-volume\n configMap:\n name: color\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n initContainers:\n - name: alpine\n image: alpine:3\n restartPolicy: Always\n volumeMounts:\n - name: shared-data\n mountPath: /pod-data\n - name: config-volume\n mountPath: /etc/config\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) My preferred color is $(cat /etc/config/color)\" > /pod-data/index.html;\n sleep 10; done;", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9483"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-configmap-and-sidecar-container.yaml", "completion": "This bash code Updating Configuration 
via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9484"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=configmap-sidecar-container", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9485"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment configmap-sidecar-container --name=configmap-sidecar-service --port=8081 --target-port=80", "completion": "This bash code Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Use kubectl to forward the port:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9486"} {"prompt": "Explain this text code:\n\n# this stays running in the background\nkubectl port-forward service/configmap-sidecar-service 8081:8081 &", "completion": "This text code Updating Configuration via a ConfigMap | Access the service.", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | Access the service.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": 
"k8s_00369_code_explain_9487"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n color: green\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.", "completion": "metadata:\n creationTimestamp: \"2024-02-17T12:20:30Z\"\n name: color\n namespace: default\n resourceVersion: \"1054\"\n uid: e40bb34c-58df-4280-8bea-6ed16edccfaa", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9488"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n color: green\nkind: ConfigMap\n# You can leave the existing metadata as they are.\n# The values you'll see won't exactly match these.\nmetadata:\n creationTimestamp: \"2024-02-17T12:20:30Z\"\n name: color\n namespace: default\n resourceVersion: \"1054\"\n uid: e40bb34c-58df-4280-8bea-6ed16edccfaa", "completion": "This yaml code Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Loop over the service URL for few seconds.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9489"} {"prompt": "Explain this text code:\n\n# Cancel this when you're happy with it (Ctrl-C)\nwhile true; do curl --connect-timeout 7.5 http://localhost:8081; sleep 10; done", "completion": "This text code Updating Configuration via a ConfigMap | You should see the output change as follows:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the output change as follows:", "source": 
"https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9490"} {"prompt": "Complete this yaml code:\n\nconfigmap/immutable-configmap.yaml\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20150801", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9491"} {"prompt": "Explain this yaml code:\n\nconfigmap/immutable-configmap.yaml\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9492"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20150801", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9493"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"ACME, Inc.\" # existing fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: 
company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | Create the Immutable ConfigMap:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9494"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/configmap/immutable-configmap.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the Immutable ConfigMap company-name-20150801 mounted as a volume into the Pod's only container.", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Below is an example of a Deployment manifest with the Immutable ConfigMap company-name-20150801 mounted as a volume into the Pod's only container.", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9495"} {"prompt": "Complete this yaml code:\n\ndeployments/deployment-with-immutable-configmap-as-volume.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", 
"language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9496"} {"prompt": "Explain this yaml code:\n\ndeployments/deployment-with-immutable-configmap-as-volume.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9497"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\n spec:", "completion": " containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the 
company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", "language": "yaml", "type": "deployment-definition", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9498"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: immutable-configmap-volume\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\nspec:\n replicas: 3\n selector:\n matchLabels:\n app.kubernetes.io/name: immutable-configmap-volume\n template:\n metadata:\n labels:\n app.kubernetes.io/name: immutable-configmap-volume\n spec:\n containers:\n - name: alpine\n image: alpine:3\n command:\n - /bin/sh\n - -c\n - while true; do echo \"$(date) The name of the company is $(cat /etc/config/company_name)\";\n sleep 10; done;\n ports:\n - containerPort: 80\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: company-name-20150801", "completion": "This yaml code Updating Configuration via a ConfigMap | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Updating Configuration via a ConfigMap | Create the Deployment:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9499"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/deployments/deployment-with-immutable-configmap-as-volume.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | Check the pods for this Deployment to ensure they are ready (matching by selector):", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9500"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9501"} {"prompt": "Explain this text code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployments/immutable-configmap-volume", "completion": "This text code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9502"} {"prompt": "Complete this yaml code:\n\nconfigmap/new-immutable-configmap.yaml\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9503"} {"prompt": "Explain this yaml 
code:\n\nconfigmap/new-immutable-configmap.yaml\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9504"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name", "completion": "kind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "language": "yaml", "type": "kubernetes-manifest", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9505"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\ndata:\n company_name: \"Fiktivesunternehmen GmbH\" # new fictional company name\nkind: ConfigMap\nimmutable: true\nmetadata:\n name: company-name-20240312", "completion": "This yaml code Updating Configuration via a ConfigMap | You should see an output similar to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9506"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/configmap/new-immutable-configmap.yaml", "completion": "This bash code Updating Configuration via a ConfigMap | You should see an output similar to:", 
"language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9507"} {"prompt": "Complete this text code:\n\nvolumes:\n- configMap:", "completion": " defaultMode: 420\n name: company-name-20240312 # Update this field\n name: config-volume", "language": "text", "type": "code-example", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9508"} {"prompt": "Explain this text code:\n\nvolumes:\n- configMap:\n defaultMode: 420\n name: company-name-20240312 # Update this field\n name: config-volume", "completion": "This text code Updating Configuration via a ConfigMap | You should see the following output:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see the following output:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9509"} {"prompt": "Explain this bash code:\n\nkubectl get pods --selector=app.kubernetes.io/name=immutable-configmap-volume", "completion": "This bash code Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap | You should eventually see an output similar to:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9510"} {"prompt": "Explain this text code:\n\n# Pick one Pod that belongs to the Deployment, and view its logs\nkubectl logs deployment/immutable-configmap-volume", 
"completion": "This text code Updating Configuration via a ConfigMap | You should see an output similar to the below:", "language": "text", "type": "code-example_explanation", "context": "Updating Configuration via a ConfigMap | You should see an output similar to the below:", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9511"} {"prompt": "Complete this bash code:\n\nkubectl delete deployment configmap-volume configmap-env-var configmap-two-containers configmap-sidecar-container immutable-configmap-volume\nkubectl delete service configmap-service configmap-sidecar-service", "completion": "kubectl delete configmap sport fruits color company-name-20240312\n\nkubectl delete configmap company-name-20150801 # In case it was not handled during the task execution", "language": "bash", "type": "kubectl-command", "context": "Updating Configuration via a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_completion_9512"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment configmap-volume configmap-env-var configmap-two-containers configmap-sidecar-container immutable-configmap-volume\nkubectl delete service configmap-service configmap-sidecar-service\nkubectl delete configmap sport fruits color company-name-20240312\n\nkubectl delete configmap company-name-20150801 # In case it was not handled during the task execution", "completion": "This bash code Updating Configuration via a ConfigMap |", "language": "bash", "type": "kubectl-command_explanation", "context": "Updating Configuration via a ConfigMap |", "source": "https://kubernetes.io/docs/tutorials/configuration/updating-configuration-via-a-configmap/", "id": "k8s_00369_code_explain_9513"} {"prompt": "Explain this text code:\n\ngcloud container clusters create [CLUSTER_NAME] --enable-network-policy", "completion": "This text code 
Use Calico for NetworkPolicy | Example", "language": "text", "type": "code-example_explanation", "context": "Use Calico for NetworkPolicy | Example", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/", "id": "k8s_00370_code_explain_9514"} {"prompt": "Explain this text code:\n\ngcloud container clusters create my-calico-cluster --enable-network-policy", "completion": "This text code Use Calico for NetworkPolicy | To verify the deployment, use the following command.", "language": "text", "type": "code-example_explanation", "context": "Use Calico for NetworkPolicy | To verify the deployment, use the following command.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/", "id": "k8s_00370_code_explain_9515"} {"prompt": "Explain this shell code:\n\ngcloud container clusters create [CLUSTER_NAME] --enable-network-policy", "completion": "This shell code Use Calico for NetworkPolicy | Example", "language": "shell", "type": "code-example_explanation", "context": "Use Calico for NetworkPolicy | Example", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/", "id": "k8s_00370_code_explain_9516"} {"prompt": "Explain this shell code:\n\ngcloud container clusters create my-calico-cluster --enable-network-policy", "completion": "This shell code Use Calico for NetworkPolicy | To verify the deployment, use the following command.", "language": "shell", "type": "code-example_explanation", "context": "Use Calico for NetworkPolicy | To verify the deployment, use the following command.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/", "id": "k8s_00370_code_explain_9517"} {"prompt": "Explain this text code:\n\ngcloud container clusters create [CLUSTER_NAME] --enable-network-policy", "completion": "This text code Use Calico for NetworkPolicy | 
Example", "language": "text", "type": "code-example_explanation", "context": "Use Calico for NetworkPolicy | Example", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/", "id": "k8s_00370_code_explain_9518"} {"prompt": "Explain this text code:\n\ngcloud container clusters create my-calico-cluster --enable-network-policy", "completion": "This text code Use Calico for NetworkPolicy | To verify the deployment, use the following command.", "language": "text", "type": "code-example_explanation", "context": "Use Calico for NetworkPolicy | To verify the deployment, use the following command.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/", "id": "k8s_00370_code_explain_9519"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9520"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9521"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9522"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9523"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9524"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9525"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9526"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9527"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured 
by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9528"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9529"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9530"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n 
app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9531"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9532"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": 
"https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9533"} {"prompt": "Explain this text code:\n\n--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6", "completion": "This text code IPv4/IPv6 dual-stack | An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range)", "language": "text", "type": "code-example_explanation", "context": "IPv4/IPv6 dual-stack | An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9534"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9535"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9536"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9537"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9538"} {"prompt": "Explain this text code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml", "completion": "This text code IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "text", "type": "code-example_explanation", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9539"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9540"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9541"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9542"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can 
validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9543"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9544"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | When dual-stack is enabled on a 
cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9545"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9546"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9547"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 
dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9548"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9549"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9550"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9551"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9552"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9553"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9554"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9555"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9556"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9557"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:", "completion": " ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9558"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9559"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9560"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9561"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9562"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n 
- protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9563"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9564"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9565"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n 
selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the address family", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9566"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the address family", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the address family", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9567"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using
kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9568"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9569"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9570"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | You 
can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9571"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_completion_9572"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/services-networking/dual-stack/", "id": "k8s_00371_code_explain_9573"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": 
"yaml", "type": "kubernetes-manifest", "context": "Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9574"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9575"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This bash code Jobs | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9576"} {"prompt": "Complete this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n 
-Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "text", "type": "code-example", "context": "Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9577"} {"prompt": "Explain this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This text code Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... 
creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "text", "type": "code-example_explanation", "context": "Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9578"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": 
"k8s_00374_code_completion_9579"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This yaml code Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9580"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This text code Jobs | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Jobs | The 
output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9581"} {"prompt": "Explain this text code:\n\n3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778053217122680661300192787661119590921642019893809525720106548586327886593615338182796823030195203530185296899577362259941389124972177528347913151557485724245415069595082953311686172785588907509838175463746493931925506040092770167113900984882401285836160356370766010471018194295559619894676783744944825537977472684710404753464620804668425906949129331367702898915210475216205696602405803815019351125338243003558764024749647326391419927260426992279678235478163600934172164121992458631503028618297455570674983850549458858692699569092721079750930295532116534498720275596023648066549911988183479775356636980742654252786255181841757467289097777279380008164706001614524919217321721477235014144197356854816136115735255213347574184946843852332390739414333454776241686251898356948556209921922218427255025425688767179049460165346680498862723279178608578438382796797668145410
09538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901", "completion": "This text code Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "language": "text", "type": "code-example_explanation", "context": "Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9582"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9583"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9584"} {"prompt": "Explain this bash code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This bash code Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9585"} {"prompt": "Complete this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "text", "type": "code-example", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9586"} {"prompt": "Explain this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This text code Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "text", "type": "code-example_explanation", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9587"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9588"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9589"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9590"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9591"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9592"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9593"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9594"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9595"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9596"} {"prompt": "Explain this 
yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9597"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This bash code Jobs | Resume a suspended Job:", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9598"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This bash code Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9599"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime 
field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9600"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9601"} {"prompt": "Complete this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------", "completion": " Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "language": "text", "type": "code-example", "context": "Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. 
In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9602"} {"prompt": "Explain this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "completion": "This text code Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "language": "text", "type": "code-example_explanation", "context": "Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9603"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9604"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9605"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. 
Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9606"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9607"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9608"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9609"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9610"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9611"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", 
\"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9612"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9613"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This shell code Jobs | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9614"} {"prompt": "Complete this bash code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", 
"completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "bash", "type": "shell-command", "context": "Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9615"} {"prompt": "Explain this bash code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This bash code Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... 
creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "bash", "type": "shell-command_explanation", "context": "Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9616"} {"prompt": "Complete this bash code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "bash", "type": "shell-command", "context": "Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": 
"k8s_00374_code_completion_9617"} {"prompt": "Explain this bash code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This bash code Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "bash", "type": "shell-command_explanation", "context": "Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9618"} {"prompt": "Explain this shell code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This shell code Jobs | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Jobs | The output 
is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9619"} {"prompt": "Explain this text code:\n\n3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883
7863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901", "completion": "This text code Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "language": "text", "type": "code-example_explanation", "context": "Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9620"} {"prompt": "Explain this text code:\n\n/controllers/job-backoff-limit-per-index-example.yaml", "completion": "This text code Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "text", "type": "code-example_explanation", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9621"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The Job fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9622"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The Job fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9623"} {"prompt": "Explain this sh code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This sh code Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "sh", "type": "code-example_explanation", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9624"} {"prompt": "Complete this yaml code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9625"} {"prompt": "Explain this yaml code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This yaml code Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9626"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9627"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9628"} {"prompt": "Explain this text code:\n\nspec.podFailurePolicy.rules[*].onExitCodes.containerName", "completion": "This text code Jobs | When you use the podFailurePolicy, and the Job fails due to the pod matching the rule with the FailJob action, then the Job controller triggers the Job termination process by adding the FailureTarget", "language": "text", "type": "code-example_explanation", "context": "Jobs | When you use the podFailurePolicy, and the Job fails due to the pod matching the rule with the FailJob action, then the Job controller triggers the Job termination process by adding the FailureTarget", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9629"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9630"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9631"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9632"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9633"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9634"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9635"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9636"} {"prompt": "Explain this 
yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9637"} {"prompt": "Explain this shell code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This shell code Jobs | Resume a suspended Job:", "language": "shell", "type": "code-example_explanation", "context": "Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9638"} {"prompt": "Explain this shell code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This shell code Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "shell", "type": "code-example_explanation", "context": "Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9639"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime 
field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9640"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9641"} {"prompt": "Complete this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------", "completion": " Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "language": "text", "type": "code-example", "context": "Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. 
In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9642"} {"prompt": "Explain this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "completion": "This text code Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "language": "text", "type": "code-example_explanation", "context": "Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9643"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9644"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9645"} {"prompt": "Explain this text code:\n\nbatch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002", "completion": "This text code Jobs | You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically.", "language": "text", "type": "code-example_explanation", "context": "Jobs | You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9646"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", 
"context": "Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9647"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9648"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9649"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9650"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9651"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9652"} {"prompt": "Complete this yaml code:\n\ncontrollers/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n 
command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9653"} {"prompt": "Explain this yaml code:\n\ncontrollers/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9654"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9655"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | You can run the example with this command:", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9656"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This bash code Jobs | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9657"} {"prompt": "Complete this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "text", "type": "code-example", "context": "Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... 
creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9658"} {"prompt": "Explain this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This text code Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "text", "type": "code-example_explanation", "context": "Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9659"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... 
\n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9660"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... 
\n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This yaml code Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9661"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This text code Jobs | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9662"} {"prompt": "Complete this yaml 
code:\n\n/controllers/job-backoff-limit-per-index-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:", "completion": " restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9663"} {"prompt": "Explain this yaml code:\n\n/controllers/job-backoff-limit-per-index-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9664"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9665"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9666"} {"prompt": "Explain this bash code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This bash code Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9667"} {"prompt": "Complete this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "text", "type": "code-example", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9668"} {"prompt": "Explain this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This text code Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "text", "type": "code-example_explanation", "context": "Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9669"} {"prompt": "Complete this yaml code:\n\n/controllers/job-pod-failure-policy-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5", "completion": " command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9670"} {"prompt": "Explain this yaml code:\n\n/controllers/job-pod-failure-policy-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9671"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9672"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9673"} {"prompt": "Complete this yaml code:\n\n/controllers/job-success-policy.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:", "completion": " spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9674"} {"prompt": "Explain this yaml code:\n\n/controllers/job-success-policy.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code Jobs | In the example above, both succeededIndexes and succeededCount 
have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9675"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9676"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9677"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9678"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9679"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9680"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9681"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9682"} {"prompt": "Explain this 
yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9683"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This bash code Jobs | Resume a suspended Job:", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9684"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This bash code Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "bash", "type": "kubectl-command_explanation", "context": "Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9685"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime 
field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9686"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9687"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9688"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9689"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. 
Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9690"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9691"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9692"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9693"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_completion_9694"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/job/", "id": "k8s_00374_code_explain_9695"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget NAME --selector=SELECTOR --min-available=N [--dry-run=server|client|none]", "completion": "This bash code kubectl create 
poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/", "id": "k8s_00375_code_explain_9696"} {"prompt": "Complete this text code:\n\n# Create a pod disruption budget named my-pdb that will select all pods with the app=rails label\n # and require at least one of them being available at any point in time\n kubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": " \n # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label\n # and require at least half of the pods selected to be available at any point in time\n kubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "language": "text", "type": "code-example", "context": "kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/", "id": "k8s_00375_code_completion_9697"} {"prompt": "Explain this text code:\n\n# Create a pod disruption budget named my-pdb that will select all pods with the app=rails label\n # and require at least one of them being available at any point in time\n kubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1\n \n # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label\n # and require at least half of the pods selected to be available at any point in time\n kubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "completion": "This text code kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/", "id": "k8s_00375_code_explain_9698"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget NAME --selector=SELECTOR --min-available=N [--dry-run=server|client|none]", "completion": "This bash code kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/", "id": "k8s_00375_code_explain_9699"} {"prompt": "Complete this text code:\n\n# Create a pod disruption budget named my-pdb that will select all pods with the app=rails label\n # and require at least one of them being available at any point in time\n kubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": " \n # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label\n # and require at least half of the pods selected to be available at any point in time\n kubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "language": "text", "type": "code-example", "context": "kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/", "id": "k8s_00375_code_completion_9700"} {"prompt": "Explain this text code:\n\n# Create a pod disruption budget named my-pdb that will select all pods with the app=rails label\n # and require at least one of them being available at any point in time\n kubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1\n \n # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label\n # and require at least half of the pods selected to be available at any point in time\n kubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "completion": "This text code kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create poddisruptionbudget | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_poddisruptionbudget/", "id": "k8s_00375_code_explain_9701"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\n<pod-name> 0/1 Init:1/2 0 7s", "completion": "This text code Debug Init Containers | See Understanding Pod status for more examples of status values and their meanings.", "language": "text", "type": "code-example_explanation", "context": "Debug Init Containers | See Understanding Pod status for more examples of status values and their meanings.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_explain_9702"} {"prompt": "Complete this text code:\n\nInit Containers:\n <init-container-1>:\n Container ID: ...\n ...\n State: Terminated\n Reason: Completed\n Exit Code: 0\n Started: ...\n Finished: ...\n Ready: True\n Restart Count: 0\n ...", "completion": " <init-container-2>:\n Container ID: ...\n ...\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1\n Started: ...\n Finished: ...\n Ready: False\n Restart Count: 3\n ...", "language": "text", "type": "code-example", "context": "Debug Init Containers | You can also access the Init Container statuses programmatically by reading the status.initContainerStatuses field on the Pod Spec:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_completion_9703"} {"prompt": "Explain this text code:\n\nInit Containers:\n <init-container-1>:\n Container ID: ...\n ...\n State: Terminated\n Reason: Completed\n Exit Code: 0\n Started: 
...\n Finished: ...\n Ready: True\n Restart Count: 0\n ...\n <init-container-2>:\n Container ID: ...\n ...\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1\n Started: ...\n Finished: ...\n Ready: False\n Restart Count: 3\n ...", "completion": "This text code Debug Init Containers | You can also access the Init Container statuses programmatically by reading the status.initContainerStatuses field on the Pod Spec:", "language": "text", "type": "code-example_explanation", "context": "Debug Init Containers | You can also access the Init Container statuses programmatically by reading the status.initContainerStatuses field on the Pod Spec:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_explain_9704"} {"prompt": "Explain this bash code:\n\nkubectl get pod nginx --template '{{.status.initContainerStatuses}}'", "completion": "This bash code Debug Init Containers | This command will return the same information as above in raw JSON.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Init Containers | This command will return the same information as above in raw JSON.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_explain_9705"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\n<pod-name> 0/1 Init:1/2 0 7s", "completion": "This text code Debug Init Containers | See Understanding Pod status for more examples of status values and their meanings.", "language": "text", "type": "code-example_explanation", "context": "Debug Init Containers | See Understanding Pod status for more examples of status values and their meanings.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_explain_9706"} {"prompt": "Complete this text code:\n\nInit Containers:\n <init-container-1>:\n Container ID: 
...\n ...\n State: Terminated\n Reason: Completed\n Exit Code: 0\n Started: ...\n Finished: ...\n Ready: True\n Restart Count: 0\n ...", "completion": " <init-container-2>:\n Container ID: ...\n ...\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1\n Started: ...\n Finished: ...\n Ready: False\n Restart Count: 3\n ...", "language": "text", "type": "code-example", "context": "Debug Init Containers | You can also access the Init Container statuses programmatically by reading the status.initContainerStatuses field on the Pod Spec:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_completion_9707"} {"prompt": "Explain this text code:\n\nInit Containers:\n <init-container-1>:\n Container ID: ...\n ...\n State: Terminated\n Reason: Completed\n Exit Code: 0\n Started: ...\n Finished: ...\n Ready: True\n Restart Count: 0\n ...\n <init-container-2>:\n Container ID: ...\n ...\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 1\n Started: ...\n Finished: ...\n Ready: False\n Restart Count: 3\n ...", "completion": "This text code Debug Init Containers | You can also access the Init Container statuses programmatically by reading the status.initContainerStatuses field on the Pod Spec:", "language": "text", "type": "code-example_explanation", "context": "Debug Init Containers | You can also access the Init Container statuses programmatically by reading the status.initContainerStatuses field on the Pod Spec:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_explain_9708"} {"prompt": "Explain this shell code:\n\nkubectl get pod nginx --template '{{.status.initContainerStatuses}}'", "completion": "This shell code Debug Init Containers | This command will return the same information as above in raw JSON.", "language": "shell", "type": 
"code-example_explanation", "context": "Debug Init Containers | This command will return the same information as above in raw JSON.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_explain_9709"} {"prompt": "Explain this bash code:\n\nkubectl get pod nginx --template '{{.status.initContainerStatuses}}'", "completion": "This bash code Debug Init Containers | This command will return the same information as above in raw JSON.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Init Containers | This command will return the same information as above in raw JSON.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/", "id": "k8s_00376_code_explain_9710"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9711"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9712"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9713"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "language": "yaml", "type": "pod-definition_explanation", "context": "Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9714"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9715"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9716"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. 
You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9717"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9718"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9719"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9720"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9721"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9722"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9723"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9724"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: networking.gke.io/load-balancer-type: \"Internal\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9725"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This text code Service | metadata: name: my-service annotations: networking.gke.io/load-balancer-type: \"Internal\"", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: networking.gke.io/load-balancer-type: \"Internal\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9726"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9727"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This text code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9728"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9729"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This text code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9730"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9731"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This text code Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9732"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9733"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This text code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9734"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9735"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This text code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9736"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This text code Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9737"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This text code Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9738"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9739"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This text code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9740"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9741"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", "context": "Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9742"} {"prompt": "Complete this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "text", "type": "code-example", 
"context": "Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9743"} {"prompt": "Explain this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This text code Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "text", "type": "code-example_explanation", "context": "Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9744"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "Service | Service is a top-level resource in the Kubernetes REST API. 
You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9745"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9746"} {"prompt": "Explain this bash code:\n\nkubectl port-forward service/<service-name> forwardedPort:servicePort", "completion": "This bash code Service | An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more information, see the ExternalName section.", "language": "bash", "type": "kubectl-command_explanation", "context": "Service | An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more information, see the ExternalName section.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9755"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9759"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9760"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9761"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9762"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This yaml code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9763"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9764"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This yaml code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "yaml", 
"type": "kubernetes-manifest_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9765"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9766"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This yaml code Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9767"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9768"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n 
service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This yaml code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9769"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9770"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This yaml code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9771"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": 
"https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9772"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This yaml code Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9773"} {"prompt": "Explain this yaml code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This yaml code Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9774"} {"prompt": "Explain this yaml code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This yaml code Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9775"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n 
service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9776"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This yaml code Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. 
You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9777"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9778"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", "context": "Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9779"} {"prompt": "Complete this shell code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "shell", "type": "code-example", 
"context": "Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9780"} {"prompt": "Explain this shell code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This shell code Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence", "language": "shell", "type": "code-example_explanation", "context": "Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9781"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "Service | Service is a top-level resource in the Kubernetes REST API. 
You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9782"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9783"} {"prompt": "Complete this yaml code:\n\nservice/simple-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9784"} {"prompt": "Explain this yaml code:\n\nservice/simple-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9785"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9786"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9787"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for deploying and evolving your Services", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9788"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for deploying and evolving your Services", "language": "yaml", "type": "pod-definition_explanation", "context": "Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for deploying and evolving your Services", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9789"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9790"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9791"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9792"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. 
Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9793"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9794"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9795"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9796"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9797"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9798"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9799"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9800"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This text code Service | the networking.gke.io/load-balancer-type: \"Internal\" annotation asks GKE to provision an internal load balancer for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9801"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9802"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This text code Service | the service.beta.kubernetes.io/aws-load-balancer-internal: \"true\" annotation asks AWS to provision an internal load balancer for the Service", "language": "text", "type": 
"code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9803"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9804"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This text code Service | the service.beta.kubernetes.io/azure-load-balancer-internal: \"true\" annotation asks Azure to provision an internal load balancer for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9805"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9806"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: 
\"private\"", "completion": "This text code Service | the service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\" annotation asks IBM Cloud to provision an internal (private) load balancer for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9807"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9808"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This text code Service | the service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\" annotation asks OpenStack to provision an internal load balancer for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9809"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example", "context": "Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": 
"k8s_00377_code_completion_9810"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This text code Service | the service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\" annotation asks Baidu Cloud CCE to provision an internal load balancer inside the VPC for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9811"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This text code Service | the service.kubernetes.io/qcloud-loadbalancer-internal-subnetid annotation asks Tencent Cloud to provision an internal load balancer in the specified subnet for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9812"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This text code Service | the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\" annotation asks Alibaba Cloud to provision an internal (intranet) load balancer for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9813"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": 
"code-example", "context": "Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9814"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This text code Service | the service.beta.kubernetes.io/oci-load-balancer-internal: true annotation asks Oracle Cloud Infrastructure (OCI) to provision an internal load balancer for the Service", "language": "text", "type": "code-example_explanation", "context": "Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9815"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9816"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that 
string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", "context": "Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9817"} {"prompt": "Complete this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "text", "type": "code-example", "context": "Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9818"} {"prompt": "Explain this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This text code Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "text", "type": "code-example_explanation", "context": "Service | When you have a Pod that needs to access a Service, and you are using the environment variable method 
to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9819"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_completion_9820"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "Service | Service is a top-level resource in the Kubernetes REST API. 
You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/services-networking/service/", "id": "k8s_00377_code_explain_9821"} {"prompt": "Explain this bash code:\n\nkubectl config rename-context CONTEXT_NAME NEW_NAME", "completion": "This bash code kubectl config rename-context | help for rename-context", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl config rename-context | help for rename-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/", "id": "k8s_00378_code_explain_9822"} {"prompt": "Explain this text code:\n\n# Rename the context 'old-name' to 'new-name' in your kubeconfig file\n kubectl config rename-context old-name new-name", "completion": "This text code kubectl config rename-context | help for rename-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config rename-context | help for rename-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/", "id": "k8s_00378_code_explain_9823"} {"prompt": "Explain this bash code:\n\nkubectl config rename-context CONTEXT_NAME NEW_NAME", "completion": "This bash code kubectl config rename-context | help for rename-context", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl config rename-context | help for rename-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/", "id": "k8s_00378_code_explain_9824"} {"prompt": "Explain this text code:\n\n# Rename the context 'old-name' to 'new-name' in your kubeconfig file\n kubectl config rename-context old-name new-name", "completion": "This text code kubectl config rename-context | help for rename-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config rename-context | help for rename-context", 
"source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_rename-context/", "id": "k8s_00378_code_explain_9825"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_completion_9826"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - 
key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9827"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9828"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_completion_9829"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": "code-example_explanation", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9830"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n 
effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_completion_9831"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9832"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9833"} {"prompt": "Complete this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_completion_9834"} {"prompt": "Explain this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This yaml code DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": 
"https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9835"} {"prompt": "Complete this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_completion_9836"} {"prompt": "Explain this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not 
run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9837"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # 
priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_completion_9838"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9839"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n 
operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_completion_9840"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": "code-example_explanation", "context": "DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/daemonset", "id": "k8s_00382_code_explain_9841"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob NAME --image=image --schedule='0/5 * * * ?' -- [COMMAND] [args...] [flags]", "completion": "This bash code kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/", "id": "k8s_00384_code_explain_9842"} {"prompt": "Complete this text code:\n\n# Create a cron job\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": " \n # Create a cron job with a command\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "language": "text", "type": "code-example", "context": "kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/", "id": "k8s_00384_code_completion_9843"} {"prompt": "Explain this text code:\n\n# Create a cron job\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"\n \n # Create a cron job with a command\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "completion": "This text code kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/", "id": "k8s_00384_code_explain_9844"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob NAME --image=image --schedule='0/5 * * * ?' -- [COMMAND] [args...] [flags]", "completion": "This bash code kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/", "id": "k8s_00384_code_explain_9845"} {"prompt": "Complete this text code:\n\n# Create a cron job\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": " \n # Create a cron job with a command\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "language": "text", "type": "code-example", "context": "kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/", "id": "k8s_00384_code_completion_9846"} {"prompt": "Explain this text code:\n\n# Create a cron job\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"\n \n # Create a cron job with a command\n kubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "completion": "This text code kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create cronjob | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_cronjob/", "id": "k8s_00384_code_explain_9847"} {"prompt": "Explain this text code:\n\n--runtime-config=\"coordination.k8s.io/v1beta1=true\"", "completion": "This text code Coordinated Leader Election | Provided that you have enabled the CoordinatedLeaderElection feature gate andhave the coordination.k8s.io/v1beta1 API group enabled, compatible control planecomponents automatically use the LeaseCandi", "language": "text", "type": "code-example_explanation", "context": "Coordinated Leader Election | Provided that you have enabled the CoordinatedLeaderElection feature gate andhave the coordination.k8s.io/v1beta1 API group enabled, compatible control planecomponents automatically use the LeaseCandi", "source": "https://kubernetes.io/docs/concepts/cluster-administration/coordinated-leader-election/", "id": "k8s_00385_code_explain_9848"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"", "completion": " volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - secret:\n name: pass", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_completion_9849"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n 
readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - secret:\n name: pass", "completion": "This yaml code Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9850"} {"prompt": "Complete this text code:\n\n# Create files containing the username and password:\necho -n \"admin\" > ./username.txt\necho -n \"1f2d1e2e67df\" > ./password.txt", "completion": "\n# Package these files into secrets:\nkubectl create secret generic user --from-file=./username.txt\nkubectl create secret generic pass --from-file=./password.txt", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_completion_9851"} {"prompt": "Explain this text code:\n\n# Create files containing the username and password:\necho -n \"admin\" > ./username.txt\necho -n \"1f2d1e2e67df\" > ./password.txt\n\n# Package these files into secrets:\nkubectl create secret generic user --from-file=./username.txt\nkubectl create secret generic pass --from-file=./password.txt", "completion": "This text code Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9852"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/projected.yaml", 
"completion": "This bash code Configure a Pod to Use a Projected Volume for Storage | Verify that the Pod's container is running, and then watch for changes to the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Verify that the Pod's container is running, and then watch for changes to the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9853"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ntest-projected-volume 1/1 Running 0 14s", "completion": "This text code Configure a Pod to Use a Projected Volume for Storage | In another terminal, get a shell to the running container:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | In another terminal, get a shell to the running container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9854"} {"prompt": "Explain this bash code:\n\nkubectl delete pod test-projected-volume\nkubectl delete secret user pass", "completion": "This bash code Configure a Pod to Use a Projected Volume for Storage |", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9855"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"", "completion": " volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - 
secret:\n name: pass", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_completion_9856"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - secret:\n name: pass", "completion": "This yaml code Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9857"} {"prompt": "Complete this shell code:\n\n# Create files containing the username and password:\necho -n \"admin\" > ./username.txt\necho -n \"1f2d1e2e67df\" > ./password.txt", "completion": "\n# Package these files into secrets:\nkubectl create secret generic user --from-file=./username.txt\nkubectl create secret generic pass --from-file=./password.txt", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_completion_9858"} {"prompt": "Explain this shell code:\n\n# Create files containing the username and password:\necho -n \"admin\" > ./username.txt\necho -n \"1f2d1e2e67df\" > ./password.txt\n\n# Package these files into secrets:\nkubectl create secret generic user 
--from-file=./username.txt\nkubectl create secret generic pass --from-file=./password.txt", "completion": "This shell code Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9859"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/projected.yaml", "completion": "This shell code Configure a Pod to Use a Projected Volume for Storage | Verify that the Pod's container is running, and then watch for changes to the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Verify that the Pod's container is running, and then watch for changes to the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9860"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ntest-projected-volume 1/1 Running 0 14s", "completion": "This text code Configure a Pod to Use a Projected Volume for Storage | In another terminal, get a shell to the running container:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | In another terminal, get a shell to the running container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9861"} {"prompt": "Explain this shell code:\n\nkubectl delete pod test-projected-volume\nkubectl delete secret user pass", "completion": "This shell code Configure a Pod to Use a Projected Volume for Storage |", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to 
Use a Projected Volume for Storage |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9862"} {"prompt": "Complete this yaml code:\n\npods/storage/projected.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"", "completion": " volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - secret:\n name: pass", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_completion_9863"} {"prompt": "Explain this yaml code:\n\npods/storage/projected.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - secret:\n name: pass", "completion": "This yaml code Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9864"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"", "completion": " 
volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - secret:\n name: pass", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_completion_9865"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-projected-volume\nspec:\n containers:\n - name: test-projected-volume\n image: busybox:1.28\n args:\n - sleep\n - \"86400\"\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: user\n - secret:\n name: pass", "completion": "This yaml code Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Secrets:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9866"} {"prompt": "Complete this text code:\n\n# Create files containing the username and password:\necho -n \"admin\" > ./username.txt\necho -n \"1f2d1e2e67df\" > ./password.txt", "completion": "\n# Package these files into secrets:\nkubectl create secret generic user --from-file=./username.txt\nkubectl create secret generic pass --from-file=./password.txt", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_completion_9867"} {"prompt": "Explain this text code:\n\n# Create files containing the username and 
password:\necho -n \"admin\" > ./username.txt\necho -n \"1f2d1e2e67df\" > ./password.txt\n\n# Package these files into secrets:\nkubectl create secret generic user --from-file=./username.txt\nkubectl create secret generic pass --from-file=./password.txt", "completion": "This text code Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9868"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/projected.yaml", "completion": "This bash code Configure a Pod to Use a Projected Volume for Storage | Verify that the Pod's container is running, and then watch for changes to the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage | Verify that the Pod's container is running, and then watch for changes to the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9869"} {"prompt": "Explain this bash code:\n\nkubectl delete pod test-projected-volume\nkubectl delete secret user pass", "completion": "This bash code Configure a Pod to Use a Projected Volume for Storage |", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a Projected Volume for Storage |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/", "id": "k8s_00386_code_explain_9870"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", 
"context": "Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/storage-limits/", "id": "k8s_00387_code_completion_9871"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/storage-limits/", "id": "k8s_00387_code_explain_9872"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/storage-limits/", "id": "k8s_00387_code_completion_9873"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", 
"completion": "This yaml code Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/storage-limits/", "id": "k8s_00387_code_explain_9874"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/storage/storage-limits/", "id": "k8s_00387_code_completion_9875"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in 
nodeAllocatableUpdatePeriodSeconds", "source": "https://kubernetes.io/docs/concepts/storage/storage-limits/", "id": "k8s_00387_code_explain_9876"} {"prompt": "Complete this text code:\n\n--requestheader-client-ca-file=<path to aggregator CA cert>\n--requestheader-allowed-names=front-proxy-client\n--requestheader-extra-headers-prefix=X-Remote-Extra-", "completion": "--requestheader-group-headers=X-Remote-Group\n--requestheader-username-headers=X-Remote-User\n--proxy-client-cert-file=<path to aggregator proxy cert>\n--proxy-client-key-file=<path to aggregator proxy key>", "language": "text", "type": "code-example", "context": "Configure the Aggregation Layer | The Kubernetes apiserver has two client CA options:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_completion_9877"} {"prompt": "Explain this text code:\n\n--requestheader-client-ca-file=<path to aggregator CA cert>\n--requestheader-allowed-names=front-proxy-client\n--requestheader-extra-headers-prefix=X-Remote-Extra-\n--requestheader-group-headers=X-Remote-Group\n--requestheader-username-headers=X-Remote-User\n--proxy-client-cert-file=<path to aggregator proxy cert>\n--proxy-client-key-file=<path to aggregator proxy key>", "completion": "This text code Configure the Aggregation Layer | The Kubernetes apiserver has two client CA options:", "language": "text", "type": "code-example_explanation", "context": "Configure the Aggregation Layer | The Kubernetes apiserver has two client CA options:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_explain_9878"} {"prompt": "Complete this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\nmetadata:\n name: <name of the registration object>\nspec:\n group: <API group name this extension apiserver hosts>", "completion": " version: <API version this extension apiserver hosts>\n groupPriorityMinimum: <priority this
APIService for this group, see API documentation>\n versionPriority: <prioritizes ordering of this version within a group, see API documentation>\n service:\n namespace: <namespace of the extension apiserver service>\n name: <name of the extension apiserver service>\n caBundle: <pem encoded ca cert that signs the server cert used by the webhook>", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure the Aggregation Layer | The name of an APIService object must be a valid path segment name.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_completion_9879"} {"prompt": "Explain this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\nmetadata:\n name: <name of the registration object>\nspec:\n group: <API group name this extension apiserver hosts>\n version: <API version this extension apiserver hosts>\n groupPriorityMinimum: <priority this APIService for this group, see API documentation>\n versionPriority: <prioritizes ordering of this version within a group, see API documentation>\n service:\n namespace: <namespace of the extension apiserver service>\n name: <name of the extension apiserver service>\n caBundle: <pem encoded ca cert that signs the server cert used by the webhook>", "completion": "This yaml code Configure the Aggregation Layer | The name of an APIService object must be a valid path segment name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure the Aggregation Layer | The name of an APIService object must be a valid path segment name.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_explain_9880"} {"prompt": "Complete this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\n...\nspec:\n ...", "completion": " service:\n namespace: my-service-namespace\n name: my-service-name\n port: 1234\n caBundle: 
\"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure the Aggregation Layer |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_completion_9881"} {"prompt": "Explain this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\n...\nspec:\n ...\n service:\n namespace: my-service-namespace\n name: my-service-name\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Configure the Aggregation Layer |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure the Aggregation Layer |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_explain_9882"} {"prompt": "Complete this text code:\n\n--requestheader-client-ca-file=<path to aggregator CA cert>\n--requestheader-allowed-names=front-proxy-client\n--requestheader-extra-headers-prefix=X-Remote-Extra-", "completion": "--requestheader-group-headers=X-Remote-Group\n--requestheader-username-headers=X-Remote-User\n--proxy-client-cert-file=<path to aggregator proxy cert>\n--proxy-client-key-file=<path to aggregator proxy key>", "language": "text", "type": "code-example", "context": "Configure the Aggregation Layer | The Kubernetes apiserver has two client CA options:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_completion_9883"} {"prompt": "Explain this text code:\n\n--requestheader-client-ca-file=<path to aggregator CA cert>\n--requestheader-allowed-names=front-proxy-client\n--requestheader-extra-headers-prefix=X-Remote-Extra-\n--requestheader-group-headers=X-Remote-Group\n--requestheader-username-headers=X-Remote-User\n--proxy-client-cert-file=<path to aggregator proxy cert>\n--proxy-client-key-file=<path to aggregator proxy key>", "completion": 
"This text code Configure the Aggregation Layer | The Kubernetes apiserver has two client CA options:", "language": "text", "type": "code-example_explanation", "context": "Configure the Aggregation Layer | The Kubernetes apiserver has two client CA options:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_explain_9884"} {"prompt": "Complete this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\nmetadata:\n name: <name of the registration object>\nspec:\n group: <API group name this extension apiserver hosts>", "completion": " version: <API version this extension apiserver hosts>\n groupPriorityMinimum: <priority this APIService for this group, see API documentation>\n versionPriority: <prioritizes ordering of this version within a group, see API documentation>\n service:\n namespace: <namespace of the extension apiserver service>\n name: <name of the extension apiserver service>\n caBundle: <pem encoded ca cert that signs the server cert used by the webhook>", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure the Aggregation Layer | The name of an APIService object must be a valid path segment name.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_completion_9885"} {"prompt": "Explain this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\nmetadata:\n name: <name of the registration object>\nspec:\n group: <API group name this extension apiserver hosts>\n version: <API version this extension apiserver hosts>\n groupPriorityMinimum: <priority this APIService for this group, see API documentation>\n versionPriority: <prioritizes ordering of this version within a group, see API documentation>\n service:\n namespace: <namespace of the extension apiserver service>\n name: <name of the extension apiserver service>\n caBundle: <pem encoded ca cert that signs the server cert 
used by the webhook>", "completion": "This yaml code Configure the Aggregation Layer | The name of an APIService object must be a valid path segment name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure the Aggregation Layer | The name of an APIService object must be a valid path segment name.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_explain_9886"} {"prompt": "Complete this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\n...\nspec:\n ...", "completion": " service:\n namespace: my-service-namespace\n name: my-service-name\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure the Aggregation Layer |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_completion_9887"} {"prompt": "Explain this yaml code:\n\napiVersion: apiregistration.k8s.io/v1\nkind: APIService\n...\nspec:\n ...\n service:\n namespace: my-service-namespace\n name: my-service-name\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Configure the Aggregation Layer |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure the Aggregation Layer |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/", "id": "k8s_00388_code_explain_9888"} {"prompt": "Explain this bash code:\n\nkubectl create -f docker-registry.yaml --edit -o json", "completion": "This bash code create | Edit the data in docker-registry.yaml in JSON then create the resource using the edited data", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Edit the data in docker-registry.yaml in JSON then create the resource using the edited data", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9893"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code create | Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9894"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9895"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a cluster role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9896"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a cluster role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9897"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9898"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code create | Create a cluster role.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster
role.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9899"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9900"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config based on folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config based on folder bar", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9901"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt", "completion": "This bash code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9902"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2", "completion": "This bash code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9903"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config from the key=value pairs in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9904"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a new config map named my-config from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from an env file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9905"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": "This bash code create | Create a cron job", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9906"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "completion": "This bash code create | Create a cron job with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with the specified
name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9907"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox -- date", "completion": "This bash code create | Create a deployment with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment with a command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9908"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=nginx --replicas=3", "completion": "This bash code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9909"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox --port=5701", "completion": "This bash code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9910"} {"prompt": "Explain this bash code:\n\nkubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"", "completion": "This bash code create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc1:8080 with a tls secret \"my-cert\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc1:8080 with a tls secret \"my-cert\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9911"} {"prompt": "Explain this bash code:\n\nkubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"", "completion": "This bash code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9912"} {"prompt": "Explain this bash code:\n\nkubectl create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n--annotation ingress.annotation1=foo \\\n--annotation ingress.annotation2=bla", "completion": "This bash code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9913"} {"prompt": "Explain this bash code:\n\nkubectl create ingress multipath --class=default \\\n--rule=\"foo.com/=svc:port\" \\\n--rule=\"foo.com/admin/=svcadmin:portadmin\"", "completion": "This bash code create | Create an ingress with the same host and multiple paths", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9914"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingress1 --class=default \\\n--rule=\"foo.com/path*=svc:8080\" \\\n--rule=\"bar.com/admin*=svc2:http\"", "completion": "This bash code create |
Create an ingress with multiple hosts and the pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9915"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingtls --class=default \\\n--rule=\"foo.com/=svc:https,tls\" \\\n--rule=\"foo.com/path/subpath*=othersvc:8080\"", "completion": "This bash code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9916"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingsecret --class=default \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9917"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingdefault --class=default \\\n--default-backend=defaultsvc:http \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with a default backend", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with a default backend", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9918"} {"prompt": "Explain this bash
code:\n\nkubectl create job test-job --from=cronjob/a-cronjob", "completion": "This bash code create | Create a job from a cron job named \"a-cronjob\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a job from a cron job named \"a-cronjob\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9919"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9920"} {"prompt": "Explain this bash code:\n\nkubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9921"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\"", "completion": "This bash code create | Create a priority class named high-priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9922"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"", "completion": "This bash code create | Create a priority class named default-priority that is considered as the global default priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9923"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This bash code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9924"} {"prompt": "Explain this bash code:\n\nkubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": "This bash code create | Create a new resource quota named my-quota", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named my-quota", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9925"} {"prompt": "Explain this bash code:\n\nkubectl create quota best-effort
--hard=pods=100 --scopes=BestEffort", "completion": "This bash code create | Create a new resource quota named best-effort", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named best-effort", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9926"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9927"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9928"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9929"} {"prompt": "Explain this bash code:\n\nkubectl create role foo
--verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9930"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9931"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": "This bash code create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly", "language": "bash", "type": "kubectl-command_explanation", "context": "create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9932"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "completion": "This bash code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9933"} {"prompt": "Explain this bash code:\n\nkubectl
create secret generic my-secret --from-file=path/to/bar", "completion": "This bash code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9934"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub", "completion": "This bash code create | Create a new secret named my-secret with specified keys instead of names on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with specified keys instead of names on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9935"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret", "completion": "This bash code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9936"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret", "completion": "This bash code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9937"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a new secret named my-secret from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from an env file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9938"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "completion": "This bash code create | Create a new TLS secret named tls-secret with the given key pair", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9939"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --tcp=5678:8080", "completion": "This bash code create | Create a new ClusterIP service named my-cs", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9940"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This bash code create | Create a new ClusterIP service named my-cs (in headless mode)", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs (in headless mode)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id":
"k8s_00389_code_explain_9941"} {"prompt": "Explain this bash code:\n\nkubectl create service externalname my-ns --external-name bar.com", "completion": "This bash code create | Create a new ExternalName service named my-ns", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ExternalName service named my-ns", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9942"} {"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This bash code create | Create a new LoadBalancer service named my-lbs", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new LoadBalancer service named my-lbs", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9943"} {"prompt": "Explain this bash code:\n\nkubectl create service nodeport my-ns --tcp=5678:8080", "completion": "This bash code create | Create a new NodePort service named my-ns", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new NodePort service named my-ns", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9944"} {"prompt": "Explain this bash code:\n\nkubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}", "completion": "This bash code get | Return only the phase value of the specified pod", "language": "bash", "type": "kubectl-command_explanation", "context": "get | Return only the phase value of the specified pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9945"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image", "completion":
"This bash code get | List resource information in custom columns", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List resource information in custom columns", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9946"} {"prompt": "Explain this bash code:\n\nkubectl get rc/web service/frontend pods/web-pod-13je7", "completion": "This bash code get | List one or more resources by their type and names", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List one or more resources by their type and names", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9947"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --port=5701", "completion": "This bash code run | Start a hazelcast pod and let the container expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and let the container expose port 5701", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9948"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"", "completion": "This bash code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9949"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast
--image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"", "completion": "This bash code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9950"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }'", "completion": "This bash code run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9951"} {"prompt": "Explain this bash code:\n\nkubectl run -i -t busybox --image=busybox --restart=Never", "completion": "This bash code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9952"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx -- <arg1> <arg2> ...
<argN>", "completion": "This bash code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9953"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>", "completion": "This bash code run | Start the nginx pod using a different command and custom arguments", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using a different command and custom arguments", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9954"} {"prompt": "Explain this bash code:\n\nkubectl expose rc nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9955"} {"prompt": "Explain this bash code:\n\nkubectl expose -f nginx-controller.yaml --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9956"} {"prompt": "Explain this bash code:\n\nkubectl expose pod valid-pod --port=444 --name=frontend", "completion": "This bash code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9957"} {"prompt": "Explain this bash code:\n\nkubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https", "completion": "This bash code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9958"} {"prompt": "Explain this bash code:\n\nkubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream", "completion": "This bash code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9959"} {"prompt": "Explain this bash code:\n\nkubectl expose rs nginx --port=80 
--target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9960"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9961"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml -l app=nginx", "completion": "This bash code apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9962"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap", "completion": "This bash code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9963"} {"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | Edit the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9964"} {"prompt": "Explain this bash code:\n\nkubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "completion": "This bash code apply | Set the last-applied-configuration of a resource to match the contents of a file, creating the annotation if it does not already exist", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file, creating the annotation if it does not already exist", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9965"} {"prompt": "Explain this bash code:\n\nkubectl apply view-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | View the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | View the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9966"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend'; if the same annotation is set multiple times, only the last value will be applied", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend'; if the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9967"} {"prompt": "Explain this bash code:\n\nkubectl annotate -f pod.json description='my frontend'", "completion": "This bash code annotate | Update a pod identified by type and name in \"pod.json\"", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9968"} {"prompt": "Explain this bash code:\n\nkubectl annotate --overwrite pods foo description='my frontend running nginx'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9969"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods --all description='my frontend running nginx'", "completion": "This bash code annotate | Update all pods in the namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update all pods in the namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9970"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend running nginx' --resource-version=1", "completion": "This bash code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9971"} {"prompt": "Explain this bash code:\n\nkubectl debug --image=myproj/debug-tools -c debugger mypod", "completion": "This bash code debug | Create a debug container named debugger using a custom automated debugging image", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9972"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": "This bash code debug | Create a copy of mypod adding a debug container and attach to it", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9973"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh", "completion": "This bash code debug | Create a copy of mypod changing the command of mycontainer", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9974"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod --copy-to=my-debugger --set-image=*=busybox", "completion": "This bash code debug | Create a copy of mypod changing all container images to busybox", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9975"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug", "completion": "This bash code debug | Create a copy of mypod adding a debug container and changing container images", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9976"} {"prompt": "Explain this text code:\n\nKUBE_EDITOR=\"nano\" kubectl edit svc/docker-registry", "completion": "This text code edit | Edit the service 'docker-registry' using an alternative editor", "language": "text", "type": "code-example_explanation", "context": "edit | Edit the service 'docker-registry' using an alternative editor", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9977"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/mydeployment -o yaml --save-config", "completion": "This bash code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "bash", "type": "kubectl-command_explanation", "context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9978"} {"prompt": "Explain this bash code:\n\nkubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "completion": "This bash code kustomize | Build a set of KRM resources from a git repository URL with a path suffix, checked out at ref v1.0.6", "language": "bash", "type": "kubectl-command_explanation", "context": "kustomize | Build a set of KRM resources from a git repository URL with a path suffix, checked out at ref v1.0.6", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9979"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite pods foo status=unhealthy", "completion": "This bash code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9980"} {"prompt": "Explain this bash code:\n\nkubectl label pods foo status=unhealthy --resource-version=1", "completion": "This bash code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9981"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9982"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9983"} {"prompt": "Explain this bash code:\n\nkubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9984"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'", "completion": "This bash code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9985"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'", "completion": "This bash code patch | Update a container's image using a JSON patch with positional arrays", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_9986"} {"prompt": "Explain this bash code:\n\nkubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -", "completion": "This bash code replace | Update a single-container pod's image version (tag) to v4", "language": "bash", "type": "kubectl-command_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9987"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo --dry-run=server deployment/abc", "completion": "This bash code rollout | Roll back to the previous deployment with a server-side dry run", "language": "bash", "type": "kubectl-command_explanation", "context": "rollout | Roll back to the previous deployment with a server-side dry run", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9988"} {"prompt": "Explain this bash code:\n\nkubectl scale --current-replicas=2 --replicas=3 deployment/mysql", "completion": "This bash code scale | If the deployment named mysql's current size is 2, scale mysql to 3", "language": "bash", "type": "kubectl-command_explanation", "context": "scale | If the deployment named mysql's current size is 2, scale mysql to 3", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9989"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/registry STORAGE_DIR=/local", "completion": "This bash code set | Update deployment 'registry' with a new environment variable", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update deployment 'registry' with a new environment variable", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9990"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml", "completion": "This bash code set | Output the modified deployment in YAML, without altering the object on the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Output the modified deployment in YAML, without altering the object on the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9991"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=secret/mysecret deployment/myapp", "completion": "This bash code set | Import environment from a secret", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a secret", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9992"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp", "completion": "This bash code set | Import environment from a config map with a prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a config map with a prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9993"} {"prompt": "Explain this bash code:\n\nkubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp", "completion": "This bash code set | Import specific keys from a config map", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import specific keys from a config map", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9994"} {"prompt": "Explain this bash code:\n\nkubectl set env deployments --all --containers=\"c1\" ENV-", "completion": "This bash code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9995"} {"prompt": "Explain this text code:\n\nenv | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code set | Set some of the local shell environment into a deployment config on the server", "language": "text", "type": "code-example_explanation", "context": "set | Set some of the local shell environment into a deployment config on the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9996"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1", "completion": "This bash code set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9997"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments,rc nginx=nginx:1.9.1 --all", "completion": "This bash code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9998"} {"prompt": "Explain this bash code:\n\nkubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updating the nginx container image from a local file, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updating the nginx container image from a local file, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_9999"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code set | Set a deployment's nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployment's nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10000"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": "This bash code set | Set the resource request and limits for all containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10001"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0", "completion": "This bash code set | Remove the resource requests for resources on containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the resource requests for resources on containers in nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10002"} {"prompt": "Explain this bash code:\n\nkubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updating nginx container limits from a local file, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updating nginx container limits from a local file, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10003"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\nkubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This bash code set | Set the labels and selector before creating a deployment/service pair", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the labels and selector before creating a deployment/service pair", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10004"} {"prompt": "Explain this bash code:\n\nkubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": "This bash code set | Set deployment nginx-deployment's service account to serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set deployment nginx-deployment's service account to serviceaccount1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10005"} {"prompt": "Explain this bash code:\n\nkubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10006"} {"prompt": "Explain this bash code:\n\nkubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1", "completion": "This bash code set | Update a cluster role binding for serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a cluster role binding for serviceaccount1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10007"} {"prompt": "Explain this bash code:\n\nkubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1", "completion": "This bash code set | Update a role binding for user1, user2, and group1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a role binding for user1, user2, and group1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10008"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updating rolebinding subjects from a local file, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updating rolebinding subjects from a local file, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10009"} {"prompt": "Explain this bash code:\n\nkubectl wait --for=condition=Ready=false pod/busybox1", "completion": "This bash code wait | Wait for the status condition \"Ready\" of pod \"busybox1\" to become false; the default condition value is true, other target values can be given after an equal delimiter", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the status condition \"Ready\" of pod \"busybox1\" to become false; the default condition value is true, other target values can be given after an equal delimiter", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10010"} {"prompt": "Explain this bash code:\n\nkubectl delete pod/busybox1\nkubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This bash code wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10011"} {"prompt": "Explain this text code:\n\ntar cf - /tmp/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10012"} {"prompt": "Explain this bash code:\n\nkubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10013"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a specific container", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a specific container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10014"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10015"} {"prompt": "Explain this bash code:\n\nkubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "completion": "This bash code cp | Copy files and directories to and from containers.", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy files and directories to and from containers.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10016"} {"prompt": "Explain this bash code:\n\nkubectl describe nodes kubernetes-node-emt8.c.myproject.internal", "completion": "This bash code describe | Describe a pod", "language": "bash", "type": "kubectl-command_explanation", "context": "describe | Describe a pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10017"} {"prompt": "Explain this bash code:\n\nkubectl exec mypod -c ruby-container -i -t -- bash -il", "completion": "This bash code exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (", "language": "bash", "type": "kubectl-command_explanation", "context": "exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. 
-i), # you must use two dashes (", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10018"} {"prompt": "Explain this bash code:\n\nkubectl logs --insecure-skip-tls-verify-backend nginx", "completion": "This bash code logs | Return snapshot logs from first container of a job named hello", "language": "bash", "type": "kubectl-command_explanation", "context": "logs | Return snapshot logs from first container of a job named hello", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10019"} {"prompt": "Explain this bash code:\n\nkubectl port-forward deployment/mydeployment 5000 6000", "completion": "This bash code port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10020"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10021"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "language": "bash", 
"type": "kubectl-command_explanation", "context": "port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10022"} {"prompt": "Explain this bash code:\n\nkubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/", "completion": "This bash code proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "language": "bash", "type": "kubectl-command_explanation", "context": "proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10023"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump all namespaces to stdout", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump all namespaces to stdout", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10024"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. If you sp", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. 
If you sp", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10025"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10026"} {"prompt": "Explain this bash code:\n\nkubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule", "completion": "This bash code taint | Add to node 'foo' a taint with key 'bar' and no value", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Add to node 'foo' a taint with key 'bar' and no value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10027"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl", "completion": "This bash code completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion co", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the kubectl completion co", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10028"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > ~/.kube/completion.bash.inc\nprintf \"", "completion": "This bash code completion | Kubectl shell completion", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Kubectl shell completion", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10029"} {"prompt": "Explain this text code:\n\nsource '$HOME/.kube/completion.bash.inc'\n\" >> $HOME/.bash_profile\nsource $HOME/.bash_profile", "completion": "This text code completion | Load the kubectl completion code for zsh[1] into the current shell", "language": "text", "type": "code-example_explanation", "context": "completion | Load the kubectl completion code for zsh[1] into the current shell", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10030"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.server https://1.2.3.4", "completion": "This bash code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10031"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)", "completion": "This bash code config | Set the cluster field in the my-context context to my-cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10032"} {"prompt": "Explain this bash code:\n\nkubectl config set contexts.my-context.cluster my-cluster", "completion": "This bash code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10033"} {"prompt": "Explain this bash code:\n\nkubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "completion": "This bash code config | Set an individual value in a kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set an individual value in a kubeconfig file.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10034"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --server=https://1.2.3.4", "completion": "This bash code config | Embed certificate authority data for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10035"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt", "completion": "This bash code config | Disable cert checking for the dev cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Disable cert checking for the dev cluster entry", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10036"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --insecure-skip-tls-verify=true", "completion": "This bash code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10037"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --tls-server-name=my-cluster-name", "completion": "This bash code config | Set a cluster entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a cluster entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10038"} {"prompt": "Explain this bash code:\n\nkubectl config set-context gce --user=cluster-admin", "completion": "This bash code config | Set a context entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a context entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10039"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key", "completion": "This bash code config | Set basic auth for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set basic auth for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10040"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin 
--username=admin --password=uXFGweU9l35qcif", "completion": "This bash code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10041"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true", "completion": "This bash code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10042"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=gcp", "completion": "This bash code config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10043"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar", "completion": "This bash code config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for 
the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10044"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-", "completion": "This bash code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10045"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1", "completion": "This bash code config | Define new exec auth plugin args for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Define new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10046"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2", "completion": "This bash code config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10047"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2", "completion": "This bash code config | Remove exec auth plugin environment variables for the \"cluster-admin\" 
entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10048"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=var-to-remove-", "completion": "This bash code config | Set a user entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a user entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10049"} {"prompt": "Explain this bash code:\n\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This bash code config | Display merged kubeconfig settings or a specified kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Display merged kubeconfig settings or a specified kubeconfig file.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10050"} {"prompt": "Explain this bash code:\n\nkubectl create -f docker-registry.yaml --edit -o json", "completion": "This bash code create | Create a resource from a file or from stdin.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a resource from a file or from stdin.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10051"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"pod-reader\" 
with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10052"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a cluster role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10053"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a cluster role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10054"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10055"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code create | Create a cluster role name \"monitoring\" with AggregationRule specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster 
role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10056"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code create | Create a cluster role.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10057"} {"prompt": "Explain this text code:\n\n$ kubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This text code create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10058"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a cluster role binding for a particular cluster role.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role binding for a particular cluster role.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10059"} {"prompt": "Explain this text code:\n\n$ kubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This text 
code create | Create a new config map named my-config based on folder bar", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config based on folder bar", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10060"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10061"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt", "completion": "This bash code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10062"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2", "completion": "This bash code create | Create a new config map named my-config from the key=value pairs in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10063"} {"prompt": "Explain this bash 
code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from an env file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10064"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a config map based on a file, directory, or specified literal value.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a config map based on a file, directory, or specified literal value.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10065"} {"prompt": "Explain this text code:\n\n$ kubectl create configmap NAME [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]", "completion": "This text code create | Create a cron job", "language": "text", "type": "code-example_explanation", "context": "create | Create a cron job", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10066"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": "This bash code create | Create a cron job with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with a command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10067"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "completion": "This bash code create | Create a cron job with the specified 
name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10068"} {"prompt": "Explain this text code:\n\n$ kubectl create cronjob NAME --image=image --schedule='0/5 * * * ?' -- [COMMAND] [args...]", "completion": "This text code create | Create a deployment named my-dep that runs the busybox image", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10069"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox -- date", "completion": "This bash code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10070"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=nginx --replicas=3", "completion": "This bash code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10071"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox --port=5701", "completion": "This bash code create | Create a deployment with the specified name.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "create | Create a deployment with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10072"} {"prompt": "Explain this text code:\n\n$ kubectl create deployment NAME --image=image -- [COMMAND] [args...]", "completion": "This text code create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "language": "text", "type": "code-example_explanation", "context": "create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10073"} {"prompt": "Explain this bash code:\n\nkubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"", "completion": "This bash code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10074"} {"prompt": "Explain this bash code:\n\nkubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"", "completion": "This bash code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10075"} {"prompt": "Explain this bash code:\n\nkubectl create 
ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n--annotation ingress.annotation1=foo \\\n--annotation ingress.annotation2=bla", "completion": "This bash code create | Create an ingress with the same host and multiple paths", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10076"} {"prompt": "Explain this bash code:\n\nkubectl create ingress multipath --class=default \\\n--rule=\"foo.com/=svc:port\" \\\n--rule=\"foo.com/admin/=svcadmin:portadmin\"", "completion": "This bash code create | Create an ingress with multiple hosts and the pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10077"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingress1 --class=default \\\n--rule=\"foo.com/path*=svc:8080\" \\\n--rule=\"bar.com/admin*=svc2:http\"", "completion": "This bash code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10078"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingtls --class=default \\\n--rule=\"foo.com/=svc:https,tls\" \\\n--rule=\"foo.com/path/subpath*=othersvc:8080\"", "completion": "This bash code create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "bash", "type": 
"kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10079"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingsecret --class=default \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with a default backend", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with a default backend", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10080"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingdefault --class=default \\\n--default-backend=defaultsvc:http \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10081"} {"prompt": "Explain this text code:\n\n$ kubectl create ingress NAME --rule=host/path=service:port[,tls[=secret]]", "completion": "This text code create | Create a job", "language": "text", "type": "code-example_explanation", "context": "create | Create a job", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10082"} {"prompt": "Explain this bash code:\n\nkubectl create job test-job --from=cronjob/a-cronjob", "completion": "This bash code create | Create a job with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a job with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10083"} {"prompt": "Explain this text code:\n\n$ kubectl create job NAME --image=image [--from=cronjob/name] -- [COMMAND] [args...]", "completion": "This text code create | Create a new namespace named my-namespace", "language": "text", "type": "code-example_explanation", "context": "create | Create a new namespace named my-namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10084"} {"prompt": "Explain this text code:\n\n$ kubectl create namespace NAME [--dry-run=server|client|none]", "completion": "This text code create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "language": "text", "type": "code-example_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10085"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10086"} {"prompt": "Explain this bash code:\n\nkubectl create pdb my-pdb --selector=app=nginx 
--min-available=50%", "completion": "This bash code create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10087"} {"prompt": "Explain this text code:\n\n$ kubectl create poddisruptionbudget NAME --selector=SELECTOR --min-available=N [--dry-run=server|client|none]", "completion": "This text code create | Create a priority class named high-priority", "language": "text", "type": "code-example_explanation", "context": "create | Create a priority class named high-priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10088"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\"", "completion": "This bash code create | Create a priority class named default-priority that is considered as the global default priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10089"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"", "completion": "This bash code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10090"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This bash code create | Create a priority class with the specified name, value, globalDefault and description.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class with the specified name, value, globalDefault and description.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10091"} {"prompt": "Explain this text code:\n\n$ kubectl create priorityclass NAME --value=VALUE --global-default=BOOL [--dry-run=server|client|none]", "completion": "This text code create | Create a new resource quota named my-quota", "language": "text", "type": "code-example_explanation", "context": "create | Create a new resource quota named my-quota", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10092"} {"prompt": "Explain this bash code:\n\nkubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": "This bash code create | Create a new resource quota named best-effort", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named best-effort", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10093"} {"prompt": "Explain this bash code:\n\nkubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "completion": "This bash code create | Create a resource quota with the specified name, hard limits, and optional scopes.", "language": "bash", "type": "kubectl-command_explanation", "context": 
"create | Create a resource quota with the specified name, hard limits, and optional scopes.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10094"} {"prompt": "Explain this text code:\n\n$ kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=server|client|none]", "completion": "This text code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "text", "type": "code-example_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10095"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10096"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10097"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a role named 
\"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10098"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a role with single rule.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role with single rule.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10099"} {"prompt": "Explain this text code:\n\n$ kubectl create role NAME --verb=verb --resource=resource.group/subresource [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This text code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "text", "type": "code-example_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10100"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a role binding for a particular role or cluster role.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role binding for a particular role or cluster role.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10101"} {"prompt": "Explain this text code:\n\n$ kubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] 
[--dry-run=server|client|none]", "completion": "This text code create | Create a secret using specified subcommand.", "language": "text", "type": "code-example_explanation", "context": "create | Create a secret using specified subcommand.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10102"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": "This bash code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10103"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "completion": "This bash code create | Create a new secret for use with Docker registries.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret for use with Docker registries.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10104"} {"prompt": "Explain this text code:\n\n$ kubectl create secret docker-registry NAME --docker-username=user --docker-password=password --docker-email=email [--docker-server=string] [--from-file=[key=]source] [--dry-run=server|client|none]", "completion": "This text code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10105"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=path/to/bar", "completion": "This bash code create | Create a new secret named my-secret with specified keys instead of names on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with specified keys instead of names on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10106"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub", "completion": "This bash code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10107"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret", "completion": "This bash code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10108"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret", "completion": "This bash code create | Create a new secret named my-secret from an env file", 
"language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from an env file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10109"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a secret based on a file, directory, or specified literal value.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a secret based on a file, directory, or specified literal value.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10110"} {"prompt": "Explain this text code:\n\n$ kubectl create generic NAME [--type=string] [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]", "completion": "This text code create | Create a new TLS secret named tls-secret with the given key pair", "language": "text", "type": "code-example_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10111"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "completion": "This bash code create | Create a TLS secret from the given public/private key pair.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a TLS secret from the given public/private key pair.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10112"} {"prompt": "Explain this text code:\n\n$ kubectl create secret tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]", "completion": "This text code create | Create a 
service using a specified subcommand.", "language": "text", "type": "code-example_explanation", "context": "create | Create a service using a specified subcommand.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10113"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --tcp=5678:8080", "completion": "This bash code create | Create a new ClusterIP service named my-cs (in headless mode)", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs (in headless mode)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10114"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This bash code create | Create a ClusterIP service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a ClusterIP service with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10115"} {"prompt": "Explain this text code:\n\n$ kubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none]", "completion": "This text code create | Create a new ExternalName service named my-ns", "language": "text", "type": "code-example_explanation", "context": "create | Create a new ExternalName service named my-ns", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10116"} {"prompt": "Explain this bash code:\n\nkubectl create service externalname my-ns --external-name bar.com", "completion": "This bash code create | Create an ExternalName service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ExternalName service 
with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10117"} {"prompt": "Explain this text code:\n\n$ kubectl create service externalname NAME --external-name external.name [--dry-run=server|client|none]", "completion": "This text code create | Create a new LoadBalancer service named my-lbs", "language": "text", "type": "code-example_explanation", "context": "create | Create a new LoadBalancer service named my-lbs", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10118"} {"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This bash code create | Create a LoadBalancer service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a LoadBalancer service with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10119"} {"prompt": "Explain this text code:\n\n$ kubectl create service loadbalancer NAME [--tcp=port:targetPort] [--dry-run=server|client|none]", "completion": "This text code create | Create a new NodePort service named my-ns", "language": "text", "type": "code-example_explanation", "context": "create | Create a new NodePort service named my-ns", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10120"} {"prompt": "Explain this bash code:\n\nkubectl create service nodeport my-ns --tcp=5678:8080", "completion": "This bash code create | Create a NodePort service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a NodePort service with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10121"} 
{"prompt": "Explain this text code:\n\n$ kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]", "completion": "This text code create | Create a new service account named my-service-account", "language": "text", "type": "code-example_explanation", "context": "create | Create a new service account named my-service-account", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10122"} {"prompt": "Explain this text code:\n\n$ kubectl create serviceaccount NAME [--dry-run=server|client|none]", "completion": "This text code create | List all pods in ps output format", "language": "text", "type": "code-example_explanation", "context": "create | List all pods in ps output format", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10123"} {"prompt": "Explain this bash code:\n\nkubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}", "completion": "This bash code get | List resource information in custom columns", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List resource information in custom columns", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10124"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image", "completion": "This bash code get | List all replication controllers and services together in ps output format", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List all replication controllers and services together in ps output format", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10125"} {"prompt": "Explain this bash code:\n\nkubectl get rc/web service/frontend pods/web-pod-13je7", "completion": 
"This bash code get | Display one or many resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "get | Display one or many resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10126"} {"prompt": "Explain this text code:\n\n$ kubectl get [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file|custom-columns|custom-columns-file|wide] (TYPE[.VERSION][.GROUP] [NAME | -l label] | TYPE[.VERSION][.GROUP]/NAME ...) [flags]", "completion": "This text code get | Start a nginx pod", "language": "text", "type": "code-example_explanation", "context": "get | Start a nginx pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10127"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --port=5701", "completion": "This bash code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10128"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"", "completion": "This bash code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10129"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"", "completion": "This bash code run | Dry run; print the corresponding API objects without creating them", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Dry run; print the corresponding API objects without creating them", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10130"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }'", "completion": "This bash code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10131"} {"prompt": "Explain this bash code:\n\nkubectl run -i -t busybox --image=busybox --restart=Never", "completion": "This bash code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10132"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx -- <arg1> <arg2> ... 
<argN>", "completion": "This bash code run | Start the nginx pod using a different command and custom arguments", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using a different command and custom arguments", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10133"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>", "completion": "This bash code run | Create and run a particular image in a pod.", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Create and run a particular image in a pod.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10134"} {"prompt": "Explain this text code:\n\n$ kubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--dry-run=server|client] [--overrides=inline-json] [--command] -- [COMMAND] [args...]", "completion": "This text code run | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "run | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10135"} {"prompt": "Explain this bash code:\n\nkubectl expose rc nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on 
port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10136"} {"prompt": "Explain this bash code:\n\nkubectl expose -f nginx-controller.yaml --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10137"} {"prompt": "Explain this bash code:\n\nkubectl expose pod valid-pod --port=444 --name=frontend", "completion": "This bash code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10138"} {"prompt": "Explain this bash code:\n\nkubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https", "completion": "This bash code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10139"} {"prompt": "Explain this bash code:\n\nkubectl expose rc streamer --port=4100 
--protocol=UDP --name=video-stream", "completion": "This bash code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10140"} {"prompt": "Explain this bash code:\n\nkubectl expose rs nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10141"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This bash code expose | Expose a resource as a new Kubernetes service.", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Expose a resource as a new Kubernetes service.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10142"} {"prompt": "Explain this text code:\n\n$ kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]", "completion": "This text code expose | Delete a pod using the type and name specified in pod.json", "language": "text", "type": "code-example_explanation", "context": "expose | Delete a pod using the type and name specified in pod.json", 
"source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10143"} {"prompt": "Explain this text code:\n\n$ kubectl delete ([-f FILENAME] | [-k DIRECTORY] | TYPE [(NAME | -l label | --all)])", "completion": "This text code delete | Delete resources by file names, stdin, resources and names, or by resources and label selector.", "language": "text", "type": "code-example_explanation", "context": "delete | Delete resources by file names, stdin, resources and names, or by resources and label selector.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10144"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml -l app=nginx", "completion": "This bash code apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10145"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap", "completion": "This bash code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file",
"source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10146"} {"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | Edit the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10147"} {"prompt": "Explain this text code:\n\n$ kubectl apply edit-last-applied (RESOURCE/NAME | -f FILENAME)", "completion": "This text code apply | Edit the latest last-applied-configuration annotations of resources from the default editor.", "language": "text", "type": "code-example_explanation", "context": "apply | Edit the latest last-applied-configuration annotations of resources from the default editor.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10148"} {"prompt": "Explain this bash code:\n\nkubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "completion": "This bash code apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist",
"source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10149"} {"prompt": "Explain this bash code:\n\nkubectl apply view-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | View the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | View the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10150"} {"prompt": "Explain this text code:\n\n$ kubectl apply view-last-applied (TYPE [NAME | -l label] | TYPE/NAME | -f FILENAME)", "completion": "This text code apply | View the latest last-applied-configuration annotations by type/name or file.", "language": "text", "type": "code-example_explanation", "context": "apply | View the latest last-applied-configuration annotations by type/name or file.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10151"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10152"} {"prompt": "Explain this bash code:\n\nkubectl annotate -f pod.json description='my frontend'", "completion": "This bash code annotate | Update a pod identified by type and name in \"pod.json\"", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\",
"source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10153"} {"prompt": "Explain this bash code:\n\nkubectl annotate --overwrite pods foo description='my frontend running nginx'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10154"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods --all description='my frontend running nginx'", "completion": "This bash code annotate | Update all pods in the namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update all pods in the namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10155"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend running nginx' --resource-version=1", "completion": "This bash code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id":
"k8s_00389_code_explain_10156"} {"prompt": "Explain this text code:\n\n$ kubectl annotate [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]", "completion": "This text code annotate | Update the annotations on one or more resources.", "language": "text", "type": "code-example_explanation", "context": "annotate | Update the annotations on one or more resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10157"} {"prompt": "Explain this text code:\n\n$ kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU]", "completion": "This text code autoscale | Creates an autoscaler that automatically chooses and sets the number of pods that run in a Kubernetes cluster.", "language": "text", "type": "code-example_explanation", "context": "autoscale | Creates an autoscaler that automatically chooses and sets the number of pods that run in a Kubernetes cluster.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10158"} {"prompt": "Explain this bash code:\n\nkubectl debug --image=myproj/debug-tools -c debugger mypod", "completion": "This bash code debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10159"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": "This bash code debug | Create a copy of mypod adding a debug container and attach to it", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10160"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh", "completion": "This bash code debug | Create a copy of mypod changing the command of mycontainer", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10161"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod --copy-to=my-debugger --set-image=*=busybox", "completion": "This bash code debug | Create a copy of mypod changing all container images to busybox", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10162"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug", "completion": "This bash code debug | Create a copy of mypod adding a debug container and changing container images", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10163"} {"prompt": "Explain this text code:\n\n$ kubectl debug (POD | TYPE[[.VERSION].GROUP]/NAME) [ -- COMMAND [args...] ]", "completion": "This text code debug | Debug cluster resources using interactive debugging containers.", "language": "text", "type": "code-example_explanation", "context": "debug | Debug cluster resources using interactive debugging containers.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10164"} {"prompt": "Explain this text code:\n\nKUBE_EDITOR=\"nano\" kubectl edit svc/docker-registry", "completion": "This text code edit | Use an alternative editor", "language": "text", "type": "code-example_explanation", "context": "edit | Use an alternative editor", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10165"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/mydeployment -o yaml --save-config", "completion": "This bash code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "bash", "type": "kubectl-command_explanation",
"context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10166"} {"prompt": "Explain this bash code:\n\nkubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "completion": "This bash code kustomize | Build from github", "language": "bash", "type": "kubectl-command_explanation", "context": "kustomize | Build from github", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10167"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite pods foo status=unhealthy", "completion": "This bash code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10168"} {"prompt": "Explain this bash code:\n\nkubectl label pods foo status=unhealthy --resource-version=1", "completion": "This bash code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10169"}
{"prompt": "Explain this text code:\n\n$ kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]", "completion": "This text code label | Update the labels on a resource.", "language": "text", "type": "code-example_explanation", "context": "label | Update the labels on a resource.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10170"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10171"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10172"} {"prompt": "Explain this bash code:\n\nkubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch",
"source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10173"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'", "completion": "This bash code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10174"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'", "completion": "This bash code patch | Update a container's image using a JSON patch with positional arrays", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10175"} {"prompt": "Explain this text code:\n\n$ kubectl patch (-f FILENAME | TYPE NAME) [-p PATCH|--patch-file FILE]", "completion": "This text code patch | Update fields of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.", "language": "text", "type": "code-example_explanation", "context": "patch | Update fields of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10176"} {"prompt": "Explain this bash code:\n\nkubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -", "completion": "This bash code replace | Update a single-container pod's image version (tag) to v4", "language": "bash", "type":
"kubectl-command_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10177"} {"prompt": "Explain this text code:\n\n$ kubectl rollout history (TYPE NAME | TYPE/NAME) [flags]", "completion": "This text code rollout | View previous rollout revisions and configurations.", "language": "text", "type": "code-example_explanation", "context": "rollout | View previous rollout revisions and configurations.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10178"} {"prompt": "Explain this text code:\n\n$ kubectl rollout status (TYPE NAME | TYPE/NAME) [flags]", "completion": "This text code rollout | Show the status of the rollout.", "language": "text", "type": "code-example_explanation", "context": "rollout | Show the status of the rollout.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10179"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo --dry-run=server deployment/abc", "completion": "This bash code rollout | Roll back to the previous deployment with dry-run", "language": "bash", "type": "kubectl-command_explanation", "context": "rollout | Roll back to the previous deployment with dry-run", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10180"} {"prompt": "Explain this text code:\n\n$ kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags]", "completion": "This text code rollout | Roll back to a previous rollout.", "language": "text", "type":
"code-example_explanation", "context": "rollout | Roll back to a previous rollout.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10181"} {"prompt": "Explain this bash code:\n\nkubectl scale --current-replicas=2 --replicas=3 deployment/mysql", "completion": "This bash code scale | If the deployment named mysql's current size is 2, scale mysql to 3", "language": "bash", "type": "kubectl-command_explanation", "context": "scale | If the deployment named mysql's current size is 2, scale mysql to 3", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10182"} {"prompt": "Explain this text code:\n\n$ kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)", "completion": "This text code scale | Set a new size for a deployment, replica set, replication controller, or stateful set.", "language": "text", "type": "code-example_explanation", "context": "scale | Set a new size for a deployment, replica set, replication controller, or stateful set.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10183"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/registry STORAGE_DIR=/local", "completion": "This bash code set | Update deployment 'registry' with a new environment variable", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update deployment 'registry' with a new environment variable", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10184"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml", "completion": "This bash code set | Output modified deployment in YAML, and does not alter the object on the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Output modified deployment in YAML, and does not alter the object on the server", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10185"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=secret/mysecret deployment/myapp", "completion": "This bash code set | Import environment from a secret", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a secret", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10186"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp", "completion": "This bash code set | Import environment from a config map with a prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a config map with a prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10187"} {"prompt": "Explain this bash code:\n\nkubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp", "completion": "This bash code set | Import specific keys from a config map", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import specific keys from a config map", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10188"} {"prompt": "Explain this bash code:\n\nkubectl set env deployments --all --containers=\"c1\" ENV-", "completion": "This bash code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10189"} {"prompt": "Explain this text code:\n\nenv | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code set | Set some of the local shell environment into a deployment config on the server", "language": "text", "type": "code-example_explanation", "context": "set | Set some of the local shell environment into a deployment config on the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10190"} {"prompt": "Explain this text code:\n\n$ kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N", "completion": "This text code set | Update environment variables on a pod template.", "language": "text", "type": "code-example_explanation", "context": "set | Update environment variables on a pod template.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10191"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1", "completion": "This bash code set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10192"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments,rc nginx=nginx:1.9.1 --all", "completion": "This bash code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10193"} {"prompt": "Explain this bash code:\n\nkubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "completion": "This bash code set | Update existing container image(s) of resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update existing container image(s) of resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10194"} {"prompt": "Explain this text code:\n\n$ kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N", "completion": "This text code set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "language": "text", "type": "code-example_explanation", "context": "set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10195"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code set | Set the resource request and limits for all containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10196"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": "This bash code set | Remove the resource requests for resources on containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the resource requests for resources on containers in 
nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10197"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0", "completion": "This bash code set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10198"} {"prompt": "Explain this bash code:\n\nkubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This bash code set | Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. 
If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10199"} {"prompt": "Explain this text code:\n\n$ kubectl set resources (-f FILENAME | TYPE NAME) ([--limits=LIMITS & --requests=REQUESTS]", "completion": "This text code set | Set the labels and selector before creating a deployment/service pair", "language": "text", "type": "code-example_explanation", "context": "set | Set the labels and selector before creating a deployment/service pair", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10200"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\nkubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This bash code set | Set the selector on a resource. Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the selector on a resource. 
Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10201"} {"prompt": "Explain this text code:\n\n$ kubectl set selector (-f FILENAME | TYPE NAME) EXPRESSIONS [--resource-version=version]", "completion": "This text code set | Set deployment nginx-deployment's service account to serviceaccount1", "language": "text", "type": "code-example_explanation", "context": "set | Set deployment nginx-deployment's service account to serviceaccount1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10202"} {"prompt": "Explain this bash code:\n\nkubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": "This bash code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10203"} {"prompt": "Explain this bash code:\n\nkubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This bash code set | Update the service account of pod template resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update the service account of pod template resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10204"} {"prompt": "Explain this text code:\n\n$ kubectl set serviceaccount (-f FILENAME | TYPE NAME) SERVICE_ACCOUNT", "completion": "This text code 
set | Update the service account of pod template resources.", "language": "text", "type": "code-example_explanation", "context": "set | Update the service account of pod template resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10205"} {"prompt": "Explain this bash code:\n\nkubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1", "completion": "This bash code set | Update a cluster role binding for serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a cluster role binding for serviceaccount1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10206"} {"prompt": "Explain this bash code:\n\nkubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1", "completion": "This bash code set | Update a role binding for user1, user2, and group1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a role binding for user1, user2, and group1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10207"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10208"} {"prompt": "Explain this text code:\n\n$ kubectl set 
subject (-f FILENAME | TYPE NAME) [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This text code set | Update the user, group, or service account in a role binding or cluster role binding.", "language": "text", "type": "code-example_explanation", "context": "set | Update the user, group, or service account in a role binding or cluster role binding.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10209"} {"prompt": "Explain this bash code:\n\nkubectl wait --for=condition=Ready=false pod/busybox1", "completion": "This bash code wait | Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10210"} {"prompt": "Explain this bash code:\n\nkubectl delete pod/busybox1\nkubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This bash code wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10211"} {"prompt": "Explain this text code:\n\n$ kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available]", "completion": "This text code wait | This section contains commands for inspecting and debugging your applications.", "language": "text", "type": "code-example_explanation", "context": "wait | This 
section contains commands for inspecting and debugging your applications.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10212"} {"prompt": "Explain this text code:\n\n$ kubectl auth can-i VERB [TYPE | TYPE/NAME | NONRESOURCEURL]", "completion": "This text code auth | Reconcile RBAC resources from a file", "language": "text", "type": "code-example_explanation", "context": "auth | Reconcile RBAC resources from a file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10213"} {"prompt": "Explain this text code:\n\ntar cf - /tmp/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar", "completion": "This text code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10214"} {"prompt": "Explain this bash code:\n\nkubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10215"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10216"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10217"} {"prompt": "Explain this bash code:\n\nkubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "completion": "This bash code cp | Copy files and directories to and from containers.", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy files and directories to and from containers.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10218"} {"prompt": "Explain this bash code:\n\nkubectl describe nodes kubernetes-node-emt8.c.myproject.internal", "completion": "This bash code describe | Describe a pod", "language": "bash", "type": "kubectl-command_explanation", "context": "describe | Describe a pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10219"} {"prompt": "Explain this text code:\n\n$ kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME)", "completion": "This text code describe | Get output from running the 'date' command from pod mypod, using the first container by default", "language": "text", "type": "code-example_explanation", "context": "describe | Get output from running the 'date' command from pod mypod, using the first container by default", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10220"} {"prompt": "Explain this bash code:\n\nkubectl exec mypod -c 
ruby-container -i -t -- bash -il", "completion": "This bash code exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (", "language": "bash", "type": "kubectl-command_explanation", "context": "exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10221"} {"prompt": "Explain this text code:\n\n$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]", "completion": "This text code exec | Return snapshot logs from pod nginx with only one container", "language": "text", "type": "code-example_explanation", "context": "exec | Return snapshot logs from pod nginx with only one container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10222"} {"prompt": "Explain this bash code:\n\nkubectl logs --insecure-skip-tls-verify-backend nginx", "completion": "This bash code logs | Return snapshot logs from first container of a job named hello", "language": "bash", "type": "kubectl-command_explanation", "context": "logs | Return snapshot logs from first container of a job named hello", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10223"} {"prompt": "Explain this text code:\n\n$ kubectl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]", "completion": "This text code logs | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod", "language": "text", "type": "code-example_explanation", "context": "logs | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 
and 6000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10224"} {"prompt": "Explain this bash code:\n\nkubectl port-forward deployment/mydeployment 5000 6000", "completion": "This bash code port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10225"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10226"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10227"} {"prompt": "Explain this text code:\n\n$ kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]", "completion": "This text code port-forward | To proxy all of the Kubernetes API and 
nothing else", "language": "text", "type": "code-example_explanation", "context": "port-forward | To proxy all of the Kubernetes API and nothing else", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10228"} {"prompt": "Explain this bash code:\n\nkubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/", "completion": "This bash code proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "language": "bash", "type": "kubectl-command_explanation", "context": "proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10229"} {"prompt": "Explain this text code:\n\n$ kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix]", "completion": "This text code proxy | Display Resource (CPU/Memory) usage.", "language": "text", "type": "code-example_explanation", "context": "proxy | Display Resource (CPU/Memory) usage.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10230"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump all namespaces to stdout", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump all namespaces to stdout", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10231"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump cluster information out suitable for debugging and 
diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. If you sp", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. If you sp", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10232"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10233"} {"prompt": "Explain this bash code:\n\nkubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule", "completion": "This bash code taint | Add to node 'foo' a taint with key 'bar' and no value", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Add to node 'foo' a taint with key 'bar' and no value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10234"} {"prompt": "Explain this text code:\n\n$ kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... 
KEY_N=VAL_N:TAINT_EFFECT_N", "completion": "This text code taint | Mark node \"foo\" as schedulable", "language": "text", "type": "code-example_explanation", "context": "taint | Mark node \"foo\" as schedulable", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10235"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl", "completion": "This bash code completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion co", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion co", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10236"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > ~/.kube/completion.bash.inc\nprintf \"", "completion": "This bash code completion | Kubectl shell completion", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Kubectl shell completion", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10237"} {"prompt": "Explain this text code:\n\nsource '$HOME/.kube/completion.bash.inc'\n\" >> $HOME/.bash_profile\nsource $HOME/.bash_profile", "completion": "This text code completion | Load the kubectl completion code for zsh[1] into the current shell", "language": "text", "type": "code-example_explanation", "context": "completion | Load the kubectl completion code for zsh[1] into the current shell", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", 
"id": "k8s_00389_code_explain_10238"} {"prompt": "Explain this text code:\n\n$ kubectl config get-contexts [(-o|--output=)name)]", "completion": "This text code config | List the users that kubectl knows about", "language": "text", "type": "code-example_explanation", "context": "config | List the users that kubectl knows about", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10239"} {"prompt": "Explain this text code:\n\n$ kubectl config rename-context CONTEXT_NAME NEW_NAME", "completion": "This text code config | Set the server field on the my-cluster cluster to https://1.2.3.4", "language": "text", "type": "code-example_explanation", "context": "config | Set the server field on the my-cluster cluster to https://1.2.3.4", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10240"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.server https://1.2.3.4", "completion": "This bash code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10241"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)", "completion": "This bash code config | Set the cluster field in the my-context context to my-cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10242"} {"prompt": "Explain this bash code:\n\nkubectl config set 
contexts.my-context.cluster my-cluster", "completion": "This bash code config | Set the cluster field in the my-context context to my-cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10243"} {"prompt": "Explain this bash code:\n\nkubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "completion": "This bash code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10244"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --server=https://1.2.3.4", "completion": "This bash code config | Set only the server field on the e2e cluster entry without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set only the server field on the e2e cluster entry without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10245"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt", "completion": "This bash code config | Embed certificate authority data for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10246"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --insecure-skip-tls-verify=true", "completion": "This bash 
code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10247"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --tls-server-name=my-cluster-name", "completion": "This bash code config | Set a cluster entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a cluster entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10248"} {"prompt": "Explain this text code:\n\n$ kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true] [--tls-server-name=example.com]", "completion": "This text code config | Set the user field on the gce context entry without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set the user field on the gce context entry without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10249"} {"prompt": "Explain this bash code:\n\nkubectl config set-context gce --user=cluster-admin", "completion": "This bash code config | Set a context entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a context entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10250"} {"prompt": "Explain this text code:\n\n$ kubectl config set-context [NAME | --current] [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]", "completion": "This text 
code config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10251"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key", "completion": "This bash code config | Set basic auth for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set basic auth for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10252"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif", "completion": "This bash code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10253"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true", "completion": "This bash code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10254"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials 
cluster-admin --auth-provider=gcp", "completion": "This bash code config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10255"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar", "completion": "This bash code config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10256"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-", "completion": "This bash code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10257"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1", "completion": "This bash code config | Define new exec auth plugin args for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Define 
new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10258"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2", "completion": "This bash code config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10259"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2", "completion": "This bash code config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10260"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=var-to-remove-", "completion": "This bash code config | Set a user entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a user entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10261"} {"prompt": "Explain this text code:\n\n$ kubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile] [--token=bearer_token] [--username=basic_user] [--password=basic_password] [--auth-provider=provider_name] [--auth-provider-arg=key=value] [--exec-command=exec_command] 
[--exec-api-version=exec_api_version] [--exec-arg=arg] [--exec-env=key=value]", "completion": "This text code config | Unset the current-context", "language": "text", "type": "code-example_explanation", "context": "config | Unset the current-context", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10262"} {"prompt": "Explain this bash code:\n\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This bash code config | Display merged kubeconfig settings or a specified kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Display merged kubeconfig settings or a specified kubeconfig file.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10263"} {"prompt": "Explain this text code:\n\nEdit the data in docker-registry.yaml in JSON then create the resource using the edited data", "completion": "This text code create | Edit the data in docker-registry.yaml in JSON then create the resource using the edited data", "language": "text", "type": "code-example_explanation", "context": "create | Edit the data in docker-registry.yaml in JSON then create the resource using the edited data", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10264"} {"prompt": "Explain this bash code:\n\nkubectl create -f docker-registry.yaml --edit -o json", "completion": "This bash code create | Create a resource from a file or from stdin.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a resource from a file or from stdin.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10265"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" 
and \"list\" on pods", "completion": "This text code create | Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10266"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10267"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"pod-reader\" with ResourceName specified", "completion": "This text code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10268"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a cluster role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10269"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"foo\" with API Group specified", "completion": "This text code create | Create a cluster role named \"foo\" with API Group specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10270"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a cluster role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10271"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"foo\" with SubResource specified", "completion": "This text code create | Create a cluster role named \"foo\" with SubResource specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10272"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a cluster role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10273"} {"prompt": "Explain this text
code:\n\nCreate a cluster role name \"foo\" with NonResourceURL specified", "completion": "This text code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10274"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10275"} {"prompt": "Explain this text code:\n\nCreate a cluster role name \"monitoring\" with AggregationRule specified", "completion": "This text code create | Create a cluster role name \"monitoring\" with AggregationRule specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10276"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code create | Create a cluster role name \"monitoring\" with AggregationRule specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10277"} {"prompt": "Explain this text code:\n\nCreate a cluster role binding for user1, user2, and group1
using the cluster-admin cluster role", "completion": "This text code create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10278"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10279"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config based on folder bar", "completion": "This text code create | Create a new config map named my-config based on folder bar", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config based on folder bar", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10280"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config based on folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config based on folder bar", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10281"}
{"prompt": "Explain this text code:\n\nCreate a new config map named my-config with specified keys instead of file basenames on disk", "completion": "This text code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10282"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt", "completion": "This bash code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10283"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config with key1=config1 and key2=config2", "completion": "This text code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10284"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2", "completion": "This bash code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config
with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10285"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config from the key=value pairs in the file", "completion": "This text code create | Create a new config map named my-config from the key=value pairs in the file", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10286"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config from the key=value pairs in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10287"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config from an env file", "completion": "This text code create | Create a new config map named my-config from an env file", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config from an env file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10288"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a new config map named my-config from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from an env file", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10289"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": "This bash code create | Create a cron job", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10290"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "completion": "This bash code create | Create a cron job with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with a command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10291"} {"prompt": "Explain this text code:\n\nCreate a deployment named my-dep that runs the busybox image", "completion": "This text code create | Create a deployment named my-dep that runs the busybox image", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10292"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox -- date", "completion": "This bash code create | Create a deployment with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment with a command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10293"} {"prompt": "Explain this text code:\n\nCreate
a deployment named my-dep that runs the nginx image with 3 replicas", "completion": "This text code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10294"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=nginx --replicas=3", "completion": "This bash code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10295"} {"prompt": "Explain this text code:\n\nCreate a deployment named my-dep that runs the busybox image and expose port 5701", "completion": "This text code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10296"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox --port=5701", "completion": "This bash code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10297"} {"prompt": "Explain this text code:\n\nCreate a single
ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "completion": "This text code create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "language": "text", "type": "code-example_explanation", "context": "create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10298"} {"prompt": "Explain this bash code:\n\nkubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"", "completion": "This bash code create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10299"} {"prompt": "Explain this text code:\n\nCreate a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "completion": "This text code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "language": "text", "type": "code-example_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10300"} {"prompt": "Explain this bash code:\n\nkubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"", "completion": "This bash code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\",
"language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10301"} {"prompt": "Explain this text code:\n\nCreate an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "completion": "This text code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10302"} {"prompt": "Explain this bash code:\n\nkubectl create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n--annotation ingress.annotation1=foo \\\n--annotation ingress.annotation2=bla", "completion": "This bash code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10303"} {"prompt": "Explain this text code:\n\nCreate an ingress with the same host and multiple paths", "completion": "This text code create | Create an ingress with the same host and multiple paths", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10304"} {"prompt": "Explain this bash code:\n\nkubectl create ingress multipath --class=default
\\\n--rule=\"foo.com/=svc:port\" \\\n--rule=\"foo.com/admin/=svcadmin:portadmin\"", "completion": "This bash code create | Create an ingress with the same host and multiple paths", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10305"} {"prompt": "Explain this text code:\n\nCreate an ingress with multiple hosts and the pathType as Prefix", "completion": "This text code create | Create an ingress with multiple hosts and the pathType as Prefix", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10306"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingress1 --class=default \\\n--rule=\"foo.com/path*=svc:8080\" \\\n--rule=\"bar.com/admin*=svc2:http\"", "completion": "This bash code create | Create an ingress with multiple hosts and the pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10307"} {"prompt": "Explain this text code:\n\nCreate an ingress with TLS enabled using the default ingress certificate and different path types", "completion": "This text code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path
types", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10308"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingtls --class=default \\\n--rule=\"foo.com/=svc:https,tls\" \\\n--rule=\"foo.com/path/subpath*=othersvc:8080\"", "completion": "This bash code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10309"} {"prompt": "Explain this text code:\n\nCreate an ingress with TLS enabled using a specific secret and pathType as Prefix", "completion": "This text code create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10310"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingsecret --class=default \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10311"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingdefault --class=default \\\n--default-backend=defaultsvc:http \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with a default backend", "language": "bash", "type":
"kubectl-command_explanation", "context": "create | Create an ingress with a default backend", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10312"} {"prompt": "Explain this bash code:\n\nkubectl create job test-job --from=cronjob/a-cronjob", "completion": "This bash code create | Create a job from a cron job named \"a-cronjob\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a job from a cron job named \"a-cronjob\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10313"} {"prompt": "Explain this text code:\n\nCreate a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "completion": "This text code create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "language": "text", "type": "code-example_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10314"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being
available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10315"} {"prompt": "Explain this text code:\n\nCreate a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "completion": "This text code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "language": "text", "type": "code-example_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10316"} {"prompt": "Explain this bash code:\n\nkubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10317"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\"", "completion": "This bash code create | Create a priority class named high-priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10318"} {"prompt": "Explain this text code:\n\nCreate a priority class named default-priority that is considered as the global default priority", "completion": "This text code create | Create a priority class named default-priority that is considered as the global default priority", "language": "text", "type": "code-example_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10319"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"", "completion": "This bash code create | Create a priority class named default-priority that is considered as the global default priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10320"} {"prompt": "Explain this text code:\n\nCreate a priority class named high-priority that cannot preempt pods with lower priority", "completion": "This text code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "text", "type": "code-example_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10321"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This
bash code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10322"} {"prompt": "Explain this bash code:\n\nkubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": "This bash code create | Create a new resource quota named my-quota", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named my-quota", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10323"} {"prompt": "Explain this bash code:\n\nkubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "completion": "This bash code create | Create a new resource quota named best-effort", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named best-effort", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10324"} {"prompt": "Explain this text code:\n\nCreate a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "completion": "This text code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "text", "type": "code-example_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10325"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10326"} {"prompt": "Explain this text code:\n\nCreate a role named \"pod-reader\" with ResourceName specified", "completion": "This text code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10327"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10328"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with API Group specified", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10329"} {"prompt": "Explain this text code:\n\nCreate a role named \"foo\" with SubResource specified", "completion": "This text code create | Create a role named \"foo\" with SubResource specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10330"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10331"} {"prompt": "Explain this text code:\n\nCreate a role binding for user1, user2, and group1 using the admin cluster role", "completion": "This text code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "text", "type": "code-example_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10332"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id":
"k8s_00389_code_explain_10333"} {"prompt": "Explain this text code:\n\nIf you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "completion": "This text code create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "language": "text", "type": "code-example_explanation", "context": "create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10334"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": "This bash code create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "language": "bash", "type": "kubectl-command_explanation", "context": "create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10335"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret from ~/.docker/config.json", "completion": "This text code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10336"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "completion": "This bash code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10337"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret with keys for each file in folder bar", "completion": "This text code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10338"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=path/to/bar", "completion": "This bash code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10339"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret with specified keys instead of names on disk", "completion": "This text code create | Create a new secret named my-secret with specified keys instead of names on disk", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret with specified keys instead of names on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10340"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub", "completion": "This bash code create | Create a new secret named my-secret with specified keys instead of names on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with specified keys instead of names on disk", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10341"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret with key1=supersecret and key2=topsecret", "completion": "This text code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10342"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret", "completion": "This bash code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10343"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret using a combination of a file and a literal", "completion": "This text code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10344"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret 
--from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret", "completion": "This bash code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10345"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret from an env file", "completion": "This text code create | Create a new secret named my-secret from an env file", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret from an env file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10346"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a new secret named my-secret from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from an env file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10347"} {"prompt": "Explain this text code:\n\nCreate a new TLS secret named tls-secret with the given key pair", "completion": "This text code create | Create a new TLS secret named tls-secret with the given key pair", "language": "text", "type": "code-example_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10348"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "completion": "This bash code create | Create a new TLS secret named tls-secret with the given key pair", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10349"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --tcp=5678:8080", "completion": "This bash code create | Create a new ClusterIP service named my-cs", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10350"} {"prompt": "Explain this text code:\n\nCreate a new ClusterIP service named my-cs (in headless mode)", "completion": "This text code create | Create a new ClusterIP service named my-cs (in headless mode)", "language": "text", "type": "code-example_explanation", "context": "create | Create a new ClusterIP service named my-cs (in headless mode)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10351"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This bash code create | Create a new ClusterIP service named my-cs (in headless mode)", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs (in headless mode)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10352"} {"prompt": "Explain this bash code:\n\nkubectl create service externalname my-ns --external-name bar.com", "completion": "This bash code create | Create an ExternalName service with the specified name.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "create | Create an ExternalName service with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10353"} {"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This bash code create | Create a LoadBalancer service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a LoadBalancer service with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10354"} {"prompt": "Explain this bash code:\n\nkubectl create service nodeport my-ns --tcp=5678:8080", "completion": "This bash code create | Create a NodePort service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a NodePort service with the specified name.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10355"} {"prompt": "Explain this text code:\n\nCreate a new service account named my-service-account", "completion": "This text code create | Create a new service account named my-service-account", "language": "text", "type": "code-example_explanation", "context": "create | Create a new service account named my-service-account", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10356"} {"prompt": "Explain this text code:\n\nList all pods in ps output format with more information (such as node name)", "completion": "This text code get | List all pods in ps output format with more information (such as node name)", "language": "text", "type": "code-example_explanation", "context": "get | List all pods in ps output format with more information (such as node name)", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10357"} {"prompt": "Explain this text code:\n\nList a single replication controller with specified NAME in ps output format", "completion": "This text code get | List a single replication controller with specified NAME in ps output format", "language": "text", "type": "code-example_explanation", "context": "get | List a single replication controller with specified NAME in ps output format", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10358"} {"prompt": "Explain this text code:\n\nList deployments in JSON output format, in the \"v1\" version of the \"apps\" API group", "completion": "This text code get | List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group", "language": "text", "type": "code-example_explanation", "context": "get | List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10359"} {"prompt": "Explain this text code:\n\nList a pod identified by type and name specified in \"pod.yaml\" in JSON output format", "completion": "This text code get | List a pod identified by type and name specified in \"pod.yaml\" in JSON output format", "language": "text", "type": "code-example_explanation", "context": "get | List a pod identified by type and name specified in \"pod.yaml\" in JSON output format", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10360"} {"prompt": "Explain this text code:\n\nList resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml", "completion": "This text code get | List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml", "language": "text", "type": "code-example_explanation", "context": "get | List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10361"} {"prompt": "Explain this bash code:\n\nkubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}", "completion": "This bash code get | Return only the phase value of the specified pod", "language": "bash", "type": "kubectl-command_explanation", "context": "get | Return only the phase value of the specified pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10362"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image", "completion": "This bash code get | List resource information in custom columns", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List resource information in custom columns", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10363"} {"prompt": "Explain this text code:\n\nList all replication controllers and services together in ps output format", "completion": "This text code get | List all replication controllers and services together in ps output format", "language": "text", "type": "code-example_explanation", "context": "get | List all replication controllers and services together in ps output format", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10364"} {"prompt": "Explain this bash code:\n\nkubectl get rc/web service/frontend pods/web-pod-13je7", "completion": "This bash code get | List one or more resources by their type and names", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List one or more resources by their type and names", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10365"} {"prompt": "Explain this text code:\n\nStart a hazelcast pod and let the container expose port 5701", "completion": "This text code run | Start a hazelcast pod and let the container expose port 5701", "language": "text", "type": "code-example_explanation", "context": "run | Start a hazelcast pod and let the container expose port 5701", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10366"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --port=5701", "completion": "This bash code run | Start a hazelcast pod and let the container expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and let the container expose port 5701", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10367"} {"prompt": "Explain this text code:\n\nStart a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "completion": "This text code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "text", "type": "code-example_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10368"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast 
--env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"", "completion": "This bash code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10369"} {"prompt": "Explain this text code:\n\nStart a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "completion": "This text code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "text", "type": "code-example_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10370"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"", "completion": "This bash code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10371"} {"prompt": "Explain this text code:\n\nDry run; print the corresponding API objects without creating them", "completion": "This text code run | Dry run; print the corresponding API objects without creating them", "language": "text", "type": "code-example_explanation", "context": "run | Dry run; print the corresponding API objects without creating them", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10372"} {"prompt": "Explain this text code:\n\nStart a nginx pod, but overload the spec with a partial set of values parsed from JSON", "completion": "This text code run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "language": "text", "type": "code-example_explanation", "context": "run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10373"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }'", "completion": "This bash code run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10374"} {"prompt": "Explain this text code:\n\nStart a busybox pod and keep it in the foreground, don't restart it if it exits", "completion": "This text code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "text", "type": "code-example_explanation", "context": "run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10375"} {"prompt": "Explain this bash code:\n\nkubectl run -i -t busybox --image=busybox --restart=Never", "completion": "This bash code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10376"} {"prompt": "Explain this text code:\n\nStart the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "completion": "This text code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "text", "type": "code-example_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10377"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>", "completion": "This bash code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10378"} {"prompt": "Explain this text code:\n\nStart the nginx pod using a different command and custom arguments", "completion": "This text code run | Start the nginx pod using a different command and custom arguments", "language": "text", "type": "code-example_explanation", "context": "run | Start the nginx pod using a different command and custom arguments", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10379"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>", "completion": "This bash code run | Start the nginx pod using a different command and custom arguments", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using a different command and custom arguments", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10380"} {"prompt": "Explain this text code:\n\nCreate a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "completion": "This text code expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10381"} {"prompt": "Explain this bash code:\n\nkubectl expose rc nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10382"} {"prompt": "Explain this text code:\n\nCreate a service for a replication controller identified by type and name specified 
in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "completion": "This text code expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10383"} {"prompt": "Explain this bash code:\n\nkubectl expose -f nginx-controller.yaml --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10384"} {"prompt": "Explain this text code:\n\nCreate a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "completion": "This text code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10385"} {"prompt": "Explain this bash code:\n\nkubectl expose pod valid-pod --port=444 --name=frontend", "completion": "This bash code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10386"} {"prompt": "Explain this text code:\n\nCreate a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "completion": "This text code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "text", "type": "code-example_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10387"} {"prompt": "Explain this bash code:\n\nkubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https", "completion": "This bash code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10388"} {"prompt": "Explain this text code:\n\nCreate a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "completion": "This text code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10389"} {"prompt": "Explain this bash code:\n\nkubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream", "completion": "This bash code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10390"} {"prompt": "Explain this text code:\n\nCreate a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "completion": "This text code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10391"} {"prompt": "Explain this bash code:\n\nkubectl expose rs nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10392"} {"prompt": "Explain this text code:\n\nCreate 
a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "completion": "This text code expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10393"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10394"} {"prompt": "Explain this text code:\n\nDelete a pod using the type and name specified in pod.json", "completion": "This text code delete | Delete a pod using the type and name specified in pod.json", "language": "text", "type": "code-example_explanation", "context": "delete | Delete a pod using the type and name specified in pod.json", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10395"} {"prompt": "Explain this text code:\n\nDelete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml", "completion": "This text code delete | Delete resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10396"} {"prompt": "Explain this text code:\n\nDelete a pod based on the type and name in the JSON passed into stdin", "completion": "This text code delete | Delete a pod based on the type and name in the JSON passed into stdin", "language": "text", "type": "code-example_explanation", "context": "delete | Delete a pod based on the type and name in the JSON passed into stdin", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10397"} {"prompt": "Explain this text code:\n\nDelete pods and services with same names \"baz\" and \"foo\"", "completion": "This text code delete | Delete pods and services with same names \"baz\" and \"foo\"", "language": "text", "type": "code-example_explanation", "context": "delete | Delete pods and services with same names \"baz\" and \"foo\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10398"} {"prompt": "Explain this text code:\n\nApply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml", "completion": "This text code apply | Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml", "language": "text", "type": "code-example_explanation", "context": "apply | Apply resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10399"}
{"prompt": "Explain this text code:\n\nNote: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "completion": "This text code apply | Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "language": "text", "type": "code-example_explanation", "context": "apply | Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10400"}
{"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml -l app=nginx", "completion": "This bash code apply | Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10401"}
{"prompt": "Explain this text code:\n\nApply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "completion": "This text code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "text", "type": "code-example_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10402"}
{"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap", "completion": "This bash code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10403"}
{"prompt": "Explain this text code:\n\nEdit the last-applied-configuration annotations by type/name in YAML", "completion": "This text code apply | Edit the last-applied-configuration annotations by type/name in YAML", "language": "text", "type": "code-example_explanation", "context": "apply | Edit the last-applied-configuration annotations by type/name in YAML", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10404"}
{"prompt": "Explain this text code:\n\nEdit the last-applied-configuration annotations by file in JSON", "completion": "This text code apply | Edit the last-applied-configuration annotations by file in JSON", "language": "text", "type": "code-example_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10405"}
{"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | Edit the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10406"}
{"prompt": "Explain this text code:\n\nSet the last-applied-configuration of a resource to match the contents of a file", "completion": "This text code apply | Set the last-applied-configuration of a resource to match the contents of a file", "language": "text", "type": "code-example_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10407"}
{"prompt": "Explain this text code:\n\nExecute set-last-applied against each configuration file in a directory", "completion": "This text code apply | Execute set-last-applied against each configuration file in a directory", "language": "text", "type": "code-example_explanation", "context": "apply | Execute set-last-applied against each configuration file in a directory", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10408"}
{"prompt": "Explain this text code:\n\nSet the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "completion": "This text code apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "language": "text", "type": "code-example_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10409"}
{"prompt": "Explain this bash code:\n\nkubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "completion": "This bash code apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10410"}
{"prompt": "Explain this text code:\n\nView the last-applied-configuration annotations by type/name in YAML", "completion": "This text code apply | View the last-applied-configuration annotations by type/name in YAML", "language": "text", "type": "code-example_explanation", "context": "apply | View the last-applied-configuration annotations by type/name in YAML", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10411"}
{"prompt": "Explain this text code:\n\nView the last-applied-configuration annotations by file in JSON", "completion": "This text code apply | View the last-applied-configuration annotations by file in JSON", "language": "text", "type": "code-example_explanation", "context": "apply | View the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10412"}
{"prompt": "Explain this bash code:\n\nkubectl apply view-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | View the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | View the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10413"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "completion": "This text code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10414"}
{"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10415"}
{"prompt": "Explain this text code:\n\nUpdate a pod identified by type and name in \"pod.json\"", "completion": "This text code annotate | Update a pod identified by type and name in \"pod.json\"", "language": "text", "type": "code-example_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10416"}
{"prompt": "Explain this bash code:\n\nkubectl annotate -f pod.json description='my frontend'", "completion": "This bash code annotate | Update a pod identified by type and name in \"pod.json\"", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10417"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "completion": "This text code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10418"}
{"prompt": "Explain this bash code:\n\nkubectl annotate --overwrite pods foo description='my frontend running nginx'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10419"}
{"prompt": "Explain this bash code:\n\nkubectl annotate pods --all description='my frontend running nginx'", "completion": "This bash code annotate | Update all pods in the namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update all pods in the namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10420"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' only if the resource is unchanged from version 1", "completion": "This text code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10421"}
{"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend running nginx' --resource-version=1", "completion": "This bash code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10422"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag", "completion": "This text code annotate | Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10423"} {"prompt": "Explain 
this text code:\n\nAuto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used", "completion": "This text code autoscale | Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used", "language": "text", "type": "code-example_explanation", "context": "autoscale | Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10424"} {"prompt": "Explain this text code:\n\nAuto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%", "completion": "This text code autoscale | Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%", "language": "text", "type": "code-example_explanation", "context": "autoscale | Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10425"} {"prompt": "Explain this text code:\n\nCreate an interactive debugging session in pod mypod and immediately attach to it. # (requires the EphemeralContainers feature to be enabled in the cluster)", "completion": "This text code debug | Create an interactive debugging session in pod mypod and immediately attach to it. # (requires the EphemeralContainers feature to be enabled in the cluster)", "language": "text", "type": "code-example_explanation", "context": "debug | Create an interactive debugging session in pod mypod and immediately attach to it. 
# (requires the EphemeralContainers feature to be enabled in the cluster)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10426"}
{"prompt": "Explain this text code:\n\nCreate a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "completion": "This text code debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "language": "text", "type": "code-example_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10427"}
{"prompt": "Explain this bash code:\n\nkubectl debug --image=myproj/debug-tools -c debugger mypod", "completion": "This bash code debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10428"}
{"prompt": "Explain this text code:\n\nCreate a copy of mypod adding a debug container and attach to it", "completion": "This text code debug | Create a copy of mypod adding a debug container and attach to it", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10429"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": "This bash code debug | Create a copy of mypod adding a debug container and attach to it", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10430"}
{"prompt": "Explain this text code:\n\nCreate a copy of mypod changing the command of mycontainer", "completion": "This text code debug | Create a copy of mypod changing the command of mycontainer", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10431"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh", "completion": "This bash code debug | Create a copy of mypod changing the command of mycontainer", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10432"}
{"prompt": "Explain this text code:\n\nCreate a copy of mypod changing all container images to busybox", "completion": "This text code debug | Create a copy of mypod changing all container images to busybox", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10433"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod --copy-to=my-debugger --set-image=*=busybox", "completion": "This bash code debug | Create a copy of mypod changing all container images to busybox", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10434"}
{"prompt": "Explain this text code:\n\nCreate a copy of mypod adding a debug container and changing container images", "completion": "This text code debug | Create a copy of mypod adding a debug container and changing container images", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10435"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug", "completion": "This bash code debug | Create a copy of mypod adding a debug container and changing container images", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10436"}
{"prompt": "Explain this text code:\n\nCreate an interactive debugging session on a node and immediately attach to it. # The container will run in the host namespaces and the host's filesystem will be mounted at /host", "completion": "This text code debug | Create an interactive debugging session on a node and immediately attach to it. # The container will run in the host namespaces and the host's filesystem will be mounted at /host", "language": "text", "type": "code-example_explanation", "context": "debug | Create an interactive debugging session on a node and immediately attach to it. # The container will run in the host namespaces and the host's filesystem will be mounted at /host", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10437"}
{"prompt": "Explain this text code:\n\nKUBE_EDITOR=\"nano\" kubectl edit svc/docker-registry", "completion": "This text code edit | Use an alternative editor", "language": "text", "type": "code-example_explanation", "context": "edit | Use an alternative editor", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10438"}
{"prompt": "Explain this text code:\n\nEdit the job 'myjob' in JSON using the v1 API format", "completion": "This text code edit | Edit the job 'myjob' in JSON using the v1 API format", "language": "text", "type": "code-example_explanation", "context": "edit | Edit the job 'myjob' in JSON using the v1 API format", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10439"}
{"prompt": "Explain this text code:\n\nEdit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "completion": "This text code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "text", "type": "code-example_explanation", "context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10440"}
{"prompt": "Explain this bash code:\n\nkubectl edit deployment/mydeployment -o yaml --save-config", "completion": "This bash code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "bash", "type": "kubectl-command_explanation", "context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10441"}
{"prompt": "Explain this bash code:\n\nkubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "completion": "This bash code kustomize | Build a set of KRM resources using a 'kustomization.yaml' file. The DIR argument must be a path to a directory containing 'kustomization.yaml', or a git repository URL with a path suffix specifying sa", "language": "bash", "type": "kubectl-command_explanation", "context": "kustomize | Build a set of KRM resources using a 'kustomization.yaml' file. The DIR argument must be a path to a directory containing 'kustomization.yaml', or a git repository URL with a path suffix specifying sa", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10442"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the label 'unhealthy' and the value 'true'", "completion": "This text code label | Update pod 'foo' with the label 'unhealthy' and the value 'true'", "language": "text", "type": "code-example_explanation", "context": "label | Update pod 'foo' with the label 'unhealthy' and the value 'true'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10443"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "completion": "This text code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "language": "text", "type": "code-example_explanation", "context": "label | Update pod 'foo' with the label 
'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10444"}
{"prompt": "Explain this bash code:\n\nkubectl label --overwrite pods foo status=unhealthy", "completion": "This bash code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10445"}
{"prompt": "Explain this text code:\n\nUpdate a pod identified by the type and name in \"pod.json\"", "completion": "This text code label | Update a pod identified by the type and name in \"pod.json\"", "language": "text", "type": "code-example_explanation", "context": "label | Update a pod identified by the type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10446"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' only if the resource is unchanged from version 1", "completion": "This text code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "text", "type": "code-example_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10447"}
{"prompt": "Explain this bash code:\n\nkubectl label pods foo status=unhealthy --resource-version=1", "completion": "This bash code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10448"}
{"prompt": "Explain this text code:\n\nUpdate pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag", "completion": "This text code label | Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag", "language": "text", "type": "code-example_explanation", "context": "label | Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10449"}
{"prompt": "Explain this text code:\n\nPartially update a node using a strategic merge patch, specifying the patch as JSON", "completion": "This text code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "text", "type": "code-example_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10450"}
{"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10451"}
{"prompt": "Explain this text code:\n\nPartially update a node using a strategic merge patch, specifying the patch as YAML", "completion": "This text code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "text", "type": "code-example_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10452"}
{"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10453"}
{"prompt": "Explain this text code:\n\nPartially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "completion": "This text code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "text", "type": "code-example_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10454"}
{"prompt": "Explain this bash code:\n\nkubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10455"}
{"prompt": "Explain this text code:\n\nUpdate a container's image; spec.containers[*].name is required because it's a merge key", "completion": "This text code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "text", "type": "code-example_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10456"}
{"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'", "completion": "This bash code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10457"}
{"prompt": "Explain this text code:\n\nUpdate a container's image using a JSON patch with positional arrays", "completion": "This text code patch | Update a container's image using a JSON patch with positional arrays", "language": "text", "type": "code-example_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10458"}
{"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'", "completion": "This bash code patch | Update a container's image using a JSON patch with positional arrays", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10459"}
{"prompt": "Explain this text code:\n\nUpdate a single-container pod's image version (tag) to v4", "completion": "This text code replace | Update a single-container pod's image version (tag) to v4", "language": "text", "type": "code-example_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10460"}
{"prompt": "Explain this bash code:\n\nkubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -", "completion": "This bash code replace | Update a single-container pod's image version (tag) to v4", "language": "bash", "type": "kubectl-command_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10461"}
{"prompt": "Explain this text code:\n\nForce replace, delete and then re-create the resource", "completion": "This text code replace | Force replace, delete and then re-create the resource", "language": "text", "type": "code-example_explanation", "context": "replace | Force replace, delete and then re-create the resource", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10462"} {"prompt": "Explain this text code:\n\nMark the nginx deployment as paused # Any current state of the deployment will continue its function; new updates # to the deployment will not have an effect as long as the deployment is paused", "completion": "This text code rollout | Mark the nginx deployment as paused # Any current state of the deployment will 
continue its function; new updates # to the deployment will not have an effect as long as the deployment is paused", "language": "text", "type": "code-example_explanation", "context": "rollout | Mark the nginx deployment as paused # Any current state of the deployment will continue its function; new updates # to the deployment will not have an effect as long as the deployment is paused", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10463"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo --dry-run=server deployment/abc", "completion": "This bash code rollout | Roll back to a previous rollout.", "language": "bash", "type": "kubectl-command_explanation", "context": "rollout | Roll back to a previous rollout.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10464"} {"prompt": "Explain this text code:\n\nScale a resource identified by type and name specified in \"foo.yaml\" to 3", "completion": "This text code scale | Scale a resource identified by type and name specified in \"foo.yaml\" to 3", "language": "text", "type": "code-example_explanation", "context": "scale | Scale a resource identified by type and name specified in \"foo.yaml\" to 3", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10465"} {"prompt": "Explain this text code:\n\nIf the deployment named mysql's current size is 2, scale mysql to 3", "completion": "This text code scale | If the deployment named mysql's current size is 2, scale mysql to 3", "language": "text", "type": "code-example_explanation", "context": "scale | If the deployment named mysql's current size is 2, scale mysql to 3", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10466"} {"prompt": "Explain this bash code:\n\nkubectl scale --current-replicas=2 
--replicas=3 deployment/mysql", "completion": "This bash code scale | Scale multiple replication controllers", "language": "bash", "type": "kubectl-command_explanation", "context": "scale | Scale multiple replication controllers", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10467"} {"prompt": "Explain this text code:\n\nUpdate deployment 'registry' with a new environment variable", "completion": "This text code set | Update deployment 'registry' with a new environment variable", "language": "text", "type": "code-example_explanation", "context": "set | Update deployment 'registry' with a new environment variable", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10468"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/registry STORAGE_DIR=/local", "completion": "This bash code set | List the environment variables defined on a deployments 'sample-build'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | List the environment variables defined on a deployments 'sample-build'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10469"} {"prompt": "Explain this text code:\n\nList the environment variables defined on a deployments 'sample-build'", "completion": "This text code set | List the environment variables defined on a deployments 'sample-build'", "language": "text", "type": "code-example_explanation", "context": "set | List the environment variables defined on a deployments 'sample-build'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10470"} {"prompt": "Explain this text code:\n\nOutput modified deployment in YAML, and does not alter the object on the server", "completion": "This text code set | Output modified deployment in YAML, and does not alter the 
object on the server", "language": "text", "type": "code-example_explanation", "context": "set | Output modified deployment in YAML, and does not alter the object on the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10471"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml", "completion": "This bash code set | Update all containers in all replication controllers in the project to have ENV=prod", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all containers in all replication controllers in the project to have ENV=prod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10472"} {"prompt": "Explain this text code:\n\nUpdate all containers in all replication controllers in the project to have ENV=prod", "completion": "This text code set | Update all containers in all replication controllers in the project to have ENV=prod", "language": "text", "type": "code-example_explanation", "context": "set | Update all containers in all replication controllers in the project to have ENV=prod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10473"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=secret/mysecret deployment/myapp", "completion": "This bash code set | Import environment from a config map with a prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a config map with a prefix", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10474"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp", "completion": "This bash code set | Import specific keys from a config map", "language": 
"bash", "type": "kubectl-command_explanation", "context": "set | Import specific keys from a config map", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10475"} {"prompt": "Explain this bash code:\n\nkubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp", "completion": "This bash code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10476"} {"prompt": "Explain this text code:\n\nRemove the environment variable ENV from container 'c1' in all deployment configs", "completion": "This text code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "text", "type": "code-example_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10477"} {"prompt": "Explain this bash code:\n\nkubectl set env deployments --all --containers=\"c1\" ENV-", "completion": "This bash code set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10478"} {"prompt": "Explain this text code:\n\nRemove the environment variable ENV from a deployment definition on disk and # update the 
deployment config on the server", "completion": "This text code set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "language": "text", "type": "code-example_explanation", "context": "set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10479"} {"prompt": "Explain this text code:\n\nSet some of the local shell environment into a deployment config on the server", "completion": "This text code set | Set some of the local shell environment into a deployment config on the server", "language": "text", "type": "code-example_explanation", "context": "set | Set some of the local shell environment into a deployment config on the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10480"} {"prompt": "Explain this text code:\n\nenv | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code set | Update environment variables on a pod template.", "language": "text", "type": "code-example_explanation", "context": "set | Update environment variables on a pod template.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10481"} {"prompt": "Explain this text code:\n\nSet a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "completion": "This text code set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "language": "text", "type": "code-example_explanation", "context": "set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10482"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1", "completion": "This bash code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10483"} {"prompt": "Explain this text code:\n\nUpdate all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "completion": "This text code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "text", "type": "code-example_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10484"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments,rc nginx=nginx:1.9.1 --all", "completion": "This bash code set | Update image of all containers of daemonset abc to 'nginx:1.9.1'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update image of all containers of daemonset abc to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10485"} {"prompt": "Explain this text code:\n\nUpdate image of all containers of daemonset abc to 'nginx:1.9.1'", "completion": "This text code set | Update image of all containers of daemonset abc to 'nginx:1.9.1'", "language": "text", "type": "code-example_explanation", "context": "set | Update image of all containers of daemonset abc to 'nginx:1.9.1'", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10486"} {"prompt": "Explain this text code:\n\nPrint result (in yaml format) of updating nginx container image from local file, without hitting the server", "completion": "This text code set | Print result (in yaml format) of updating nginx container image from local file, without hitting the server", "language": "text", "type": "code-example_explanation", "context": "set | Print result (in yaml format) of updating nginx container image from local file, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10487"} {"prompt": "Explain this bash code:\n\nkubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "completion": "This bash code set | Update existing container image(s) of resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update existing container image(s) of resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10488"} {"prompt": "Explain this text code:\n\nSet a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "completion": "This text code set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "language": "text", "type": "code-example_explanation", "context": "set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10489"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code set | Set the resource request and limits for all containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the 
resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10490"} {"prompt": "Explain this text code:\n\nSet the resource request and limits for all containers in nginx", "completion": "This text code set | Set the resource request and limits for all containers in nginx", "language": "text", "type": "code-example_explanation", "context": "set | Set the resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10491"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": "This bash code set | Remove the resource requests for resources on containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the resource requests for resources on containers in nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10492"} {"prompt": "Explain this text code:\n\nRemove the resource requests for resources on containers in nginx", "completion": "This text code set | Remove the resource requests for resources on containers in nginx", "language": "text", "type": "code-example_explanation", "context": "set | Remove the resource requests for resources on containers in nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10493"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0", "completion": "This bash code set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | 
Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10494"} {"prompt": "Explain this text code:\n\nPrint the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "completion": "This text code set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "language": "text", "type": "code-example_explanation", "context": "set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10495"} {"prompt": "Explain this bash code:\n\nkubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This bash code set | Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. 
If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10496"} {"prompt": "Explain this text code:\n\nSet the labels and selector before creating a deployment/service pair", "completion": "This text code set | Set the labels and selector before creating a deployment/service pair", "language": "text", "type": "code-example_explanation", "context": "set | Set the labels and selector before creating a deployment/service pair", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10497"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\nkubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This bash code set | Set the selector on a resource. Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the selector on a resource. 
Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10498"} {"prompt": "Explain this text code:\n\nSet deployment nginx-deployment's service account to serviceaccount1", "completion": "This text code set | Set deployment nginx-deployment's service account to serviceaccount1", "language": "text", "type": "code-example_explanation", "context": "set | Set deployment nginx-deployment's service account to serviceaccount1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10499"} {"prompt": "Explain this bash code:\n\nkubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": "This bash code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10500"} {"prompt": "Explain this text code:\n\nPrint the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "completion": "This text code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "text", "type": "code-example_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10501"} {"prompt": "Explain this bash code:\n\nkubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This bash code set | Update the service account of pod template resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update the service account of pod template resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10502"} {"prompt": "Explain this bash code:\n\nkubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1", "completion": "This bash code set | Update a role binding for user1, user2, and group1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a role binding for user1, user2, and group1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10503"} {"prompt": "Explain this bash code:\n\nkubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1", "completion": "This bash code set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10504"} {"prompt": "Explain this text code:\n\nPrint the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "completion": "This text code set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "language": "text", "type": "code-example_explanation", "context": 
"set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10505"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This bash code set | Update the user, group, or service account in a role binding or cluster role binding.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update the user, group, or service account in a role binding or cluster role binding.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10506"} {"prompt": "Explain this text code:\n\nWait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "completion": "This text code wait | Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "language": "text", "type": "code-example_explanation", "context": "wait | Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10507"} {"prompt": "Explain this text code:\n\nThe default value of status condition is true; you can set it to false", "completion": "This text code wait | The default value of status condition is true; you can set it to false", "language": "text", "type": "code-example_explanation", "context": "wait | The default value of status condition is true; you can set it to false", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10508"} {"prompt": "Explain this bash code:\n\nkubectl wait --for=condition=Ready=false pod/busybox1", "completion": "This bash code wait | Wait for the pod 
\"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10509"} {"prompt": "Explain this text code:\n\nWait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "completion": "This text code wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "text", "type": "code-example_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10510"} {"prompt": "Explain this bash code:\n\nkubectl delete pod/busybox1\nkubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This bash code wait | Experimental: Wait for a specific condition on one or many resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Experimental: Wait for a specific condition on one or many resources.", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10511"} {"prompt": "Explain this text code:\n\nGet output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen", "completion": "This text code attach | Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen", 
"language": "text", "type": "code-example_explanation", "context": "attach | Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10512"} {"prompt": "Explain this text code:\n\nSwitch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "completion": "This text code attach | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "language": "text", "type": "code-example_explanation", "context": "attach | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10513"} {"prompt": "Explain this text code:\n\nGet output from the first pod of a replica set named nginx", "completion": "This text code attach | Get output from the first pod of a replica set named nginx", "language": "text", "type": "code-example_explanation", "context": "attach | Get output from the first pod of a replica set named nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10514"} {"prompt": "Explain this text code:\n\nCheck to see if I can list deployments in my current namespace", "completion": "This text code auth | Check to see if I can list deployments in my current namespace", "language": "text", "type": "code-example_explanation", "context": "auth | Check to see if I can list deployments in my current namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10515"} {"prompt": "Explain this text code:\n\nCheck to see if I can do everything in my current namespace (\"*\" means all)", "completion": "This text code auth | Check to see if I can do everything in my current namespace (\"*\" means all)", "language": "text", "type": "code-example_explanation", "context": "auth | Check to see if I can do everything in my current namespace (\"*\" means all)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10516"} {"prompt": "Explain this text code:\n\nCheck to see if I can get the job named \"bar\" in namespace \"foo\"", "completion": "This text code auth | Check to see if I can get the job named \"bar\" in namespace \"foo\"", "language": "text", "type": "code-example_explanation", "context": "auth | Check to see if I can get the job named \"bar\" in namespace \"foo\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10517"} {"prompt": "Explain this text code:\n\n!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'kubectl cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'kubectl exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "completion": "This text code cp | !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'kubectl cp' will fail. # # For advanced use cases, such as symlinks, wildcard exp", "language": "text", "type": "code-example_explanation", "context": "cp | !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'kubectl cp' will fail. 
# # For advanced use cases, such as symlinks, wildcard exp", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10518"} {"prompt": "Explain this text code:\n\ntar cf - /tmp/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10519"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This text code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10520"} {"prompt": "Explain this bash code:\n\nkubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10521"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace", "completion": "This text code cp | Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo_dir local
directory to /tmp/bar_dir in a remote pod in the default namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10522"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10523"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10524"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10525"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source":
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10526"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This text code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10527"} {"prompt": "Explain this bash code:\n\nkubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10528"} {"prompt": "Explain this bash code:\n\nkubectl describe nodes kubernetes-node-emt8.c.myproject.internal", "completion": "This bash code describe | Describe a node", "language": "bash", "type": "kubectl-command_explanation", "context": "describe | Describe a node", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10529"} {"prompt": "Explain this text code:\n\nDescribe a pod identified by type and name in \"pod.json\"", "completion": "This text code describe | Describe a pod identified by type and name in \"pod.json\"", "language": "text", "type": "code-example_explanation", "context": "describe | Describe a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10530"} {"prompt": "Explain this text code:\n\nDescribe all pods managed by the 'frontend' replication controller (rc-created pods # get the name of the rc as a prefix in
the pod the name)", "completion": "This text code describe | Describe all pods managed by the 'frontend' replication controller (rc-created pods # get the name of the rc as a prefix in the pod the name)", "language": "text", "type": "code-example_explanation", "context": "describe | Describe all pods managed by the 'frontend' replication controller (rc-created pods # get the name of the rc as a prefix in the pod the name)", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10531"} {"prompt": "Explain this text code:\n\nGet output from running the 'date' command from pod mypod, using the first container by default", "completion": "This text code exec | Get output from running the 'date' command from pod mypod, using the first container by default", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running the 'date' command from pod mypod, using the first container by default", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10532"} {"prompt": "Explain this text code:\n\nGet output from running the 'date' command in ruby-container from pod mypod", "completion": "This text code exec | Get output from running the 'date' command in ruby-container from pod mypod", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running the 'date' command in ruby-container from pod mypod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10533"} {"prompt": "Explain this text code:\n\nSwitch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "completion": "This text code exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", 
"language": "text", "type": "code-example_explanation", "context": "exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10534"} {"prompt": "Explain this bash code:\n\nkubectl exec mypod -c ruby-container -i -t -- bash -il", "completion": "This bash code exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "language": "bash", "type": "kubectl-command_explanation", "context": "exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10535"} {"prompt": "Explain this text code:\n\nList contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\")", "completion": "This text code exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g.
-i), # you must use two dashes (", "language": "text", "type": "code-example_explanation", "context": "exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10536"} {"prompt": "Explain this text code:\n\nGet output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default", "completion": "This text code exec | Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10537"} {"prompt": "Explain this text code:\n\nGet output from running 'date' command from the first pod of the service myservice, using the first container by default", "completion": "This text code exec | Get output from running 'date' command from the first pod of the service myservice, using the first container by default", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running 'date' command from the first pod of the service myservice, using the first container by default", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10538"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from pod nginx with only one container", "completion": "This text code logs | Return snapshot logs from pod nginx with only one container", "language": "text", "type": "code-example_explanation", 
"context": "logs | Return snapshot logs from pod nginx with only one container", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10539"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from pod nginx with multi containers", "completion": "This text code logs | Return snapshot logs from pod nginx with multi containers", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from pod nginx with multi containers", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10540"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from all containers in pods defined by label app=nginx", "completion": "This text code logs | Return snapshot logs from all containers in pods defined by label app=nginx", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from all containers in pods defined by label app=nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10541"} {"prompt": "Explain this text code:\n\nReturn snapshot of previous terminated ruby container logs from pod web-1", "completion": "This text code logs | Return snapshot of previous terminated ruby container logs from pod web-1", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot of previous terminated ruby container logs from pod web-1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10542"} {"prompt": "Explain this text code:\n\nBegin streaming the logs of the ruby container in pod web-1", "completion": "This text code logs | Begin streaming the logs of the ruby container in pod web-1", "language": "text", "type": "code-example_explanation", "context": "logs | Begin streaming the logs of the ruby container in pod 
web-1", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10543"} {"prompt": "Explain this text code:\n\nBegin streaming the logs from all containers in pods defined by label app=nginx", "completion": "This text code logs | Begin streaming the logs from all containers in pods defined by label app=nginx", "language": "text", "type": "code-example_explanation", "context": "logs | Begin streaming the logs from all containers in pods defined by label app=nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10544"} {"prompt": "Explain this text code:\n\nDisplay only the most recent 20 lines of output in pod nginx", "completion": "This text code logs | Display only the most recent 20 lines of output in pod nginx", "language": "text", "type": "code-example_explanation", "context": "logs | Display only the most recent 20 lines of output in pod nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10545"} {"prompt": "Explain this text code:\n\nShow all logs from pod nginx written in the last hour", "completion": "This text code logs | Show all logs from pod nginx written in the last hour", "language": "text", "type": "code-example_explanation", "context": "logs | Show all logs from pod nginx written in the last hour", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10546"} {"prompt": "Explain this text code:\n\nShow logs from a kubelet with an expired serving certificate", "completion": "This text code logs | Show logs from a kubelet with an expired serving certificate", "language": "text", "type": "code-example_explanation", "context": "logs | Show logs from a kubelet with an expired serving certificate", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10547"} {"prompt": "Explain this bash code:\n\nkubectl logs --insecure-skip-tls-verify-backend nginx", "completion": "This bash code logs | Show logs from a kubelet with an expired serving certificate", "language": "bash", "type": "kubectl-command_explanation", "context": "logs | Show logs from a kubelet with an expired serving certificate", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10548"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from first container of a job named hello", "completion": "This text code logs | Return snapshot logs from first container of a job named hello", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from first container of a job named hello", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10549"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from container nginx-1 of a deployment named nginx", "completion": "This text code logs | Return snapshot logs from container nginx-1 of a deployment named nginx", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from container nginx-1 of a deployment named nginx", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10550"} {"prompt": "Explain this text code:\n\nListen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod", "completion": "This text code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands",
"id": "k8s_00389_code_explain_10551"} {"prompt": "Explain this text code:\n\nListen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "completion": "This text code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10552"} {"prompt": "Explain this bash code:\n\nkubectl port-forward deployment/mydeployment 5000 6000", "completion": "This bash code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10553"} {"prompt": "Explain this text code:\n\nListen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "completion": "This text code port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10554"} {"prompt":
"Explain this text code:\n\nListen on port 8888 locally, forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on port 8888 locally, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8888 locally, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10555"} {"prompt": "Explain this text code:\n\nListen on port 8888 on all addresses, forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10556"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10557"} {"prompt": "Explain this text code:\n\nListen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id":
"k8s_00389_code_explain_10558"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10559"} {"prompt": "Explain this text code:\n\nListen on a random port locally, forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10560"} {"prompt": "Explain this text code:\n\nTo proxy all of the Kubernetes API and nothing else", "completion": "This text code proxy | To proxy all of the Kubernetes API and nothing else", "language": "text", "type": "code-example_explanation", "context": "proxy | To proxy all of the Kubernetes API and nothing else", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10561"} {"prompt": "Explain this text code:\n\nTo proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "completion": "This text code proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl
localhost:8001/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10562"} {"prompt": "Explain this bash code:\n\nkubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/", "completion": "This bash code proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "language": "bash", "type": "kubectl-command_explanation", "context": "proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10563"} {"prompt": "Explain this text code:\n\nTo proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "completion": "This text code proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "language": "text", "type": "code-example_explanation", "context": "proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10564"} {"prompt": "Explain this text code:\n\nRun a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/", "completion": "This text code proxy | Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/", "language": "text", "type": "code-example_explanation", "context": "proxy | Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10565"} {"prompt":
"Explain this text code:\n\nRun a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout", "completion": "This text code proxy | Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout", "language": "text", "type": "code-example_explanation", "context": "proxy | Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10566"} {"prompt": "Explain this text code:\n\nRun a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/", "completion": "This text code proxy | Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/", "language": "text", "type": "code-example_explanation", "context": "proxy | Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. 
the pods API available at localhost:8001/k8s-api/v1/pods/", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10567"} {"prompt": "Explain this text code:\n\nShow metrics for the pods defined by label name=myLabel", "completion": "This text code top | Show metrics for the pods defined by label name=myLabel", "language": "text", "type": "code-example_explanation", "context": "top | Show metrics for the pods defined by label name=myLabel", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10568"} {"prompt": "Explain this text code:\n\nPrint the address of the control plane and cluster services", "completion": "This text code cluster-info | Print the address of the control plane and cluster services", "language": "text", "type": "code-example_explanation", "context": "cluster-info | Print the address of the control plane and cluster services", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10569"} {"prompt": "Explain this text code:\n\nDump current cluster state to /path/to/cluster-state", "completion": "This text code cluster-info | Dump current cluster state to /path/to/cluster-state", "language": "text", "type": "code-example_explanation", "context": "cluster-info | Dump current cluster state to /path/to/cluster-state", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10570"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump current cluster state to /path/to/cluster-state", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump current cluster state to /path/to/cluster-state", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10571"} {"prompt":
"Explain this bash code:\n\nkubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump a set of namespaces to /path/to/cluster-state", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump a set of namespaces to /path/to/cluster-state", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10572"} {"prompt": "Explain this text code:\n\nDrain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it", "completion": "This text code drain | Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it", "language": "text", "type": "code-example_explanation", "context": "drain | Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10573"} {"prompt": "Explain this text code:\n\nAs above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes", "completion": "This text code drain | As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes", "language": "text", "type": "code-example_explanation", "context": "drain | As above, but abort if there are pods not
managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10574"} {"prompt": "Explain this text code:\n\nUpdate node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified", "completion": "This text code taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified", "language": "text", "type": "code-example_explanation", "context": "taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10575"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10576"} {"prompt": "Explain this text code:\n\nRemove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "completion": "This text code taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "language": "text", "type": "code-example_explanation", "context": "taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10577"} {"prompt": "Explain this text code:\n\nRemove from node 'foo' all the taints with key 'dedicated'", "completion": "This text code taint | Remove from node 'foo' all the taints with key 'dedicated'", "language": "text", "type": "code-example_explanation", "context": "taint | Remove from node 'foo' all the taints with key 'dedicated'", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10578"} {"prompt": "Explain this text code:\n\nAdd a taint with key 'dedicated' on nodes having label mylabel=X", "completion": "This text code taint | Add a taint with key 'dedicated' on nodes having label mylabel=X", "language": "text", "type": "code-example_explanation", "context": "taint | Add a taint with key 'dedicated' on nodes having label mylabel=X", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10579"} {"prompt": "Explain this bash code:\n\nkubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule", "completion": "This bash code taint | Add a taint with key 'dedicated' on nodes having label mylabel=X", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Add a taint with key 'dedicated' on nodes having label mylabel=X", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10580"} {"prompt": "Explain this text code:\n\nAdd to node 'foo' a taint with key 'bar' and no value", "completion": "This text code taint | Add to node 'foo' a taint with key 'bar' and no value", "language": "text", "type": "code-example_explanation", "context": "taint | Add to node 'foo' a taint with key 'bar' and no value", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10581"} {"prompt": "Explain this text code:\n\nPrint the supported API resources with more information", "completion": "This text code api-resources | Print the supported API resources with more information", "language": "text", "type": "code-example_explanation", "context": "api-resources | Print the supported API resources with more information", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10582"} {"prompt": "Explain this text code:\n\nPrint the supported API resources sorted by a column", "completion": "This text code api-resources | Print the supported API resources sorted by a column", "language": "text", "type": "code-example_explanation", "context": "api-resources | Print the supported API resources sorted by a column", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10583"} {"prompt": "Explain this text code:\n\nPrint the supported API resources with a specific APIGroup", "completion": "This text code api-resources | Print the supported API resources with a specific APIGroup", "language": "text", "type": "code-example_explanation", "context": "api-resources | Print the supported API resources with a specific APIGroup", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10584"} {"prompt": "Explain this text code:\n\nInstalling bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS", "completion": "This text code completion | Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS", "language": "text", "type": "code-example_explanation", "context": "completion | Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": 
"k8s_00389_code_explain_10585"} {"prompt": "Explain this text code:\n\nIf kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "completion": "This text code completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "language": "text", "type": "code-example_explanation", "context": "completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10586"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl", "completion": "This bash code completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10587"} {"prompt": "Explain this text code:\n\nInstalling bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion code for bash into the current shell", "completion": "This text code completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion co", "language": "text", "type": "code-example_explanation", "context": "completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion co", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10588"} {"prompt": "Explain this text code:\n\nWrite bash completion code to a file and source it from .bash_profile", "completion": "This text code completion | Write bash completion code to a file and source it from .bash_profile", "language": "text", "type": "code-example_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10589"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > ~/.kube/completion.bash.inc\nprintf \"", "completion": "This bash code completion | Write bash completion code to a file and source it from .bash_profile", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10590"} {"prompt": "Explain this text code:\n\nsource '$HOME/.kube/completion.bash.inc'\n\" >> $HOME/.bash_profile\nsource $HOME/.bash_profile", "completion": "This text code completion | Write bash completion code to a file and source it from .bash_profile", "language": "text", "type": "code-example_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10591"} {"prompt": "Explain this text code:\n\nLoad the kubectl completion code for zsh[1] into the current shell", "completion": "This text code completion | Load the kubectl completion code for zsh[1] into the current shell", "language": "text", "type": "code-example_explanation", "context": "completion | Load the kubectl completion code for zsh[1] into the current shell", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10592"} {"prompt": "Explain this text code:\n\nSet the kubectl completion code for zsh[1] to autoload on startup", "completion": "This text code completion | Set the kubectl completion code for zsh[1] to autoload on startup", "language": "text", "type": "code-example_explanation", "context": "completion | Set the kubectl completion code for zsh[1] to autoload on startup", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10593"} {"prompt": "Explain this text code:\n\nRename the context 'old-name' to 'new-name' in your kubeconfig file", "completion": "This text code config | Rename the context 'old-name' to 'new-name' in your kubeconfig file", "language": "text", "type": "code-example_explanation", "context": "config | Rename the context 'old-name' to 'new-name' in your kubeconfig file", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10594"} {"prompt": "Explain this text code:\n\nSet the server field on the my-cluster cluster to https://1.2.3.4", "completion": "This text code config | Set the server field on the my-cluster cluster to https://1.2.3.4", "language": "text", "type": "code-example_explanation", "context": "config | Set the server field on the my-cluster cluster to https://1.2.3.4", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10595"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.server https://1.2.3.4", "completion": "This bash code config | Set the server field on the my-cluster cluster to https://1.2.3.4", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the server field on the my-cluster cluster to https://1.2.3.4", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10596"} {"prompt": "Explain this text code:\n\nSet the certificate-authority-data field on the my-cluster cluster", "completion": "This text code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "text", "type": "code-example_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10597"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)", "completion": "This bash code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10598"} {"prompt": "Explain this text code:\n\nSet the cluster field in the my-context context to my-cluster", "completion": "This text code config | Set the cluster field in the my-context context to my-cluster", "language": "text", "type": "code-example_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10599"} {"prompt": "Explain this bash code:\n\nkubectl config set contexts.my-context.cluster my-cluster", "completion": "This bash code config | Set the cluster field in the my-context context to my-cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10600"} {"prompt": "Explain this text code:\n\nSet the client-key-data field in the cluster-admin user using --set-raw-bytes option", "completion": "This text code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "text", "type": "code-example_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10601"} {"prompt": "Explain this bash code:\n\nkubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "completion": "This bash code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10602"} {"prompt": "Explain this text code:\n\nSet only the server field on the e2e cluster entry without touching other values", "completion": "This text code config | Set only the server field on the e2e cluster entry without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set only the server field on the e2e 
cluster entry without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10603"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --server=https://1.2.3.4", "completion": "This bash code config | Set only the server field on the e2e cluster entry without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set only the server field on the e2e cluster entry without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10604"} {"prompt": "Explain this text code:\n\nEmbed certificate authority data for the e2e cluster entry", "completion": "This text code config | Embed certificate authority data for the e2e cluster entry", "language": "text", "type": "code-example_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10605"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt", "completion": "This bash code config | Embed certificate authority data for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10606"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --insecure-skip-tls-verify=true", "completion": "This bash code config | Disable cert checking for the dev cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Disable cert checking for the dev cluster entry", "source": 
"https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10607"} {"prompt": "Explain this text code:\n\nSet custom TLS server name to use for validation for the e2e cluster entry", "completion": "This text code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "text", "type": "code-example_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10608"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --tls-server-name=my-cluster-name", "completion": "This bash code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10609"} {"prompt": "Explain this text code:\n\nSet the user field on the gce context entry without touching other values", "completion": "This text code config | Set the user field on the gce context entry without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set the user field on the gce context entry without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10610"} {"prompt": "Explain this bash code:\n\nkubectl config set-context gce --user=cluster-admin", "completion": "This bash code config | Set the user field on the gce context entry without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the user field on the gce context entry without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10611"} {"prompt": "Explain this text 
code:\n\nSet only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "completion": "This text code config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10612"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key", "completion": "This bash code config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10613"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif", "completion": "This bash code config | Set basic auth for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set basic auth for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10614"} {"prompt": "Explain this text code:\n\nEmbed client certificate data in the \"cluster-admin\" entry", "completion": "This text code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10615"} {"prompt": "Explain this bash 
code:\n\nkubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true", "completion": "This bash code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10616"} {"prompt": "Explain this text code:\n\nEnable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "completion": "This text code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10617"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=gcp", "completion": "This bash code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10618"} {"prompt": "Explain this text code:\n\nEnable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "completion": "This text code config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "language": "text", "type": "code-example_explanation", "context": "config | Enable the OpenID Connect auth provider for the \"cluster-admin\" 
entry with additional args", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10619"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar", "completion": "This bash code config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10620"} {"prompt": "Explain this text code:\n\nRemove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "completion": "This text code config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10621"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-", "completion": "This bash code config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10622"} {"prompt": "Explain this text code:\n\nEnable new exec auth plugin for the 
\"cluster-admin\" entry", "completion": "This text code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10623"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1", "completion": "This bash code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10624"} {"prompt": "Explain this text code:\n\nDefine new exec auth plugin args for the \"cluster-admin\" entry", "completion": "This text code config | Define new exec auth plugin args for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Define new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10625"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2", "completion": "This bash code config | Define new exec auth plugin args for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Define new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10626"} {"prompt": "Explain this text 
code:\n\nCreate or update exec auth plugin environment variables for the \"cluster-admin\" entry", "completion": "This text code config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10627"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2", "completion": "This bash code config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10628"} {"prompt": "Explain this text code:\n\nRemove exec auth plugin environment variables for the \"cluster-admin\" entry", "completion": "This text code config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10629"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=var-to-remove-", "completion": "This bash code config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10630"} {"prompt": 
"Explain this text code:\n\nShow merged kubeconfig settings and raw certificate data", "completion": "This text code config | Show merged kubeconfig settings and raw certificate data", "language": "text", "type": "code-example_explanation", "context": "config | Show merged kubeconfig settings and raw certificate data", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10631"} {"prompt": "Explain this bash code:\n\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This bash code config | Get the password for the e2e user", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Get the password for the e2e user", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10632"} {"prompt": "Explain this text code:\n\nGet the documentation of the resource and its fields", "completion": "This text code explain | Get the documentation of the resource and its fields", "language": "text", "type": "code-example_explanation", "context": "explain | Get the documentation of the resource and its fields", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10633"} {"prompt": "Explain this text code:\n\nGet the documentation of a specific field of a resource", "completion": "This text code explain | Get the documentation of a specific field of a resource", "language": "text", "type": "code-example_explanation", "context": "explain | Get the documentation of a specific field of a resource", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10634"} {"prompt": "Explain this text code:\n\nPrint the client and server versions for the current context", "completion": "This text code version | Print the 
client and server versions for the current context", "language": "text", "type": "code-example_explanation", "context": "version | Print the client and server versions for the current context", "source": "https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands", "id": "k8s_00389_code_explain_10635"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1", "completion": " minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_completion_10636"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1\n minReadySeconds: 10 # by default is 0\n template:\n metadata:\n 
labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSets | In the above example:", "language": "yaml", "type": "service-definition_explanation", "context": "StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_explain_10637"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:", "completion": " name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... other spec fields ...", "language": "yaml", "type": "kubernetes-manifest", "context": "StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_completion_10638"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... 
other spec fields ...", "completion": "This yaml code StatefulSets | You can revert to a previous configuration using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_explain_10639"} {"prompt": "Complete this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "text", "type": "code-example", "context": "StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_completion_10640"} {"prompt": "Explain this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This text code StatefulSets | This will:", "language": "text", "type": "code-example_explanation", "context": "StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_explain_10641"} {"prompt": "Complete this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "text", "type": "code-example", "context": "StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_completion_10642"} {"prompt": "Explain this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific 
revision\nkubectl get controllerrevision/webapp-3 -o yaml", "completion": "This text code StatefulSets | Regularly check revisions with:", "language": "text", "type": "code-example_explanation", "context": "StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_explain_10643"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:", "completion": " persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_completion_10644"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:\n persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "completion": "This yaml code StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. 
This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_explain_10645"} {"prompt": "Complete this bash code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "bash", "type": "shell-command", "context": "StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_completion_10650"} {"prompt": "Explain this bash code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This bash code StatefulSets | This will:", "language": "bash", "type": "shell-command_explanation", "context": "StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_explain_10651"} {"prompt": "Complete this bash code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "bash", "type": "shell-command", "context": "StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_completion_10652"} {"prompt": "Explain this bash code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific
revision\nkubectl get controllerrevision/webapp-3 -o yaml", "completion": "This bash code StatefulSets | Regularly check revisions with:", "language": "bash", "type": "shell-command_explanation", "context": "StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/", "id": "k8s_00390_code_explain_10653"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envar-demo\n labels:\n purpose: demonstrate-envars\nspec:", "completion": " containers:\n - name: envar-demo-container\n image: gcr.io/google-samples/hello-app:2.0\n env:\n - name: DEMO_GREETING\n value: \"Hello from the environment\"\n - name: DEMO_FAREWELL\n value: \"Such a sweet sorrow\"", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variables for a Container | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_completion_10666"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envar-demo\n labels:\n purpose: demonstrate-envars\nspec:\n containers:\n - name: envar-demo-container\n image: gcr.io/google-samples/hello-app:2.0\n env:\n - name: DEMO_GREETING\n value: \"Hello from the environment\"\n - name: DEMO_FAREWELL\n value: \"Such a sweet sorrow\"", "completion": "This yaml code Define Environment Variables for a Container | Create a Pod based on that manifest:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variables for a Container | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10667"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml", "completion": "This bash code Define Environment Variables for a Container | List the running Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Environment Variables for a Container | List the running Pods:",
"source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10668"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nenvar-demo 1/1 Running 0 9s", "completion": "This text code Define Environment Variables for a Container | List the Pod's container environment variables:", "language": "text", "type": "code-example_explanation", "context": "Define Environment Variables for a Container | List the Pod's container environment variables:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10669"} {"prompt": "Complete this text code:\n\nNODE_VERSION=4.4.2\nEXAMPLE_SERVICE_PORT_8080_TCP_ADDR=10.3.245.237\nHOSTNAME=envar-demo", "completion": "...\nDEMO_GREETING=Hello from the environment\nDEMO_FAREWELL=Such a sweet sorrow", "language": "text", "type": "code-example", "context": "Define Environment Variables for a Container | Environment variables that you define in a Pod's configuration under .spec.containers[*].env[*] can be used elsewhere in the configuration, for example in commands and arguments that you set for the P", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_completion_10670"} {"prompt": "Explain this text code:\n\nNODE_VERSION=4.4.2\nEXAMPLE_SERVICE_PORT_8080_TCP_ADDR=10.3.245.237\nHOSTNAME=envar-demo\n...\nDEMO_GREETING=Hello from the environment\nDEMO_FAREWELL=Such a sweet sorrow", "completion": "This text code Define Environment Variables for a Container | Environment variables that you define in a Pod's configuration under .spec.containers[*].env[*] can be used elsewhere in the configuration, for example in commands and arguments that you set for the P", "language": "text", "type": "code-example_explanation", "context": "Define Environment Variables for a Container | Environment 
variables that you define in a Pod's configuration under .spec.containers[*].env[*] can be used elsewhere in the configuration, for example in commands and arguments that you set for the P", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10671"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: print-greeting\nspec:\n containers:\n - name: env-print-demo\n image: bash\n env:", "completion": " - name: GREETING\n value: \"Warm greetings to\"\n - name: HONORIFIC\n value: \"The Most Honorable\"\n - name: NAME\n value: \"Kubernetes\"\n - name: MESSAGE\n value: \"$(GREETING) $(HONORIFIC) $(NAME)\"\n command: [\"echo\"]\n args: [\"$(MESSAGE)\"]", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variables for a Container | Upon creation, the command echo Warm greetings to The Most Honorable Kubernetes is run on the container.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_completion_10672"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: print-greeting\nspec:\n containers:\n - name: env-print-demo\n image: bash\n env:\n - name: GREETING\n value: \"Warm greetings to\"\n - name: HONORIFIC\n value: \"The Most Honorable\"\n - name: NAME\n value: \"Kubernetes\"\n - name: MESSAGE\n value: \"$(GREETING) $(HONORIFIC) $(NAME)\"\n command: [\"echo\"]\n args: [\"$(MESSAGE)\"]", "completion": "This yaml code Define Environment Variables for a Container | Upon creation, the command echo Warm greetings to The Most Honorable Kubernetes is run on the container.", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variables for a Container | Upon creation, the command echo Warm greetings to The Most Honorable Kubernetes is run on the container.", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10673"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml", "completion": "This shell code Define Environment Variables for a Container | List the running Pods:", "language": "shell", "type": "code-example_explanation", "context": "Define Environment Variables for a Container | List the running Pods:", "source":
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10676"} {"prompt": "Explain this text code:\n\necho Warm greetings to The Most Honorable Kubernetes", "completion": "This text code Define Environment Variables for a Container |", "language": "text", "type": "code-example_explanation", "context": "Define Environment Variables for a Container |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10682"} {"prompt": "Complete this yaml code:\n\npods/inject/envars.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envar-demo\n labels:\n purpose: demonstrate-envars\nspec:", "completion": " containers:\n - name: envar-demo-container\n image: gcr.io/google-samples/hello-app:2.0\n env:\n - name: DEMO_GREETING\n value: \"Hello from the environment\"\n - name: DEMO_FAREWELL\n value: \"Such a sweet sorrow\"", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variables for a Container | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_completion_10683"} {"prompt": "Explain this yaml code:\n\npods/inject/envars.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envar-demo\n labels:\n purpose: demonstrate-envars\nspec:\n containers:\n - name: envar-demo-container\n image: gcr.io/google-samples/hello-app:2.0\n env:\n - name: DEMO_GREETING\n value: \"Hello from the environment\"\n - name: DEMO_FAREWELL\n value: \"Such a sweet sorrow\"", "completion": "This yaml code Define Environment Variables for a Container | Create a Pod based on that manifest:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variables for a Container | Create a Pod based on that manifest:", "source":
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10684"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envar-demo\n labels:\n purpose: demonstrate-envars\nspec:", "completion": " containers:\n - name: envar-demo-container\n image: gcr.io/google-samples/hello-app:2.0\n env:\n - name: DEMO_GREETING\n value: \"Hello from the environment\"\n - name: DEMO_FAREWELL\n value: \"Such a sweet sorrow\"", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variables for a Container | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_completion_10685"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envar-demo\n labels:\n purpose: demonstrate-envars\nspec:\n containers:\n - name: envar-demo-container\n image: gcr.io/google-samples/hello-app:2.0\n env:\n - name: DEMO_GREETING\n value: \"Hello from the environment\"\n - name: DEMO_FAREWELL\n value: \"Such a sweet sorrow\"", "completion": "This yaml code Define Environment Variables for a Container | Create a Pod based on that manifest:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variables for a Container | Create a Pod based on that manifest:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10686"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml", "completion": "This bash code Define Environment Variables for a Container | List the running Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Environment Variables for a Container | List the running Pods:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10687"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: print-greeting\nspec:\n containers:\n - name: env-print-demo\n image: bash\n env:", "completion": " - name: GREETING\n value: \"Warm greetings to\"\n - name: HONORIFIC\n value: \"The Most Honorable\"\n - name: NAME\n value: \"Kubernetes\"\n - name: MESSAGE\n value: \"$(GREETING) $(HONORIFIC) $(NAME)\"\n command: [\"echo\"]\n args: [\"$(MESSAGE)\"]", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variables for a Container | Upon creation, the command echo Warm greetings to The Most Honorable Kubernetes is run on the container.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_completion_10688"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: print-greeting\nspec:\n containers:\n - name: env-print-demo\n image: bash\n env:\n - name: GREETING\n value: \"Warm greetings to\"\n - name: HONORIFIC\n value: \"The Most Honorable\"\n - name: NAME\n value: \"Kubernetes\"\n - name: MESSAGE\n value: \"$(GREETING) $(HONORIFIC) $(NAME)\"\n command: [\"echo\"]\n args: [\"$(MESSAGE)\"]", "completion": "This yaml code Define Environment Variables for a Container | Upon creation, the command echo Warm greetings to The Most Honorable Kubernetes is run on the container.", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variables for a Container | Upon creation, the command echo Warm greetings to The Most Honorable Kubernetes is run on the container.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/", "id": "k8s_00392_code_explain_10689"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector 
status.phase=Running", "completion": "This bash code Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10690"} {"prompt": "Explain this text code:\n\nError from server (BadRequest): Unable to find \"ingresses\" that match label selector \"\", field selector \"foo.bar=baz\": \"foo.bar\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"", "completion": "This text code Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "language": "text", "type": "code-example_explanation", "context": "Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10691"} {"prompt": "Explain this bash code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. 
This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10692"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This bash code Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10693"} {"prompt": "Explain this bash code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code Field Selectors |", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10694"} {"prompt": "Explain this shell code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This shell code Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "shell", "type": "code-example_explanation", "context": "Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. 
For example", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10695"} {"prompt": "Explain this text code:\n\nError from server (BadRequest): Unable to find \"ingresses\" that match label selector \"\", field selector \"foo.bar=baz\": \"foo.bar\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"", "completion": "This text code Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "language": "text", "type": "code-example_explanation", "context": "Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10696"} {"prompt": "Explain this shell code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This shell code Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "shell", "type": "code-example_explanation", "context": "Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10697"} {"prompt": "Explain this shell code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This shell code Field Selectors | You can use field selectors across multiple resource types. 
This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "shell", "type": "code-example_explanation", "context": "Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10698"} {"prompt": "Explain this shell code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This shell code Field Selectors |", "language": "shell", "type": "code-example_explanation", "context": "Field Selectors |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10699"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This bash code Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10700"} {"prompt": "Explain this bash code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. 
This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10701"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This bash code Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors | You can use field selectors across multiple resource types. 
This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10702"} {"prompt": "Explain this bash code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code Field Selectors |", "language": "bash", "type": "kubectl-command_explanation", "context": "Field Selectors |", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/", "id": "k8s_00393_code_explain_10703"} {"prompt": "Complete this text code:\n\n# $CONFIG_DIR is the pod manifest path of your kubelet.\n# $LOG_DIR is the test output path.", "completion": "sudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n registry.k8s.io/node-test:0.2", "language": "text", "type": "code-example", "context": "Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10704"} {"prompt": "Explain this text code:\n\n# $CONFIG_DIR is the pod manifest path of your kubelet.\n# $LOG_DIR is the test output path.\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n registry.k8s.io/node-test:0.2", "completion": "This text code Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "language": "text", "type": "code-example_explanation", "context": "Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10705"} {"prompt": "Complete this text 
code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\", "completion": " -e FOCUS=MirrorPod \\ # Only run MirrorPod test\n registry.k8s.io/node-test:0.2", "language": "text", "type": "code-example", "context": "Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10706"} {"prompt": "Explain this text code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n -e FOCUS=MirrorPod \\ # Only run MirrorPod test\n registry.k8s.io/node-test:0.2", "completion": "This text code Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "language": "text", "type": "code-example_explanation", "context": "Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10707"} {"prompt": "Complete this text code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\", "completion": " -e SKIP=MirrorPod \\ # Run all conformance tests but skip MirrorPod test\n registry.k8s.io/node-test:0.2", "language": "text", "type": "code-example", "context": "Validate node setup | Node conformance test is a containerized version of node e2e test. 
By default, it runs all conformance tests.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10708"} {"prompt": "Explain this text code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n -e SKIP=MirrorPod \\ # Run all conformance tests but skip MirrorPod test\n registry.k8s.io/node-test:0.2", "completion": "This text code Validate node setup | Node conformance test is a containerized version of node e2e test. By default, it runs all conformance tests.", "language": "text", "type": "code-example_explanation", "context": "Validate node setup | Node conformance test is a containerized version of node e2e test. By default, it runs all conformance tests.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10709"} {"prompt": "Complete this shell code:\n\n# $CONFIG_DIR is the pod manifest path of your kubelet.\n# $LOG_DIR is the test output path.", "completion": "sudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n registry.k8s.io/node-test:0.2", "language": "shell", "type": "code-example", "context": "Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10710"} {"prompt": "Explain this shell code:\n\n# $CONFIG_DIR is the pod manifest path of your kubelet.\n# $LOG_DIR is the test output path.\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n registry.k8s.io/node-test:0.2", "completion": "This shell code Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "language": "shell", "type": "code-example_explanation", 
"context": "Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10711"} {"prompt": "Complete this shell code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\", "completion": " -e FOCUS=MirrorPod \\ # Only run MirrorPod test\n registry.k8s.io/node-test:0.2", "language": "shell", "type": "code-example", "context": "Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10712"} {"prompt": "Explain this shell code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n -e FOCUS=MirrorPod \\ # Only run MirrorPod test\n registry.k8s.io/node-test:0.2", "completion": "This shell code Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "language": "shell", "type": "code-example_explanation", "context": "Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10713"} {"prompt": "Complete this shell code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\", "completion": " -e SKIP=MirrorPod \\ # Run all conformance tests but skip MirrorPod test\n registry.k8s.io/node-test:0.2", "language": "shell", "type": "code-example", "context": "Validate node setup | Node conformance test is a containerized version of 
node e2e test. By default, it runs all conformance tests.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10714"} {"prompt": "Explain this shell code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n -e SKIP=MirrorPod \\ # Run all conformance tests but skip MirrorPod test\n registry.k8s.io/node-test:0.2", "completion": "This shell code Validate node setup | Node conformance test is a containerized version of node e2e test. By default, it runs all conformance tests.", "language": "shell", "type": "code-example_explanation", "context": "Validate node setup | Node conformance test is a containerized version of node e2e test. By default, it runs all conformance tests.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10715"} {"prompt": "Complete this text code:\n\n# $CONFIG_DIR is the pod manifest path of your kubelet.\n# $LOG_DIR is the test output path.", "completion": "sudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n registry.k8s.io/node-test:0.2", "language": "text", "type": "code-example", "context": "Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10716"} {"prompt": "Explain this text code:\n\n# $CONFIG_DIR is the pod manifest path of your kubelet.\n# $LOG_DIR is the test output path.\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n registry.k8s.io/node-test:0.2", "completion": "This text code Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "language": "text", "type": 
"code-example_explanation", "context": "Validate node setup | Kubernetes also provides node conformance test docker images for other architectures:", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10717"} {"prompt": "Complete this text code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\", "completion": " -e FOCUS=MirrorPod \\ # Only run MirrorPod test\n registry.k8s.io/node-test:0.2", "language": "text", "type": "code-example", "context": "Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10718"} {"prompt": "Explain this text code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n -e FOCUS=MirrorPod \\ # Only run MirrorPod test\n registry.k8s.io/node-test:0.2", "completion": "This text code Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "language": "text", "type": "code-example_explanation", "context": "Validate node setup | To skip specific tests, overwrite the environment variable SKIP with the regular expression of tests you want to skip.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10719"} {"prompt": "Complete this text code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\", "completion": " -e SKIP=MirrorPod \\ # Run all conformance tests but skip MirrorPod test\n registry.k8s.io/node-test:0.2", "language": "text", "type": "code-example", "context": "Validate node setup | Node conformance test is a 
containerized version of node e2e test. By default, it runs all conformance tests.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_completion_10720"} {"prompt": "Explain this text code:\n\nsudo docker run -it --rm --privileged --net=host \\\n -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \\\n -e SKIP=MirrorPod \\ # Run all conformance tests but skip MirrorPod test\n registry.k8s.io/node-test:0.2", "completion": "This text code Validate node setup | Node conformance test is a containerized version of node e2e test. By default, it runs all conformance tests.", "language": "text", "type": "code-example_explanation", "context": "Validate node setup | Node conformance test is a containerized version of node e2e test. By default, it runs all conformance tests.", "source": "https://kubernetes.io/docs/setup/best-practices/node-conformance/", "id": "k8s_00394_code_explain_10721"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_completion_10722"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name 
(FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_explain_10723"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_completion_10724"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_explain_10725"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_completion_10726"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n 
command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_explain_10727"} {"prompt": "Explain this text code:\n\nbusybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code Pod Hostname | When both setHostnameAsFQDN: true and the subdomain field is set in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname", "language": "text", "type": "code-example_explanation", "context": "Pod Hostname | When both setHostnameAsFQDN: true and the subdomain field is set in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. 
In this case, both hostname and hostname", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_explain_10728"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_completion_10729"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_explain_10730"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_completion_10731"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: 
busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_explain_10732"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_completion_10733"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-hostname/", "id": "k8s_00395_code_explain_10734"} {"prompt": "Explain this bash code:\n\nkubectl create ingress NAME --rule=host/path=service:port[,tls[=secret]]", "completion": "This bash code kubectl create ingress | If true, ignore any errors in templates when a field or map key is missing in the 
template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create ingress | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_ingress/", "id": "k8s_00398_code_explain_10735"} {"prompt": "Complete this text code:\n\n# Create a single ingress called 'simple' that directs requests to foo.com/bar to svc\n # svc1:8080 with a TLS secret \"my-cert\"\n kubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"\n \n # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"\n kubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"\n \n # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2\n kubectl create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n --annotation ingress.annotation1=foo \\\n --annotation ingress.annotation2=bla\n \n # Create an ingress with the same host and multiple paths\n kubectl create ingress multipath --class=default \\\n --rule=\"foo.com/=svc:port\" \\\n --rule=\"foo.com/admin/=svcadmin:portadmin\"\n ", "completion": " # Create an ingress with multiple hosts and the pathType as Prefix\n kubectl create ingress ingress1 --class=default \\\n --rule=\"foo.com/path*=svc:8080\" \\\n --rule=\"bar.com/admin*=svc2:http\"\n \n # Create an ingress with TLS enabled using the default ingress certificate and different path types\n kubectl create ingress ingtls --class=default \\\n --rule=\"foo.com/=svc:https,tls\" \\\n --rule=\"foo.com/path/subpath*=othersvc:8080\"\n \n # Create an ingress with TLS enabled using a specific secret and pathType as Prefix\n kubectl create ingress ingsecret --class=default \\\n 
--rule=\"foo.com/*=svc:8080,tls=secret1\"\n \n # Create an ingress with a default backend\n kubectl create ingress ingdefault --class=default \\\n --default-backend=defaultsvc:http \\\n --rule=\"foo.com/*=svc:8080,tls=secret1\"", "language": "text", "type": "code-example", "context": "kubectl create ingress | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_ingress/", "id": "k8s_00398_code_completion_10736"} {"prompt": "Explain this text code:\n\n# Create a single ingress called 'simple' that directs requests to foo.com/bar to svc\n # svc1:8080 with a TLS secret \"my-cert\"\n kubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"\n \n # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"\n kubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"\n \n # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2\n kubectl create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n --annotation ingress.annotation1=foo \\\n --annotation ingress.annotation2=bla\n \n # Create an ingress with the same host and multiple paths\n kubectl create ingress multipath --class=default \\\n --rule=\"foo.com/=svc:port\" \\\n --rule=\"foo.com/admin/=svcadmin:portadmin\"\n \n # Create an ingress with multiple hosts and the pathType as Prefix\n kubectl create ingress ingress1 --class=default \\\n --rule=\"foo.com/path*=svc:8080\" \\\n --rule=\"bar.com/admin*=svc2:http\"\n \n # Create an ingress with TLS enabled using the default ingress certificate and different path types\n kubectl create ingress ingtls --class=default \\\n --rule=\"foo.com/=svc:https,tls\" \\\n --rule=\"foo.com/path/subpath*=othersvc:8080\"\n \n # Create an ingress with TLS enabled using a 
specific secret and pathType as Prefix\n kubectl create ingress ingsecret --class=default \\\n --rule=\"foo.com/*=svc:8080,tls=secret1\"\n \n # Create an ingress with a default backend\n kubectl create ingress ingdefault --class=default \\\n --default-backend=defaultsvc:http \\\n --rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This text code kubectl create ingress | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create ingress | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_ingress/", "id": "k8s_00398_code_explain_10737"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " name: development\n labels:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | Create the development namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10741"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: development\n labels:\n name: development", "completion": "This yaml code Namespaces Walkthrough | Create the development namespace using kubectl.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | Create the development namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10742"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-dev.yaml", "completion": "This bash code Namespaces Walkthrough | Save the following contents into file namespace-prod.yaml which describes a production namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | Save the following contents into file namespace-prod.yaml which 
describes a production namespace:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10743"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " name: production\n labels:\n name: production", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10744"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: production\n labels:\n name: production", "completion": "This yaml code Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10745"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-prod.yaml", "completion": "This bash code Namespaces Walkthrough | To be sure things are right, let's list all of the namespaces in our cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | To be sure things are right, let's list all of the namespaces in our cluster.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10746"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>", "completion": "development Active 29s name=development\nproduction Active 23s name=production", "language": "text", "type": "code-example", "context": "Namespaces Walkthrough | A Kubernetes namespace provides 
the scope for Pods, Services, and Deployments in the cluster.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10747"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>\ndevelopment Active 29s name=development\nproduction Active 23s name=production", "completion": "This text code Namespaces Walkthrough | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10748"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\ncurrent-context: lithe-cocoa-92103_kubernetes", "completion": "kind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | The next step is to define a context for the kubectl client to work in each namespace. 
The value of \"cluster\" and \"user\" fields are copied from the current context.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10749"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "completion": "This yaml code Namespaces Walkthrough | The next step is to define a context for the kubectl client to work in each namespace. The value of \"cluster\" and \"user\" fields are copied from the current context.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | The next step is to define a context for the kubectl client to work in each namespace. 
The value of \"cluster\" and \"user\" fields are copied from the current context.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10750"} {"prompt": "Complete this bash code:\n\nkubectl config set-context dev --namespace=development \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes", "completion": "\nkubectl config set-context prod --namespace=production \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes", "language": "bash", "type": "kubectl-command", "context": "Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10751"} {"prompt": "Explain this bash code:\n\nkubectl config set-context dev --namespace=development \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes\n\nkubectl config set-context prod --namespace=production \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes", "completion": "This bash code Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. 
You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10752"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: development\n user: lithe-cocoa-92103_kubernetes\n name: dev\n- context:", "completion": " cluster: lithe-cocoa-92103_kubernetes\n namespace: production\n user: lithe-cocoa-92103_kubernetes\n name: prod\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | Let's switch to operate in the development namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10753"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: development\n user: lithe-cocoa-92103_kubernetes\n name: dev\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: production\n user: 
lithe-cocoa-92103_kubernetes\n name: prod\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "completion": "This yaml code Namespaces Walkthrough | Let's switch to operate in the development namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | Let's switch to operate in the development namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10754"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:\n matchLabels:", "completion": " app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "language": "yaml", "type": "deployment-definition", "context": "Namespaces Walkthrough | Apply the manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10755"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "completion": "This yaml code Namespaces Walkthrough | Apply the manifest to create a Deployment", "language": "yaml", "type": "deployment-definition_explanation", "context": "Namespaces Walkthrough | Apply the 
manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10756"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml", "completion": "This bash code Namespaces Walkthrough | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10757"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nsnowflake 2/2 2 2 2m", "completion": "This text code Namespaces Walkthrough | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10758"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nsnowflake-3968820950-9dgr8 1/1 Running 0 2m\nsnowflake-3968820950-vgc4n 1/1 Running 0 2m", "completion": "This text code Namespaces Walkthrough | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | And this is great, 
developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10759"} {"prompt": "Explain this bash code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname --replicas=5\n\nkubectl get deployment", "completion": "This bash code Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10760"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ncattle 5/5 5 5 10s", "completion": "This text code Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10761"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s", "completion": "cattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users 
create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10762"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s\ncattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "completion": "This text code Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10763"} {"prompt": "Explain this shell code:\n\nkubectl config set-context dev --namespace=development \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes\n\nkubectl config set-context prod --namespace=production \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes", "completion": "This shell code Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "language": "shell", "type": "code-example_explanation", "context": "Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. 
You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10775"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: development\n user: lithe-cocoa-92103_kubernetes\n name: dev\n- context:", "completion": " cluster: lithe-cocoa-92103_kubernetes\n namespace: production\n user: lithe-cocoa-92103_kubernetes\n name: prod\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | Let's switch to operate in the development namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10776"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: development\n user: lithe-cocoa-92103_kubernetes\n name: dev\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: production\n user: 
lithe-cocoa-92103_kubernetes\n name: prod\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "completion": "This yaml code Namespaces Walkthrough | Let's switch to operate in the development namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | Let's switch to operate in the development namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10777"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:\n matchLabels:", "completion": " app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "language": "yaml", "type": "deployment-definition", "context": "Namespaces Walkthrough | Apply the manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10778"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "completion": "This yaml code Namespaces Walkthrough | Apply the manifest to create a Deployment", "language": "yaml", "type": "deployment-definition_explanation", "context": "Namespaces Walkthrough | Apply the 
manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10779"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml", "completion": "This shell code Namespaces Walkthrough | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "language": "shell", "type": "code-example_explanation", "context": "Namespaces Walkthrough | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10780"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nsnowflake 2/2 2 2 2m", "completion": "This text code Namespaces Walkthrough | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10781"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nsnowflake-3968820950-9dgr8 1/1 Running 0 2m\nsnowflake-3968820950-vgc4n 1/1 Running 0 2m", "completion": "This text code Namespaces Walkthrough | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | And this is great, 
developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10782"} {"prompt": "Explain this shell code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname --replicas=5\n\nkubectl get deployment", "completion": "This shell code Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "shell", "type": "code-example_explanation", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10783"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ncattle 5/5 5 5 10s", "completion": "This text code Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10784"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s", "completion": "cattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users 
create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10785"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s\ncattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "completion": "This text code Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10786"} {"prompt": "Complete this yaml code:\n\nadmin/namespace-dev.yaml\napiVersion: v1\nkind: Namespace", "completion": "metadata:\n name: development\n labels:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | Create the development namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10787"} {"prompt": "Explain this yaml code:\n\nadmin/namespace-dev.yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n name: development\n labels:\n name: development", "completion": "This yaml code Namespaces Walkthrough | Create the development namespace using kubectl.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | Create the development namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": 
"k8s_00399_code_explain_10788"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " name: development\n labels:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | Create the development namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10789"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: development\n labels:\n name: development", "completion": "This yaml code Namespaces Walkthrough | Create the development namespace using kubectl.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | Create the development namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10790"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-dev.yaml", "completion": "This bash code Namespaces Walkthrough | Save the following contents into file namespace-prod.yaml which describes a production namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | Save the following contents into file namespace-prod.yaml which describes a production namespace:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10791"} {"prompt": "Complete this yaml code:\n\nadmin/namespace-prod.yaml\napiVersion: v1\nkind: Namespace", "completion": "metadata:\n name: production\n labels:\n name: production", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": 
"k8s_00399_code_completion_10792"} {"prompt": "Explain this yaml code:\n\nadmin/namespace-prod.yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n name: production\n labels:\n name: production", "completion": "This yaml code Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10793"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " name: production\n labels:\n name: production", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10794"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: production\n labels:\n name: production", "completion": "This yaml code Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | And then let's create the production namespace using kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10795"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-prod.yaml", "completion": "This bash code Namespaces Walkthrough | To be sure things are right, let's list all of the namespaces in our cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | To be sure things are right, let's list all of the namespaces in our 
cluster.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10796"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\ncurrent-context: lithe-cocoa-92103_kubernetes", "completion": "kind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | The next step is to define a context for the kubectl client to work in each namespace. 
The value of \"cluster\" and \"user\" fields are copied from the current context.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10797"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "completion": "This yaml code Namespaces Walkthrough | The next step is to define a context for the kubectl client to work in each namespace. The value of \"cluster\" and \"user\" fields are copied from the current context.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | The next step is to define a context for the kubectl client to work in each namespace. 
The value of \"cluster\" and \"user\" fields are copied from the current context.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10798"} {"prompt": "Complete this bash code:\n\nkubectl config set-context dev --namespace=development \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes", "completion": "\nkubectl config set-context prod --namespace=production \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes", "language": "bash", "type": "kubectl-command", "context": "Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10799"} {"prompt": "Explain this bash code:\n\nkubectl config set-context dev --namespace=development \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes\n\nkubectl config set-context prod --namespace=production \\\n --cluster=lithe-cocoa-92103_kubernetes \\\n --user=lithe-cocoa-92103_kubernetes", "completion": "This bash code Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | By default, the above commands add two contexts that are saved into file .kube/config. 
You can now view the contexts and alternate against the two new request contexts depending on which namespace you", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10800"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: development\n user: lithe-cocoa-92103_kubernetes\n name: dev\n- context:", "completion": " cluster: lithe-cocoa-92103_kubernetes\n namespace: production\n user: lithe-cocoa-92103_kubernetes\n name: prod\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "language": "yaml", "type": "kubernetes-manifest", "context": "Namespaces Walkthrough | Let's switch to operate in the development namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10801"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://130.211.122.180\n name: lithe-cocoa-92103_kubernetes\ncontexts:\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n user: lithe-cocoa-92103_kubernetes\n name: lithe-cocoa-92103_kubernetes\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: development\n user: lithe-cocoa-92103_kubernetes\n name: dev\n- context:\n cluster: lithe-cocoa-92103_kubernetes\n namespace: production\n user: 
lithe-cocoa-92103_kubernetes\n name: prod\ncurrent-context: lithe-cocoa-92103_kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: lithe-cocoa-92103_kubernetes\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED\n token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b\n- name: lithe-cocoa-92103_kubernetes-basic-auth\n user:\n password: h5M0FtUUIflBSdI7\n username: admin", "completion": "This yaml code Namespaces Walkthrough | Let's switch to operate in the development namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Namespaces Walkthrough | Let's switch to operate in the development namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10802"} {"prompt": "Complete this yaml code:\n\nadmin/snowflake-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:", "completion": " matchLabels:\n app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "language": "yaml", "type": "deployment-definition", "context": "Namespaces Walkthrough | Apply the manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10803"} {"prompt": "Explain this yaml code:\n\nadmin/snowflake-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "completion": "This yaml code Namespaces Walkthrough | Apply the manifest to create a Deployment", "language": "yaml", "type": 
"deployment-definition_explanation", "context": "Namespaces Walkthrough | Apply the manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10804"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:\n matchLabels:", "completion": " app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "language": "yaml", "type": "deployment-definition", "context": "Namespaces Walkthrough | Apply the manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_completion_10805"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: snowflake\n name: snowflake\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: snowflake\n template:\n metadata:\n labels:\n app: snowflake\n spec:\n containers:\n - image: registry.k8s.io/serve_hostname\n imagePullPolicy: Always\n name: snowflake", "completion": "This yaml code Namespaces Walkthrough | Apply the manifest to create a Deployment", "language": "yaml", "type": "deployment-definition_explanation", "context": "Namespaces Walkthrough | Apply the manifest to create a Deployment", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10806"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml", "completion": "This bash code Namespaces Walkthrough | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Namespaces Walkthrough | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10807"} {"prompt": "Explain this bash code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname --replicas=5\n\nkubectl get deployment", "completion": "This bash code Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces Walkthrough | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/namespaces-walkthrough/", "id": "k8s_00399_code_explain_10808"} {"prompt": "Explain this bash code:\n\nkubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix]", "completion": "This bash code kubectl proxy | Regular expression for hosts that the proxy should accept.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl proxy | Regular expression for hosts that the proxy should accept.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_proxy/", "id": "k8s_00400_code_explain_10809"} {"prompt": "Complete this text code:\n\n# To proxy all of the Kubernetes API and nothing else\n kubectl proxy --api-prefix=/\n \n # To proxy only part of the Kubernetes API and also some static files\n # You can get pods info with 'curl localhost:8001/api/v1/pods'\n kubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/\n \n # To proxy the entire Kubernetes API at a different root\n # You can get pods info with 'curl 
localhost:8001/custom/api/v1/pods'\n kubectl proxy --api-prefix=/custom/", "completion": " \n # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/\n kubectl proxy --port=8011 --www=./local/www/\n \n # Run a proxy to the Kubernetes API server on an arbitrary local port\n # The chosen port for the server will be output to stdout\n kubectl proxy --port=0\n \n # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api\n # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/\n kubectl proxy --api-prefix=/k8s-api", "language": "text", "type": "code-example", "context": "kubectl proxy | Regular expression for hosts that the proxy should accept.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_proxy/", "id": "k8s_00400_code_completion_10810"} {"prompt": "Explain this text code:\n\n# To proxy all of the Kubernetes API and nothing else\n kubectl proxy --api-prefix=/\n \n # To proxy only part of the Kubernetes API and also some static files\n # You can get pods info with 'curl localhost:8001/api/v1/pods'\n kubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/\n \n # To proxy the entire Kubernetes API at a different root\n # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'\n kubectl proxy --api-prefix=/custom/\n \n # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/\n kubectl proxy --port=8011 --www=./local/www/\n \n # Run a proxy to the Kubernetes API server on an arbitrary local port\n # The chosen port for the server will be output to stdout\n kubectl proxy --port=0\n \n # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api\n # This makes e.g. 
the pods API available at localhost:8001/k8s-api/v1/pods/\n kubectl proxy --api-prefix=/k8s-api", "completion": "This text code kubectl proxy | Regular expression for hosts that the proxy should accept.", "language": "text", "type": "code-example_explanation", "context": "kubectl proxy | Regular expression for hosts that the proxy should accept.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_proxy/", "id": "k8s_00400_code_explain_10811"} {"prompt": "Explain this bash code:\n\nkubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix]", "completion": "This bash code kubectl proxy | Regular expression for hosts that the proxy should accept.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl proxy | Regular expression for hosts that the proxy should accept.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_proxy/", "id": "k8s_00400_code_explain_10812"} {"prompt": "Complete this text code:\n\n# To proxy all of the Kubernetes API and nothing else\n kubectl proxy --api-prefix=/\n \n # To proxy only part of the Kubernetes API and also some static files\n # You can get pods info with 'curl localhost:8001/api/v1/pods'\n kubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/\n \n # To proxy the entire Kubernetes API at a different root\n # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'\n kubectl proxy --api-prefix=/custom/", "completion": " \n # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/\n kubectl proxy --port=8011 --www=./local/www/\n \n # Run a proxy to the Kubernetes API server on an arbitrary local port\n # The chosen port for the server will be output to stdout\n kubectl proxy --port=0\n \n # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api\n # This makes e.g. 
the pods API available at localhost:8001/k8s-api/v1/pods/\n kubectl proxy --api-prefix=/k8s-api", "language": "text", "type": "code-example", "context": "kubectl proxy | Regular expression for hosts that the proxy should accept.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_proxy/", "id": "k8s_00400_code_completion_10813"} {"prompt": "Explain this text code:\n\n# To proxy all of the Kubernetes API and nothing else\n kubectl proxy --api-prefix=/\n \n # To proxy only part of the Kubernetes API and also some static files\n # You can get pods info with 'curl localhost:8001/api/v1/pods'\n kubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/\n \n # To proxy the entire Kubernetes API at a different root\n # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'\n kubectl proxy --api-prefix=/custom/\n \n # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/\n kubectl proxy --port=8011 --www=./local/www/\n \n # Run a proxy to the Kubernetes API server on an arbitrary local port\n # The chosen port for the server will be output to stdout\n kubectl proxy --port=0\n \n # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api\n # This makes e.g. 
the pods API available at localhost:8001/k8s-api/v1/pods/\n kubectl proxy --api-prefix=/k8s-api", "completion": "This text code kubectl proxy | Regular expression for hosts that the proxy should accept.", "language": "text", "type": "code-example_explanation", "context": "kubectl proxy | Regular expression for hosts that the proxy should accept.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_proxy/", "id": "k8s_00400_code_explain_10814"} {"prompt": "Complete this text code:\n\n# Output a kubeconfig file for an additional user named foo\n kubeadm kubeconfig user --client-name=foo", "completion": " \n # Output a kubeconfig file for an additional user named foo using a kubeadm config file bar\n kubeadm kubeconfig user --client-name=foo --config=bar", "language": "text", "type": "code-example", "context": "kubeadm kubeconfig | The name of user. It will be used as the CN if client certificates are created", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/", "id": "k8s_00401_code_completion_10815"} {"prompt": "Explain this text code:\n\n# Output a kubeconfig file for an additional user named foo\n kubeadm kubeconfig user --client-name=foo\n \n # Output a kubeconfig file for an additional user named foo using a kubeadm config file bar\n kubeadm kubeconfig user --client-name=foo --config=bar", "completion": "This text code kubeadm kubeconfig | The name of user. It will be used as the CN if client certificates are created", "language": "text", "type": "code-example_explanation", "context": "kubeadm kubeconfig | The name of user. 
It will be used as the CN if client certificates are created", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/", "id": "k8s_00401_code_explain_10816"} {"prompt": "Complete this text code:\n\n# Output a kubeconfig file for an additional user named foo\n kubeadm kubeconfig user --client-name=foo", "completion": " \n # Output a kubeconfig file for an additional user named foo using a kubeadm config file bar\n kubeadm kubeconfig user --client-name=foo --config=bar", "language": "text", "type": "code-example", "context": "kubeadm kubeconfig | The name of user. It will be used as the CN if client certificates are created", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/", "id": "k8s_00401_code_completion_10817"} {"prompt": "Explain this text code:\n\n# Output a kubeconfig file for an additional user named foo\n kubeadm kubeconfig user --client-name=foo\n \n # Output a kubeconfig file for an additional user named foo using a kubeadm config file bar\n kubeadm kubeconfig user --client-name=foo --config=bar", "completion": "This text code kubeadm kubeconfig | The name of user. It will be used as the CN if client certificates are created", "language": "text", "type": "code-example_explanation", "context": "kubeadm kubeconfig | The name of user. 
It will be used as the CN if client certificates are created", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/", "id": "k8s_00401_code_explain_10818"} {"prompt": "Complete this text code:\n\n# Edit the service named 'registry'\n kubectl edit svc/registry\n \n # Use an alternative editor\n KUBE_EDITOR=\"nano\" kubectl edit svc/registry\n \n # Edit the job 'myjob' in JSON using the v1 API format", "completion": " kubectl edit job.v1.batch/myjob -o json\n \n # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation\n kubectl edit deployment/mydeployment -o yaml --save-config\n \n # Edit the 'status' subresource for the 'mydeployment' deployment\n kubectl edit deployment mydeployment --subresource='status'", "language": "text", "type": "code-example", "context": "kubectl edit | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_edit/", "id": "k8s_00402_code_completion_10819"} {"prompt": "Explain this text code:\n\n# Edit the service named 'registry'\n kubectl edit svc/registry\n \n # Use an alternative editor\n KUBE_EDITOR=\"nano\" kubectl edit svc/registry\n \n # Edit the job 'myjob' in JSON using the v1 API format\n kubectl edit job.v1.batch/myjob -o json\n \n # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation\n kubectl edit deployment/mydeployment -o yaml --save-config\n \n # Edit the 'status' subresource for the 'mydeployment' deployment\n kubectl edit deployment mydeployment --subresource='status'", "completion": "This text code kubectl edit | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl edit | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_edit/", "id": "k8s_00402_code_explain_10820"} {"prompt": "Complete this text code:\n\n# Edit the service named 'registry'\n kubectl edit svc/registry\n \n # Use an alternative editor\n KUBE_EDITOR=\"nano\" kubectl edit svc/registry\n \n # Edit the job 'myjob' in JSON using the v1 API format", "completion": " kubectl edit job.v1.batch/myjob -o json\n \n # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation\n kubectl edit deployment/mydeployment -o yaml --save-config\n \n # Edit the 'status' subresource for the 'mydeployment' deployment\n kubectl edit deployment mydeployment --subresource='status'", "language": "text", "type": "code-example", "context": "kubectl edit | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_edit/", "id": "k8s_00402_code_completion_10821"} {"prompt": "Explain this text code:\n\n# Edit the service named 'registry'\n kubectl edit svc/registry\n \n # Use an alternative editor\n KUBE_EDITOR=\"nano\" kubectl edit svc/registry\n \n # Edit the job 'myjob' in JSON using the v1 API format\n kubectl edit job.v1.batch/myjob -o json\n \n # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation\n kubectl edit deployment/mydeployment -o yaml --save-config\n \n # Edit the 'status' subresource for the 'mydeployment' deployment\n kubectl edit deployment mydeployment --subresource='status'", "completion": "This text code kubectl edit | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl edit | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_edit/", "id": "k8s_00402_code_explain_10822"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_completion_10823"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "language": "json", "type": "api-response_explanation", "context": "API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10824"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": 
"https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_completion_10825"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "language": "json", "type": "api-response_explanation", "context": "API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10826"} {"prompt": "Explain this text code:\n\ncurl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json", "completion": "This text code API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "language": "text", "type": "code-example_explanation", "context": "API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10827"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", 
"source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_completion_10828"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "language": "json", "type": "api-response_explanation", "context": "API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10829"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_completion_10830"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "language": "json", "type": "api-response_explanation", "context": "API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API 
using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10831"} {"prompt": "Explain this bash code:\n\ncurl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json", "completion": "This bash code API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "language": "bash", "type": "shell-command_explanation", "context": "API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10832"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_completion_10833"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "language": "json", "type": "api-response_explanation", "context": "API-initiated Eviction | Note:Deprecated in 
v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10834"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_completion_10835"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "language": "json", "type": "api-response_explanation", "context": "API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10836"} {"prompt": "Explain this text code:\n\ncurl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json", "completion": "This text code API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "language": "text", "type": "code-example_explanation", "context": "API-initiated Eviction | When 
you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/", "id": "k8s_00403_code_explain_10837"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3\n template:", "completion": " metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10838"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3\n template:\n metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10839"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/access/backend-deployment.yaml", "completion": "This bash code Connect a Frontend to a Backend Using 
Services | View information about the backend Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connect a Frontend to a Backend Using Services | View information about the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10840"} {"prompt": "Complete this text code:\n\nName: backend\nNamespace: default\nCreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700\nLabels: app=hello\n tier=backend\n track=stable\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=hello,tier=backend,track=stable\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 1 max unavailable, 1 max surge\nPod Template:\n Labels: app=hello\n tier=backend\n track=stable", "completion": " Containers:\n hello:\n Image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: hello-3621623197 (3/3 replicas created)\nEvents:\n...", "language": "text", "type": "code-example", "context": "Connect a Frontend to a Backend Using Services | The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. 
A", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10841"} {"prompt": "Explain this text code:\n\nName: backend\nNamespace: default\nCreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700\nLabels: app=hello\n tier=backend\n track=stable\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=hello,tier=backend,track=stable\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 1 max unavailable, 1 max surge\nPod Template:\n Labels: app=hello\n tier=backend\n track=stable\n Containers:\n hello:\n Image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: hello-3621623197 (3/3 replicas created)\nEvents:\n...", "completion": "This text code Connect a Frontend to a Backend Using Services | The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. 
A", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10842"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:\n selector:", "completion": " app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10843"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:\n selector:\n app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10844"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/access/backend-service.yaml", "completion": "This bash code Connect a Frontend to a Backend Using Services | At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. 
However, this service is neither available nor res", "language": "bash", "type": "kubectl-command_explanation", "context": "Connect a Frontend to a Backend Using Services | At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. However, this service is neither available nor res", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10845"} {"prompt": "Complete this text code:\n\n# The identifier Backend is internal to nginx, and used to name this specific upstream\nupstream Backend {\n # hello is the internal DNS name used by the backend Service inside Kubernetes\n server hello;\n}\nserver {", "completion": "listen 80;\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}\n}", "language": "text", "type": "code-example", "context": "Connect a Frontend to a Backend Using Services | server { listen 80;", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10846"} {"prompt": "Explain this text code:\n\n# The identifier Backend is internal to nginx, and used to name this specific upstream\nupstream Backend {\n # hello is the internal DNS name used by the backend Service inside Kubernetes\n server hello;\n}\nserver {\nlisten 80;\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}\n}", "completion": "This text code Connect a Frontend to a Backend Using Services | server { listen 80;", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | server { listen 80;", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10847"} {"prompt": "Complete 
this text code:\n\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend", "completion": " proxy_pass http://Backend;\n}", "language": "text", "type": "code-example", "context": "Connect a Frontend to a Backend Using Services | }", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10848"} {"prompt": "Explain this text code:\n\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}", "completion": "This text code Connect a Frontend to a Backend Using Services | }", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | }", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10849"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\nspec:\n selector:", "completion": " app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10850"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\nspec:\n selector:\n app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and 
Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10851"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:", "completion": " metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10852"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:\n metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10853"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/access/frontend-deployment.yaml\nkubectl apply -f 
https://k8s.io/examples/service/access/frontend-service.yaml", "completion": "This bash code Connect a Frontend to a Backend Using Services | The output verifies that both resources were created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connect a Frontend to a Backend Using Services | The output verifies that both resources were created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10854"} {"prompt": "Explain this text code:\n\ndeployment.apps/frontend created\nservice/frontend created", "completion": "This text code Connect a Frontend to a Backend Using Services | Once you've created a Service of type LoadBalancer, you can use this command to find the external IP:", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | Once you've created a Service of type LoadBalancer, you can use this command to find the external IP:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10855"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend LoadBalancer 10.51.252.116 <pending> 80/TCP 10s", "completion": "This text code Connect a Frontend to a Backend Using Services | As soon as an external IP is provisioned, however, the configuration updates to include the new IP under the EXTERNAL-IP heading:", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | As soon as an external IP is provisioned, however, the configuration updates to include the new IP under the EXTERNAL-IP heading:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10856"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend 
LoadBalancer 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m", "completion": "This text code Connect a Frontend to a Backend Using Services | That IP can now be used to interact with the frontend service from outside the cluster.", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | That IP can now be used to interact with the frontend service from outside the cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10857"} {"prompt": "Explain this text code:\n\ncurl http://${EXTERNAL_IP} # replace this with the EXTERNAL-IP you saw earlier", "completion": "This text code Connect a Frontend to a Backend Using Services | The output shows the message generated by the backend:", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | The output shows the message generated by the backend:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10858"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3\n template:", "completion": " metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10859"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n 
matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3\n template:\n metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10860"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/service/access/backend-deployment.yaml", "completion": "This shell code Connect a Frontend to a Backend Using Services | View information about the backend Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | View information about the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10861"} {"prompt": "Complete this text code:\n\nName: backend\nNamespace: default\nCreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700\nLabels: app=hello\n tier=backend\n track=stable\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=hello,tier=backend,track=stable\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 1 max unavailable, 1 max surge\nPod Template:\n Labels: app=hello\n tier=backend\n track=stable", "completion": " Containers:\n hello:\n Image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n 
Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: hello-3621623197 (3/3 replicas created)\nEvents:\n...", "language": "text", "type": "code-example", "context": "Connect a Frontend to a Backend Using Services | The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10862"} {"prompt": "Explain this text code:\n\nName: backend\nNamespace: default\nCreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700\nLabels: app=hello\n tier=backend\n track=stable\nAnnotations: deployment.kubernetes.io/revision=1\nSelector: app=hello,tier=backend,track=stable\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 1 max unavailable, 1 max surge\nPod Template:\n Labels: app=hello\n tier=backend\n track=stable\n Containers:\n hello:\n Image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: hello-3621623197 (3/3 replicas created)\nEvents:\n...", "completion": "This text code Connect a Frontend to a Backend Using Services | The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. 
A", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10863"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:\n selector:", "completion": " app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10864"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:\n selector:\n app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10865"} {"prompt": "Explain this shell code:\n\nkubectl apply -f 
https://k8s.io/examples/service/access/backend-service.yaml", "completion": "This shell code Connect a Frontend to a Backend Using Services | At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. However, this service is neither available nor res", "language": "shell", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. However, this service is neither available nor res", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10866"} {"prompt": "Complete this conf code:\n\n# The identifier Backend is internal to nginx, and used to name this specific upstream\nupstream Backend {\n # hello is the internal DNS name used by the backend Service inside Kubernetes\n server hello;\n}\nserver {", "completion": "listen 80;\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}\n}", "language": "conf", "type": "code-example", "context": "Connect a Frontend to a Backend Using Services | server { listen 80;", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10867"} {"prompt": "Explain this conf code:\n\n# The identifier Backend is internal to nginx, and used to name this specific upstream\nupstream Backend {\n # hello is the internal DNS name used by the backend Service inside Kubernetes\n server hello;\n}\nserver {\nlisten 80;\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}\n}", "completion": "This conf code Connect a Frontend to a Backend Using Services | server { listen 80;", "language": 
"conf", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | server { listen 80;", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10868"} {"prompt": "Complete this text code:\n\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend", "completion": " proxy_pass http://Backend;\n}", "language": "text", "type": "code-example", "context": "Connect a Frontend to a Backend Using Services | }", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10869"} {"prompt": "Explain this text code:\n\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}", "completion": "This text code Connect a Frontend to a Backend Using Services | }", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | }", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10870"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\nspec:\n selector:", "completion": " app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10871"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\nspec:\n selector:\n app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", 
"completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10872"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:", "completion": " metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10873"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:\n metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10874"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/service/access/frontend-deployment.yaml\nkubectl apply -f https://k8s.io/examples/service/access/frontend-service.yaml", "completion": "This shell code Connect a Frontend to a Backend Using Services | The output verifies that both resources were created:", "language": "shell", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | The output verifies that both resources were created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10875"} {"prompt": "Explain this text code:\n\ndeployment.apps/frontend created\nservice/frontend created", "completion": "This text code Connect a Frontend to a Backend Using Services | Once you've created a Service of type LoadBalancer, you can use this command to find the external IP:", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | Once you've created a Service of type LoadBalancer, you can use this command to find the external IP:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10876"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend LoadBalancer 10.51.252.116 <pending> 80/TCP 10s", "completion": "This text code Connect a Frontend to a Backend Using Services | As soon as an external IP is provisioned, however, the configuration updates to include the new IP under the EXTERNAL-IP heading:", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | As soon as an external IP is provisioned, however, the configuration updates to include the new IP 
under the EXTERNAL-IP heading:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10877"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend LoadBalancer 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m", "completion": "This text code Connect a Frontend to a Backend Using Services | That IP can now be used to interact with the frontend service from outside the cluster.", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | That IP can now be used to interact with the frontend service from outside the cluster.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10878"} {"prompt": "Explain this shell code:\n\ncurl http://${EXTERNAL_IP} # replace this with the EXTERNAL-IP you saw earlier", "completion": "This shell code Connect a Frontend to a Backend Using Services | The output shows the message generated by the backend:", "language": "shell", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | The output shows the message generated by the backend:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10879"} {"prompt": "Complete this yaml code:\n\nservice/access/backend-deployment.yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3", "completion": " template:\n metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using 
Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10880"} {"prompt": "Explain this yaml code:\n\nservice/access/backend-deployment.yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3\n template:\n metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10881"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3\n template:", "completion": " metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10882"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: backend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: backend\n track: stable\n replicas: 3\n 
template:\n metadata:\n labels:\n app: hello\n tier: backend\n track: stable\n spec:\n containers:\n - name: hello\n image: \"gcr.io/google-samples/hello-go-gke:1.0\"\n ports:\n - name: http\n containerPort: 80\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10883"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/access/backend-deployment.yaml", "completion": "This bash code Connect a Frontend to a Backend Using Services | View information about the backend Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connect a Frontend to a Backend Using Services | View information about the backend Deployment:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10884"} {"prompt": "Complete this yaml code:\n\nservice/access/backend-service.yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:", "completion": " selector:\n app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10885"} {"prompt": "Explain this yaml code:\n\nservice/access/backend-service.yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:\n 
selector:\n app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10886"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:\n selector:", "completion": " app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10887"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: hello\nspec:\n selector:\n app: hello\n tier: backend\n ports:\n - protocol: TCP\n port: 80\n targetPort: http\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes traffic to Pods that have the labels app: hello and tier: backend.", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | In the configuration file, you can see that the Service, named hello routes 
traffic to Pods that have the labels app: hello and tier: backend.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10888"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/access/backend-service.yaml", "completion": "This bash code Connect a Frontend to a Backend Using Services | At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. However, this service is neither available nor res", "language": "bash", "type": "kubectl-command_explanation", "context": "Connect a Frontend to a Backend Using Services | At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. However, this service is neither available nor res", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10889"} {"prompt": "Complete this text code:\n\nservice/access/frontend-nginx.conf\n# The identifier Backend is internal to nginx, and used to name this specific upstream\nupstream Backend {\n # hello is the internal DNS name used by the backend Service inside Kubernetes\n server hello;\n}", "completion": "server {\nlisten 80;\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}\n}", "language": "text", "type": "code-example", "context": "Connect a Frontend to a Backend Using Services | server { listen 80;", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10890"} {"prompt": "Explain this text code:\n\nservice/access/frontend-nginx.conf\n# The identifier Backend is internal to nginx, and used to name this specific upstream\nupstream Backend {\n # hello is the internal 
DNS name used by the backend Service inside Kubernetes\n server hello;\n}\nserver {\nlisten 80;\nlocation / {\n # The following statement will proxy traffic to the upstream named Backend\n proxy_pass http://Backend;\n}\n}", "completion": "This text code Connect a Frontend to a Backend Using Services | server { listen 80;", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | server { listen 80;", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10891"} {"prompt": "Complete this yaml code:\n\nservice/access/frontend-service.yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\nspec:\n selector:", "completion": " app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10892"} {"prompt": "Explain this yaml code:\n\nservice/access/frontend-service.yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\nspec:\n selector:\n app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10893"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n 
name: frontend\nspec:\n selector:", "completion": " app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", "language": "yaml", "type": "service-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10894"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\nspec:\n selector:\n app: hello\n tier: frontend\n ports:\n - protocol: \"TCP\"\n port: 80\n targetPort: 80\n type: LoadBalancer\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "service-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10895"} {"prompt": "Complete this yaml code:\n\nservice/access/frontend-deployment.yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:", "completion": " metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10896"} {"prompt": "Explain this yaml 
code:\n\nservice/access/frontend-deployment.yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:\n metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10897"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:", "completion": " metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "language": "yaml", "type": "deployment-definition", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_completion_10898"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n selector:\n matchLabels:\n app: hello\n tier: frontend\n track: stable\n replicas: 1\n template:\n metadata:\n labels:\n app: hello\n tier: frontend\n track: stable\n spec:\n 
containers:\n - name: nginx\n image: \"gcr.io/google-samples/hello-frontend:1.0\"\n lifecycle:\n preStop:\n exec:\n command: [\"/usr/sbin/nginx\",\"-s\",\"quit\"]\n...", "completion": "This yaml code Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Connect a Frontend to a Backend Using Services | Create the frontend Deployment and Service:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10899"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/access/frontend-deployment.yaml\nkubectl apply -f https://k8s.io/examples/service/access/frontend-service.yaml", "completion": "This bash code Connect a Frontend to a Backend Using Services | The output verifies that both resources were created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Connect a Frontend to a Backend Using Services | The output verifies that both resources were created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10900"} {"prompt": "Explain this text code:\n\ncurl http://${EXTERNAL_IP} # replace this with the EXTERNAL-IP you saw earlier", "completion": "This text code Connect a Frontend to a Backend Using Services | The output shows the message generated by the backend:", "language": "text", "type": "code-example_explanation", "context": "Connect a Frontend to a Backend Using Services | The output shows the message generated by the backend:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/", "id": "k8s_00404_code_explain_10901"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - 
\"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/cloud-controller/", "id": "k8s_00405_code_completion_10902"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", 
"source": "https://kubernetes.io/docs/concepts/architecture/cloud-controller/", "id": "k8s_00405_code_explain_10903"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/cloud-controller/", "id": "k8s_00405_code_completion_10904"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code Cloud Controller Manager | Cloud Controller Manager Administration has 
instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/cloud-controller/", "id": "k8s_00405_code_explain_10905"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/cloud-controller/", "id": "k8s_00405_code_completion_10906"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - 
update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/cloud-controller/", "id": "k8s_00405_code_explain_10907"} {"prompt": "Complete this bash code:\n\nkubectl get pods --all-namespaces \\\n-o=jsonpath='{range .items[*]}{\"\\n\"}{.metadata.namespace}{\":\\t\"}{.metadata.name}{\":\\t\"}{range .spec.volumes[*]}{.hostPath.path}{\", \"}{end}{end}' \\", "completion": "| sort \\\n| grep '/var/run/docker.sock'", "language": "bash", "type": "kubectl-command", "context": "Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/", "id": "k8s_00407_code_completion_10908"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces \\\n-o=jsonpath='{range .items[*]}{\"\\n\"}{.metadata.namespace}{\":\\t\"}{.metadata.name}{\":\\t\"}{range .spec.volumes[*]}{.hostPath.path}{\", \"}{end}{end}' \\\n| sort \\\n| grep '/var/run/docker.sock'", "completion": "This bash code Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any 
dependency on Docker.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/", "id": "k8s_00407_code_explain_10909"} {"prompt": "Complete this bash code:\n\nkubectl get pods --all-namespaces \\\n-o=jsonpath='{range .items[*]}{\"\\n\"}{.metadata.namespace}{\":\\t\"}{.metadata.name}{\":\\t\"}{range .spec.volumes[*]}{.hostPath.path}{\", \"}{end}{end}' \\", "completion": "| sort \\\n| grep '/var/run/docker.sock'", "language": "bash", "type": "kubectl-command", "context": "Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/", "id": "k8s_00407_code_completion_10910"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces \\\n-o=jsonpath='{range .items[*]}{\"\\n\"}{.metadata.namespace}{\":\\t\"}{.metadata.name}{\":\\t\"}{range .spec.volumes[*]}{.hostPath.path}{\", \"}{end}{end}' \\\n| sort \\\n| grep '/var/run/docker.sock'", "completion": "This bash code Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and 
install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/", "id": "k8s_00407_code_explain_10911"} {"prompt": "Complete this bash code:\n\nkubectl get pods --all-namespaces \\\n-o=jsonpath='{range .items[*]}{\"\\n\"}{.metadata.namespace}{\":\\t\"}{.metadata.name}{\":\\t\"}{range .spec.volumes[*]}{.hostPath.path}{\", \"}{end}{end}' \\", "completion": "| sort \\\n| grep '/var/run/docker.sock'", "language": "bash", "type": "kubectl-command", "context": "Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/", "id": "k8s_00407_code_completion_10912"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces \\\n-o=jsonpath='{range .items[*]}{\"\\n\"}{.metadata.namespace}{\":\\t\"}{.metadata.name}{\":\\t\"}{range .spec.volumes[*]}{.hostPath.path}{\", \"}{end}{end}' \\\n| sort \\\n| grep '/var/run/docker.sock'", "completion": "This bash code Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrating telemetry and security agents from dockershim | If your cluster nodes are customized and install additional security and telemetry agents on the node, check with the agent vendor to verify whether it has any dependency on Docker.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/", "id": "k8s_00407_code_explain_10913"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10914"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code 1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "language": "json", "type": "api-response_explanation", "context": "1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10915"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This bash code 4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10916"} {"prompt": "Complete this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s", "completion": "apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "language": "text", "type": "code-example", "context": "4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10917"} {"prompt": "Explain this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s\napiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "completion": "This text code 4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "language": "text", "type": "code-example_explanation", "context": "4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10918"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This bash code 4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10919"} {"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10920"} {"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: 
\"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code 4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10921"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10922"} {"prompt": "Explain this yaml 
code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code 5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10923"} {"prompt": "Complete this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy 
requests`\n\n# …and other flags as usual", "language": "text", "type": "code-example", "context": "10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10924"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This text code 10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "text", "type": "code-example_explanation", "context": "10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10925"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n 
\"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10926"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code 1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "language": "json", "type": "api-response_explanation", "context": "1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10927"} {"prompt": "Explain this shell code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This shell code 4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "shell", "type": "code-example_explanation", "context": "4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. 
Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10928"} {"prompt": "Complete this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s", "completion": "apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "language": "text", "type": "code-example", "context": "4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10929"} {"prompt": "Explain this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s\napiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "completion": "This text code 4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "text", "type": "code-example_explanation", "context": "4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10930"} {"prompt": "Explain this shell code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This shell code 4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "shell", "type": "code-example_explanation", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10931"} {"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10932"} {"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n 
holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code 4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10933"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10934"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - 
'*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code 5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10935"} {"prompt": "Explain this bash code:\n\nkubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace", "completion": "This bash code 9 - Garbage Collection | Kubernetes checks for and deletes objects that no longer have owner references, like the pods left behind when you delete a ReplicaSet. When you delete an object, you can control whether Kubernetes de", "language": "bash", "type": "kubectl-command_explanation", "context": "9 - Garbage Collection | Kubernetes checks for and deletes objects that no longer have owner references, like the pods left behind when you delete a ReplicaSet. 
When you delete an object, you can control whether Kubernetes de", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10936"} {"prompt": "Complete this shell code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "shell", "type": "code-example", "context": "10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10937"} {"prompt": "Explain this shell code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional 
flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This shell code 10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "shell", "type": "code-example_explanation", "context": "10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10938"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10939"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code 1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. 
If the node is hea", "language": "json", "type": "api-response_explanation", "context": "1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10940"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This bash code 4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10941"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This bash code 4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10942"} {"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10943"} {"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n 
holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code 4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10944"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10945"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - 
'*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code 5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10946"} {"prompt": "Complete this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "text", "type": "code-example", "context": "10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present 
its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_completion_10947"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This text code 10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "text", "type": "code-example_explanation", "context": "10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/architecture/_print/", "id": "k8s_00411_code_explain_10948"} {"prompt": "Complete this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec['initContainers', 'containers'][*].image}\" |\\\ntr -s '[[:space:]]' '\\n' |\\", "completion": "sort |\\\nuniq -c", "language": "bash", "type": "kubectl-command", "context": "List All Container Images Running in a Cluster | The jsonpath 
is interpreted as follows:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_completion_10949"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec['initContainers', 'containers'][*].image}\" |\\\ntr -s '[[:space:]]' '\\n' |\\\nsort |\\\nuniq -c", "completion": "This bash code List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10950"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\":\\t\"}{range .spec.containers[*]}{.image}{\", \"}{end}{end}' |\\\nsort", "completion": "This bash code List All Container Images Running in a Cluster | To target only Pods matching a specific label, use the -l flag. The following matches only Pods with labels matching app=nginx.", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | To target only Pods matching a specific label, use the -l flag. The following matches only Pods with labels matching app=nginx.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10951"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec.containers[*].image}\" -l app=nginx", "completion": "This bash code List All Container Images Running in a Cluster | To target only pods in a specific namespace, use the namespace flag. 
The following matches only Pods in the kube-system namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | To target only pods in a specific namespace, use the namespace flag. The following matches only Pods in the kube-system namespace.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10952"} {"prompt": "Explain this bash code:\n\nkubectl get pods --namespace kube-system -o jsonpath=\"{.items[*].spec.containers[*].image}\"", "completion": "This bash code List All Container Images Running in a Cluster | As an alternative to jsonpath, Kubectl supports using go-templates for formatting the output:", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | As an alternative to jsonpath, Kubectl supports using go-templates for formatting the output:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10953"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o go-template --template=\"{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}\"", "completion": "This bash code List All Container Images Running in a Cluster |", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster |", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10954"} {"prompt": "Explain this text code:\n\n-o jsonpath={.items[*].spec['initContainers', 'containers'][*].image}", "completion": "This text code List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "language": "text", "type": "code-example_explanation", "context": "List All Container Images Running in a 
Cluster | The jsonpath is interpreted as follows:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10955"} {"prompt": "Complete this shell code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec['initContainers', 'containers'][*].image}\" |\\\ntr -s '[[:space:]]' '\\n' |\\", "completion": "sort |\\\nuniq -c", "language": "shell", "type": "code-example", "context": "List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_completion_10956"} {"prompt": "Explain this shell code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec['initContainers', 'containers'][*].image}\" |\\\ntr -s '[[:space:]]' '\\n' |\\\nsort |\\\nuniq -c", "completion": "This shell code List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "language": "shell", "type": "code-example_explanation", "context": "List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10957"} {"prompt": "Explain this shell code:\n\nkubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\":\\t\"}{range .spec.containers[*]}{.image}{\", \"}{end}{end}' |\\\nsort", "completion": "This shell code List All Container Images Running in a Cluster | To target only Pods matching a specific label, use the -l flag. The following matches only Pods with labels matching app=nginx.", "language": "shell", "type": "code-example_explanation", "context": "List All Container Images Running in a Cluster | To target only Pods matching a specific label, use the -l flag. 
The following matches only Pods with labels matching app=nginx.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10958"} {"prompt": "Explain this shell code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec.containers[*].image}\" -l app=nginx", "completion": "This shell code List All Container Images Running in a Cluster | To target only pods in a specific namespace, use the namespace flag. The following matches only Pods in the kube-system namespace.", "language": "shell", "type": "code-example_explanation", "context": "List All Container Images Running in a Cluster | To target only pods in a specific namespace, use the namespace flag. The following matches only Pods in the kube-system namespace.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10959"} {"prompt": "Explain this shell code:\n\nkubectl get pods --namespace kube-system -o jsonpath=\"{.items[*].spec.containers[*].image}\"", "completion": "This shell code List All Container Images Running in a Cluster | As an alternative to jsonpath, Kubectl supports using go-templates for formatting the output:", "language": "shell", "type": "code-example_explanation", "context": "List All Container Images Running in a Cluster | As an alternative to jsonpath, Kubectl supports using go-templates for formatting the output:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10960"} {"prompt": "Explain this shell code:\n\nkubectl get pods --all-namespaces -o go-template --template=\"{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}\"", "completion": "This shell code List All Container Images Running in a Cluster |", "language": "shell", "type": "code-example_explanation", "context": "List All Container Images Running in a 
Cluster |", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10961"} {"prompt": "Complete this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec['initContainers', 'containers'][*].image}\" |\\\ntr -s '[[:space:]]' '\\n' |\\", "completion": "sort |\\\nuniq -c", "language": "bash", "type": "kubectl-command", "context": "List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_completion_10962"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec['initContainers', 'containers'][*].image}\" |\\\ntr -s '[[:space:]]' '\\n' |\\\nsort |\\\nuniq -c", "completion": "This bash code List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | The jsonpath is interpreted as follows:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10963"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\":\\t\"}{range .spec.containers[*]}{.image}{\", \"}{end}{end}' |\\\nsort", "completion": "This bash code List All Container Images Running in a Cluster | To target only Pods matching a specific label, use the -l flag. The following matches only Pods with labels matching app=nginx.", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | To target only Pods matching a specific label, use the -l flag. 
The following matches only Pods with labels matching app=nginx.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10964"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o jsonpath=\"{.items[*].spec.containers[*].image}\" -l app=nginx", "completion": "This bash code List All Container Images Running in a Cluster | To target only pods in a specific namespace, use the namespace flag. The following matches only Pods in the kube-system namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | To target only pods in a specific namespace, use the namespace flag. The following matches only Pods in the kube-system namespace.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10965"} {"prompt": "Explain this bash code:\n\nkubectl get pods --namespace kube-system -o jsonpath=\"{.items[*].spec.containers[*].image}\"", "completion": "This bash code List All Container Images Running in a Cluster | As an alternative to jsonpath, Kubectl supports using go-templates for formatting the output:", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a Cluster | As an alternative to jsonpath, Kubectl supports using go-templates for formatting the output:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10966"} {"prompt": "Explain this bash code:\n\nkubectl get pods --all-namespaces -o go-template --template=\"{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}\"", "completion": "This bash code List All Container Images Running in a Cluster |", "language": "bash", "type": "kubectl-command_explanation", "context": "List All Container Images Running in a 
Cluster |", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/", "id": "k8s_00412_code_explain_10967"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },", "completion": " \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "language": "json", "type": "api-response", "context": "Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10968"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },\n \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "completion": "This json code Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "language": "json", "type": "api-response_explanation", "context": "Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. 
For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10969"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },", "completion": " \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10970"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },\n \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "completion": "This json code Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "language": "json", "type": "api-response_explanation", "context": "Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10971"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\", "completion": " --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "language": "bash", "type": "kubectl-command", "context": "Images | If you already have a Docker credentials file then, rather than using the above command, you can import the 
credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10972"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\\n --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "completion": "This bash code Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "language": "bash", "type": "kubectl-command_explanation", "context": "Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10973"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo", "completion": " image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "language": "yaml", "type": "pod-definition", "context": "Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10974"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo\n image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "completion": "This 
yaml code Images | This needs to be done for each Pod that is using a private registry.", "language": "yaml", "type": "pod-definition_explanation", "context": "Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10975"} {"prompt": "Explain this text code:\n\nsha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07", "completion": "This text code Images | Some image name examples that Kubernetes can use are:", "language": "text", "type": "code-example_explanation", "context": "Images | Some image name examples that Kubernetes can use are:", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10976"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07", "completion": "This text code Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all containers in", "language": "text", "type": "code-example_explanation", "context": "Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all containers in", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10977"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/pause:3.5@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07", "completion": "This text code Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all containers in", "language": "text", "type": "code-example_explanation", 
"context": "Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all containers in", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10978"} {"prompt": "Explain this text code:\n\nimage@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2", "completion": "This text code Images | When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely i", "language": "text", "type": "code-example_explanation", "context": "Images | When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely i", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10979"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },", "completion": " \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "language": "json", "type": "api-response", "context": "Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10980"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },\n \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "completion": "This json code Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. 
For example, the following container image names would match successfully:", "language": "json", "type": "api-response_explanation", "context": "Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10981"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },", "completion": " \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10982"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },\n \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "completion": "This json code Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "language": "json", "type": "api-response_explanation", "context": "Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10983"} {"prompt": "Complete this shell code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\", "completion": " --docker-username=<docker-user> 
\\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "language": "shell", "type": "code-example", "context": "Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10984"} {"prompt": "Explain this shell code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\\n --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "completion": "This shell code Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "language": "shell", "type": "code-example_explanation", "context": "Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. 
Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10985"} {"prompt": "Complete this shell code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo", "completion": " image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "language": "shell", "type": "code-example", "context": "Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10986"} {"prompt": "Explain this shell code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo\n image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "completion": "This shell code Images | This needs to be done for each Pod that is using a private registry.", "language": "shell", "type": "code-example_explanation", "context": "Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10987"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\", "completion": " --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "language": "bash", "type": "kubectl-command", "context": "Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10992"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\\n --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "completion": "This bash code Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret.
Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10993"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo", "completion": " image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "language": "yaml", "type": "pod-definition", "context": "Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_completion_10994"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo\n image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "completion": "This yaml code Images | This needs to be done for each Pod that is using a private registry.", "language": "yaml", "type": "pod-definition_explanation", "context": "Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/containers/images/", "id": "k8s_00415_code_explain_10995"} {"prompt": "Explain this text code:\n\nkubeadm join phase preflight [api-server-endpoint] [flags]", "completion": "This text code kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_10996"} {"prompt": "Explain this text code:\n\n# Run join pre-flight checks using a config file.\n kubeadm join phase preflight --config kubeadm-config.yaml", "completion": "This text code kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_10997"} {"prompt": "Explain this text code:\n\n# Prepares the machine for serving a control plane\n kubeadm join phase control-plane-prepare all", "completion": "This text code kubeadm join phase | help for control-plane-prepare", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | help for control-plane-prepare", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_10998"} {"prompt": "Explain this text code:\n\nkubeadm join phase control-plane-prepare all [api-server-endpoint] [flags]", "completion": "This text code kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. 
If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_10999"} {"prompt": "Explain this text code:\n\nkubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [flags]", "completion": "This text code kubeadm join phase | Use this key to decrypt the certificate secrets uploaded by init. The certificate key is a hex encoded string that is an AES key of size 32 bytes.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | Use this key to decrypt the certificate secrets uploaded by init. The certificate key is a hex encoded string that is an AES key of size 32 bytes.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_11000"} {"prompt": "Explain this text code:\n\nkubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]", "completion": "This text code kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_11001"} {"prompt": "Explain this text code:\n\nkubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags]", "completion": "This text code kubeadm join phase | Use this key to decrypt the certificate secrets uploaded by init. 
The certificate key is a hex encoded string that is an AES key of size 32 bytes.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | Use this key to decrypt the certificate secrets uploaded by init. The certificate key is a hex encoded string that is an AES key of size 32 bytes.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_11002"} {"prompt": "Explain this text code:\n\nkubeadm join phase control-plane-prepare control-plane [flags]", "completion": "This text code kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_11003"} {"prompt": "Explain this text code:\n\nkubeadm join phase kubelet-start [api-server-endpoint] [flags]", "completion": "This text code kubeadm join phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_11004"} {"prompt": "Explain this text code:\n\n# Joins a machine as a control plane instance\n kubeadm join phase control-plane-join all", "completion": "This text code kubeadm join phase | help for control-plane-join", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | help for control-plane-join", "source": 
"https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_11005"} {"prompt": "Explain this text code:\n\nkubeadm join phase control-plane-join mark-control-plane [flags]", "completion": "This text code kubeadm join phase | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm join phase | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join-phase/", "id": "k8s_00417_code_explain_11006"} {"prompt": "Explain this text code:\n\nValidatingAdmissionPolicyStatusControllerConfiguration", "completion": "This text code kube-controller-manager Configuration (v1alpha1) |
ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for ValidatingAdmissionPolicyStatusController related features.", "language": "text", "type": "code-example_explanation", "context": "kube-controller-manager Configuration (v1alpha1) | ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for ValidatingAdmissionPolicyStatusController related features.", "source": "https://kubernetes.io/docs/reference/config-api/kube-controller-manager-config.v1alpha1/", "id": "k8s_00419_code_explain_11018"} {"prompt": "Explain this text code:\n\nHorizontalPodAutoscalerDownscaleStabilizationWindow", "completion": "This text code kube-controller-manager Configuration (v1alpha1) | HorizontalPodAutoscalerDownscaleStabilizationWindow is a period for which autoscaler will look backwards and not scale down below any recommendation it made during that period.", "language": "text", "type": "code-example_explanation", "context": "kube-controller-manager Configuration (v1alpha1) | HorizontalPodAutoscalerDownscaleStabilizationWindow is a period for which autoscaler will look backwards and not scale down below any recommendation it made during that period.", "source": "https://kubernetes.io/docs/reference/config-api/kube-controller-manager-config.v1alpha1/", "id": "k8s_00419_code_explain_11019"} {"prompt": "Explain this text code:\n\nValidatingAdmissionPolicyStatusControllerConfiguration", "completion": "This text code kube-controller-manager Configuration (v1alpha1) | Appears in:", "language": "text", "type": "code-example_explanation", "context": "kube-controller-manager Configuration (v1alpha1) | Appears in:", "source": "https://kubernetes.io/docs/reference/config-api/kube-controller-manager-config.v1alpha1/", "id": "k8s_00419_code_explain_11020"} {"prompt": "Explain this text code:\n\nValidatingAdmissionPolicyStatusControllerConfiguration", "completion": "This text code kube-controller-manager Configuration (v1alpha1) | © 2025 The Kubernetes
Authors | Documentation Distributed under CC BY 4.0", "language": "text", "type": "code-example_explanation", "context": "kube-controller-manager Configuration (v1alpha1) | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "source": "https://kubernetes.io/docs/reference/config-api/kube-controller-manager-config.v1alpha1/", "id": "k8s_00419_code_explain_11021"} {"prompt": "Explain this bash code:\n\nkubectl apply view-last-applied (TYPE [NAME | -l label] | TYPE/NAME | -f FILENAME)", "completion": "This bash code kubectl apply view-last-applied | Select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl apply view-last-applied | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_view-last-applied/", "id": "k8s_00420_code_explain_11022"} {"prompt": "Complete this text code:\n\n# View the last-applied-configuration annotations by type/name in YAML\n kubectl apply view-last-applied deployment/nginx", "completion": " \n # View the last-applied-configuration annotations by file in JSON\n kubectl apply view-last-applied -f deploy.yaml -o json", "language": "text", "type": "code-example", "context": "kubectl apply view-last-applied | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_view-last-applied/", "id": "k8s_00420_code_completion_11023"} {"prompt": "Explain this text code:\n\n# View the last-applied-configuration annotations by type/name in YAML\n kubectl apply view-last-applied deployment/nginx\n \n # View the last-applied-configuration annotations by file in JSON\n kubectl apply view-last-applied -f deploy.yaml -o json", "completion": "This text code kubectl apply view-last-applied | Select all resources in the namespace 
of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl apply view-last-applied | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_view-last-applied/", "id": "k8s_00420_code_explain_11024"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11028"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf.
Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11029"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11030"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11031"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11032"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11033"} {"prompt": "Explain this text code:\n\n!has(object.spec.volumes) || !object.spec.volumes.exists(v, has(v.gitRepo))", "completion": "This text code Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "language": "text", "type": "code-example_explanation", "context": "Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11034"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11035"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | Kubernetes 1.34 does not include a glusterfs 
volume type.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11036"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux", "completion": " containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11037"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11038"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows", "completion": " containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11039"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows\n containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11040"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa", "completion": " name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "language": "yaml", "type": "pod-definition", "context": "Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11041"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa\n name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "completion": "This yaml code Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An image volume source represents an OCI object (a container 
image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11042"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11043"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11044"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete", "completion": " storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n 
values:\n - example-node", "language": "yaml", "type": "kubernetes-manifest", "context": "Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11045"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11046"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11047"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11048"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n 
fsType: \"<fs-type>\"", "language": "yaml", "type": "pod-definition", "context": "Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11049"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"<fs-type>\"", "completion": "This yaml code Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11050"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "Volumes | Use 
the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11051"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11052"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11053"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11054"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11055"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code Volumes | When this property is 
recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11056"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11057"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. 
Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11058"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11059"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11060"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11061"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11062"} {"prompt": "Explain this cel code:\n\n!has(object.spec.volumes) || !object.spec.volumes.exists(v, has(v.gitRepo))", "completion": "This cel code Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "language": "cel", "type": "code-example_explanation", "context": "Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11063"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11064"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | Kubernetes 1.34 does not include a glusterfs volume 
type.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11065"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux", "completion": " containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11066"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11067"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows", "completion": " containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11068"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows\n containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11069"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa", "completion": " name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "language": "yaml", "type": "pod-definition", "context": "Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11070"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa\n name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "completion": "This yaml code Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An image volume source represents an OCI object (a container 
image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11071"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11072"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11073"} {"prompt": "Explain this text code:\n\nspec.containers[*].volumeMounts.[subPath,subPathExpr]", "completion": "This text code Volumes | The following fields are available for the image type:", "language": "text", "type": "code-example_explanation", "context": "Volumes | The following fields are available for the image type:", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11074"} {"prompt": "Complete 
this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete", "completion": " storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "language": "yaml", "type": "kubernetes-manifest", "context": "Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11075"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11076"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11077"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11078"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n 
fsType: \"<fs-type>\"", "language": "yaml", "type": "pod-definition", "context": "Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11079"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"<fs-type>\"", "completion": "This yaml code Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11080"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "Volumes | Use 
the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11081"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11082"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11083"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11084"} {"prompt": "Explain this text code:\n\n.spec.containers[].volumeMounts[].recursiveReadOnly", "completion": "This text code Volumes | The allowed values are:", "language": "text", "type": "code-example_explanation", "context": "Volumes | The allowed values are:", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11085"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11086"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n 
volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11087"} {"prompt": "Explain this text code:\n\n.status.containerStatuses[].volumeMounts[].recursiveReadOnly", "completion": "This text code Volumes | The following container runtimes are known to support recursive read-only mounts.", "language": "text", "type": "code-example_explanation", "context": "Volumes | The following container runtimes are known to support recursive read-only mounts.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11088"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path 
/etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11089"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11090"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11091"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11092"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11093"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11094"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11095"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: 
git-volume\n    gitRepo:\n      repository: \"git@somewhere:me/my-git-repository.git\"\n      revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11096"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11108"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11109"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11110"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n 
fsType: \"<fs-type>\"", "language": "yaml", "type": "pod-definition", "context": "Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11111"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"<fs-type>\"", "completion": "This yaml code Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11112"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "Volumes | Use 
the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11113"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11114"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11115"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11116"} {"prompt": "Complete this yaml code:\n\nstorage/rro.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox", "completion": " args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11117"} {"prompt": "Explain this yaml code:\n\nstorage/rro.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 
Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11118"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_completion_11119"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # 
/mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/storage/volumes/", "id": "k8s_00421_code_explain_11120"} {"prompt": "Complete this text code:\n\nThis issue sounds more like a request for support and less\nlike an issue specifically for docs. I encourage you to bring\nyour question to the `#kubernetes-users` channel in\n[Kubernetes slack](https://slack.k8s.io/). You can also search\nresources like\n[Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes)", "completion": "for answers to similar questions.\n\nYou can also open issues for Kubernetes functionality in\nhttps://github.com/kubernetes/kubernetes.\n\nIf this is a documentation issue, please re-open this issue.", "language": "text", "type": "code-example", "context": "Reviewing for approvers and reviewers | Sample code bug report response:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_completion_11121"} {"prompt": "Explain this text code:\n\nThis issue sounds more like a request for support and less\nlike an issue specifically for docs. I encourage you to bring\nyour question to the `#kubernetes-users` channel in\n[Kubernetes slack](https://slack.k8s.io/). 
You can also search\nresources like\n[Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes)\nfor answers to similar questions.\n\nYou can also open issues for Kubernetes functionality in\nhttps://github.com/kubernetes/kubernetes.\n\nIf this is a documentation issue, please re-open this issue.", "completion": "This text code Reviewing for approvers and reviewers | Sample code bug report response:", "language": "text", "type": "code-example_explanation", "context": "Reviewing for approvers and reviewers | Sample code bug report response:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_explain_11122"} {"prompt": "Complete this text code:\n\nThis sounds more like an issue with the code than an issue with\nthe documentation. Please open an issue at", "completion": "https://github.com/kubernetes/kubernetes/issues.\n\nIf this is a documentation issue, please re-open this issue.", "language": "text", "type": "code-example", "context": "Reviewing for approvers and reviewers | As an approver, when you review pull requests (PRs), there are various cases where you might do the following:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_completion_11123"} {"prompt": "Explain this text code:\n\nThis sounds more like an issue with the code than an issue with\nthe documentation. 
Please open an issue at\nhttps://github.com/kubernetes/kubernetes/issues.\n\nIf this is a documentation issue, please re-open this issue.", "completion": "This text code Reviewing for approvers and reviewers | As an approver, when you review pull requests (PRs), there are various cases where you might do the following:", "language": "text", "type": "code-example_explanation", "context": "Reviewing for approvers and reviewers | As an approver, when you review pull requests (PRs), there are various cases where you might do the following:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_explain_11124"} {"prompt": "Complete this none code:\n\nThis issue sounds more like a request for support and less\nlike an issue specifically for docs. I encourage you to bring\nyour question to the `#kubernetes-users` channel in\n[Kubernetes slack](https://slack.k8s.io/). You can also search\nresources like\n[Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes)", "completion": "for answers to similar questions.\n\nYou can also open issues for Kubernetes functionality in\nhttps://github.com/kubernetes/kubernetes.\n\nIf this is a documentation issue, please re-open this issue.", "language": "none", "type": "code-example", "context": "Reviewing for approvers and reviewers | Sample code bug report response:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_completion_11125"} {"prompt": "Explain this none code:\n\nThis issue sounds more like a request for support and less\nlike an issue specifically for docs. I encourage you to bring\nyour question to the `#kubernetes-users` channel in\n[Kubernetes slack](https://slack.k8s.io/). 
You can also search\nresources like\n[Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes)\nfor answers to similar questions.\n\nYou can also open issues for Kubernetes functionality in\nhttps://github.com/kubernetes/kubernetes.\n\nIf this is a documentation issue, please re-open this issue.", "completion": "This none code Reviewing for approvers and reviewers | Sample code bug report response:", "language": "none", "type": "code-example_explanation", "context": "Reviewing for approvers and reviewers | Sample code bug report response:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_explain_11126"} {"prompt": "Complete this none code:\n\nThis sounds more like an issue with the code than an issue with\nthe documentation. Please open an issue at", "completion": "https://github.com/kubernetes/kubernetes/issues.\n\nIf this is a documentation issue, please re-open this issue.", "language": "none", "type": "code-example", "context": "Reviewing for approvers and reviewers | As an approver, when you review pull requests (PRs), there are various cases where you might do the following:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_completion_11127"} {"prompt": "Explain this none code:\n\nThis sounds more like an issue with the code than an issue with\nthe documentation. 
Please open an issue at\nhttps://github.com/kubernetes/kubernetes/issues.\n\nIf this is a documentation issue, please re-open this issue.", "completion": "This none code Reviewing for approvers and reviewers | As an approver, when you review pull requests (PRs), there are various cases where you might do the following:", "language": "none", "type": "code-example_explanation", "context": "Reviewing for approvers and reviewers | As an approver, when you review pull requests (PRs), there are various cases where you might do the following:", "source": "https://kubernetes.io/docs/contribute/review/for-approvers/", "id": "k8s_00422_code_explain_11128"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"alice\", \"namespace\": \"*\", \"resource\": \"*\", \"apiGroup\": \"*\"}}", "completion": "This json code Using ABAC Authorization | The kubelet can read any pods:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The kubelet can read any pods:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11129"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"pods\", \"readonly\": true}}", "completion": "This json code Using ABAC Authorization | The kubelet can read and write events:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The kubelet can read and write events:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11130"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"events\"}}", 
"completion": "This json code Using ABAC Authorization | Bob can just read pods in namespace \"projectCaribou\":", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Bob can just read pods in namespace \"projectCaribou\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11131"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"bob\", \"namespace\": \"projectCaribou\", \"resource\": \"pods\", \"readonly\": true}}", "completion": "This json code Using ABAC Authorization | Anyone can make read-only requests to all non-resource paths:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Anyone can make read-only requests to all non-resource paths:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11132"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"group\": \"system:authenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}\n {\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"group\": \"system:unauthenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}", "completion": "This json code Using ABAC Authorization | Complete file example", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Complete file example", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11133"} {"prompt": "Explain this text code:\n\nsystem:serviceaccount:<namespace>:<serviceaccountname>", "completion": "This text code Using ABAC Authorization | Creating a new namespace leads to the creation of a new service account in the following format:", "language": 
"text", "type": "code-example_explanation", "context": "Using ABAC Authorization | Creating a new namespace leads to the creation of a new service account in the following format:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11134"} {"prompt": "Explain this json code:\n\n{\"apiVersion\":\"abac.authorization.kubernetes.io/v1beta1\",\"kind\":\"Policy\",\"spec\":{\"user\":\"system:serviceaccount:kube-system:default\",\"namespace\":\"*\",\"resource\":\"*\",\"apiGroup\":\"*\"}}", "completion": "This json code Using ABAC Authorization | The apiserver will need to be restarted to pick up the new policy lines.", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The apiserver will need to be restarted to pick up the new policy lines.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11135"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"alice\", \"namespace\": \"*\", \"resource\": \"*\", \"apiGroup\": \"*\"}}", "completion": "This json code Using ABAC Authorization | The kubelet can read any pods:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The kubelet can read any pods:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11136"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"pods\", \"readonly\": true}}", "completion": "This json code Using ABAC Authorization | The kubelet can read and write events:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The kubelet can read and write events:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11137"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"events\"}}", "completion": "This json code Using ABAC Authorization | Bob can just read pods in namespace \"projectCaribou\":", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Bob can just read pods in namespace \"projectCaribou\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11138"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"bob\", \"namespace\": \"projectCaribou\", \"resource\": \"pods\", \"readonly\": true}}", "completion": "This json code Using ABAC Authorization | Anyone can make read-only requests to all non-resource paths:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Anyone can make read-only requests to all non-resource paths:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11139"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"group\": \"system:authenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}\n {\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"group\": \"system:unauthenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}", "completion": "This json code Using ABAC Authorization | Complete file example", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Complete file example", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11140"} {"prompt": "Explain this shell code:\n\nsystem:serviceaccount:<namespace>:<serviceaccountname>", "completion": "This shell code Using ABAC Authorization | Creating a new namespace leads to the creation of a new service account in the following format:", "language": "shell", "type": "code-example_explanation", "context": "Using ABAC Authorization | Creating a new namespace leads to the creation of a new service account in the following format:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11141"} {"prompt": "Explain this json code:\n\n{\"apiVersion\":\"abac.authorization.kubernetes.io/v1beta1\",\"kind\":\"Policy\",\"spec\":{\"user\":\"system:serviceaccount:kube-system:default\",\"namespace\":\"*\",\"resource\":\"*\",\"apiGroup\":\"*\"}}", "completion": "This json code Using ABAC Authorization | The apiserver will need to be restarted to pick up the new policy lines.", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The apiserver will need to be restarted to pick up the new policy lines.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11142"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"alice\", \"namespace\": \"*\", \"resource\": \"*\", \"apiGroup\": \"*\"}}", "completion": "This json code Using ABAC Authorization | The kubelet can read any pods:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The kubelet can read any pods:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11143"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": 
\"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"pods\", \"readonly\": true}}", "completion": "This json code Using ABAC Authorization | The kubelet can read and write events:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The kubelet can read and write events:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11144"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"events\"}}", "completion": "This json code Using ABAC Authorization | Bob can just read pods in namespace \"projectCaribou\":", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Bob can just read pods in namespace \"projectCaribou\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11145"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"bob\", \"namespace\": \"projectCaribou\", \"resource\": \"pods\", \"readonly\": true}}", "completion": "This json code Using ABAC Authorization | Anyone can make read-only requests to all non-resource paths:", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Anyone can make read-only requests to all non-resource paths:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11146"} {"prompt": "Explain this json code:\n\n{\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"group\": \"system:authenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}\n {\"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\", \"kind\": \"Policy\", \"spec\": 
{\"group\": \"system:unauthenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}", "completion": "This json code Using ABAC Authorization | Complete file example", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | Complete file example", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11147"} {"prompt": "Explain this text code:\n\nsystem:serviceaccount:<namespace>:<serviceaccountname>", "completion": "This text code Using ABAC Authorization | Creating a new namespace leads to the creation of a new service account in the following format:", "language": "text", "type": "code-example_explanation", "context": "Using ABAC Authorization | Creating a new namespace leads to the creation of a new service account in the following format:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11148"} {"prompt": "Explain this json code:\n\n{\"apiVersion\":\"abac.authorization.kubernetes.io/v1beta1\",\"kind\":\"Policy\",\"spec\":{\"user\":\"system:serviceaccount:kube-system:default\",\"namespace\":\"*\",\"resource\":\"*\",\"apiGroup\":\"*\"}}", "completion": "This json code Using ABAC Authorization | The apiserver will need to be restarted to pick up the new policy lines.", "language": "json", "type": "api-response_explanation", "context": "Using ABAC Authorization | The apiserver will need to be restarted to pick up the new policy lines.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/abac/", "id": "k8s_00426_code_explain_11149"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nauthorization:\n mode: Webhook", "completion": " webhook:\n cacheAuthorizedTTL: \"5m\"\n cacheUnauthorizedTTL: \"30s\"\nserializeImagePulls: false\naddress: \"192.168.0.1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet 
Configuration Directory Merging | Contents of a file in --config-dir directory:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11150"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nauthorization:\n mode: Webhook\n webhook:\n cacheAuthorizedTTL: \"5m\"\n cacheUnauthorizedTTL: \"30s\"\nserializeImagePulls: false\naddress: \"192.168.0.1\"", "completion": "This yaml code Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11151"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthorization:\n mode: AlwaysAllow", "completion": " webhook:\n cacheAuthorizedTTL: \"8m\"\n cacheUnauthorizedTTL: \"45s\"\naddress: \"192.168.0.8\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11152"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthorization:\n mode: AlwaysAllow\n webhook:\n cacheAuthorizedTTL: \"8m\"\n cacheUnauthorizedTTL: \"45s\"\naddress: \"192.168.0.8\"", "completion": "This yaml code Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "source": 
"https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11153"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nauthorization:", "completion": " mode: AlwaysAllow\n webhook:\n cacheAuthorizedTTL: \"8m\"\n cacheUnauthorizedTTL: \"45s\"\naddress: \"192.168.0.8\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging | You can override the slices/lists values of the kubelet configuration. However, the entire list gets overridden during the merging process. For example, you can override the clusterDNS list as follows", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11154"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nauthorization:\n mode: AlwaysAllow\n webhook:\n cacheAuthorizedTTL: \"8m\"\n cacheUnauthorizedTTL: \"45s\"\naddress: \"192.168.0.8\"", "completion": "This yaml code Kubelet Configuration Directory Merging | You can override the slices/lists values of the kubelet configuration. However, the entire list gets overridden during the merging process. For example, you can override the clusterDNS list as follows", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | You can override the slices/lists values of the kubelet configuration. However, the entire list gets overridden during the merging process. 
For example, you can override the clusterDNS list as follows", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11155"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250", "completion": "serializeImagePulls: false\nclusterDNS:\n - \"192.168.0.9\"\n - \"192.168.0.8\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11156"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nclusterDNS:\n - \"192.168.0.9\"\n - \"192.168.0.8\"", "completion": "This yaml code Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11157"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nclusterDNS:", "completion": " - \"192.168.0.2\"\n - \"192.168.0.3\"\n - \"192.168.0.5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11158"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nclusterDNS:\n - \"192.168.0.2\"\n - \"192.168.0.3\"\n - \"192.168.0.5\"", "completion": 
"This yaml code Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11159"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false", "completion": "clusterDNS:\n - \"192.168.0.2\"\n - \"192.168.0.3\"\n - \"192.168.0.5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging | Individual fields in maps, regardless of their value types (boolean, string, etc.), can be selectively overridden. However, for map[string][]string, the entire list associated with a specific field ge", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11160"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nclusterDNS:\n - \"192.168.0.2\"\n - \"192.168.0.3\"\n - \"192.168.0.5\"", "completion": "This yaml code Kubelet Configuration Directory Merging | Individual fields in maps, regardless of their value types (boolean, string, etc.), can be selectively overridden. However, for map[string][]string, the entire list associated with a specific field ge", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | Individual fields in maps, regardless of their value types (boolean, string, etc.), can be selectively overridden. 
However, for map[string][]string, the entire list associated with a specific field ge", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11161"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nfeatureGates:\n AllAlpha: false\n MemoryQoS: true", "completion": "staticPodURLHeader:\n kubelet-api-support:\n - \"Authorization: 234APSDFA\"\n - \"X-Custom-Header: 123\"\n custom-static-pod:\n - \"Authorization: 223EWRWER\"\n - \"X-Custom-Header: 456\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11162"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nfeatureGates:\n AllAlpha: false\n MemoryQoS: true\nstaticPodURLHeader:\n kubelet-api-support:\n - \"Authorization: 234APSDFA\"\n - \"X-Custom-Header: 123\"\n custom-static-pod:\n - \"Authorization: 223EWRWER\"\n - \"X-Custom-Header: 456\"", "completion": "This yaml code Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | Contents of a file in --config-dir directory:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11163"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:\n MemoryQoS: false\n KubeletTracing: true", "completion": " DynamicResourceAllocation: true\nstaticPodURLHeader:\n custom-static-pod:\n - \"Authorization: 
223EWRWER\"\n - \"X-Custom-Header: 345\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11164"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:\n MemoryQoS: false\n KubeletTracing: true\n DynamicResourceAllocation: true\nstaticPodURLHeader:\n custom-static-pod:\n - \"Authorization: 223EWRWER\"\n - \"X-Custom-Header: 345\"", "completion": "This yaml code Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging | The resulting configuration will be as follows:", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11165"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nfeatureGates:\n AllAlpha: false\n MemoryQoS: false\n KubeletTracing: true", "completion": " DynamicResourceAllocation: true\nstaticPodURLHeader:\n kubelet-api-support:\n - \"Authorization: 234APSDFA\"\n - \"X-Custom-Header: 123\"\n custom-static-pod:\n - \"Authorization: 223EWRWER\"\n - \"X-Custom-Header: 345\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubelet Configuration Directory Merging |", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_completion_11166"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nfeatureGates:\n AllAlpha: false\n MemoryQoS: false\n KubeletTracing: true\n 
DynamicResourceAllocation: true\nstaticPodURLHeader:\n kubelet-api-support:\n - \"Authorization: 234APSDFA\"\n - \"X-Custom-Header: 123\"\n custom-static-pod:\n - \"Authorization: 223EWRWER\"\n - \"X-Custom-Header: 345\"", "completion": "This yaml code Kubelet Configuration Directory Merging |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging |", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11167"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nfeatureGates:\n AllAlpha: false\n MemoryQoS: false\n KubeletTracing: true\n 
DynamicResourceAllocation: true\nstaticPodURLHeader:\n kubelet-api-support:\n - \"Authorization: 234APSDFA\"\n - \"X-Custom-Header: 123\"\n custom-static-pod:\n - \"Authorization: 223EWRWER\"\n - \"X-Custom-Header: 345\"", "completion": "This yaml code Kubelet Configuration Directory Merging |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubelet Configuration Directory Merging |", "source": "https://kubernetes.io/docs/reference/node/kubelet-config-directory-merging/", "id": "k8s_00431_code_explain_11203"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1", "completion": "kind: KubeProxyConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ResetConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: UpgradeConfiguration", "language": "yaml", "type": "kubernetes-manifest", "context": "Migration from old kubeadm config versions | To print the defaults for init and join actions use the following commands:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_completion_11204"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ResetConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: UpgradeConfiguration", "completion": "This yaml code Migration from old kubeadm config versions | To print the defaults for init 
and join actions use the following commands:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | To print the defaults for init and join actions use the following commands:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11205"} {"prompt": "Complete this text code:\n\nkubeadm config print init-defaults\nkubeadm config print join-defaults", "completion": "kubeadm config print reset-defaults\nkubeadm config print upgrade-defaults", "language": "text", "type": "code-example", "context": "Migration from old kubeadm config versions | The list of configuration types that must be included in a configuration file depends by the action you are performing (init or join) and by the configuration options you are going to use (defaults or", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_completion_11206"} {"prompt": "Explain this text code:\n\nkubeadm config print init-defaults\nkubeadm config print join-defaults\nkubeadm config print reset-defaults\nkubeadm config print upgrade-defaults", "completion": "This text code Migration from old kubeadm config versions | The list of configuration types that must be included in a configuration file depends by the action you are performing (init or join) and by the configuration options you are going to use (defaults or", "language": "text", "type": "code-example_explanation", "context": "Migration from old kubeadm config versions | The list of configuration types that must be included in a configuration file depends by the action you are performing (init or join) and by the configuration options you are going to use (defaults or", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11207"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: 
InitConfiguration\nbootstrapTokens:", "completion": "...\nnodeRegistration:\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Migration from old kubeadm config versions | The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init are the configuration of the bootstrap token and all the setting which are specific to the node wh", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_completion_11208"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nbootstrapTokens:\n...\nnodeRegistration:\n...", "completion": "This yaml code Migration from old kubeadm config versions | The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init are the configuration of the bootstrap token and all the setting which are specific to the node wh", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init are the configuration of the bootstrap token and all the setting which are specific to the node wh", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11209"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nnetworking:\n ...\netcd:\n ...", "completion": "apiServer:\n extraArgs:\n ...\n extraVolumes:\n ...\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Migration from old kubeadm config versions | The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_completion_11210"} {"prompt": "Explain this yaml code:\n\napiVersion: 
kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nnetworking:\n ...\netcd:\n ...\napiServer:\n extraArgs:\n ...\n extraVolumes:\n ...\n...", "completion": "This yaml code Migration from old kubeadm config versions | The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11211"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n...", "completion": "This yaml code Migration from old kubeadm config versions | The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm ap", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11212"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n...", "completion": "This yaml code Migration from old kubeadm config versions | The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. 
If this object is not provided or provided only partiall", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partiall", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11213"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nbootstrapTokens:\n - token: \"9a08jv.c0izixklcxtmnze7\"\n description: \"kubeadm bootstrap token\"\n ttl: \"24h\"\n - token: \"783bde.3f89s0fje9f38fhf\"\n description: \"another bootstrap token\"\n usages:\n - authentication\n - signing\n groups:\n - system:bootstrappers:kubeadm:default-node-token\n\nnodeRegistration:\n name: \"ec2-10-100-0-1\"\n criSocket: \"unix:///var/run/containerd/containerd.sock\"\n taints:\n - key: \"kubeadmNode\"\n value: \"someValue\"\n effect: \"NoSchedule\"\n kubeletExtraArgs:\n - name: v\n value: \"5\"\n ignorePreflightErrors:\n - IsPrivilegedUser\n imagePullPolicy: \"IfNotPresent\"\n imagePullSerial: true\n\nlocalAPIEndpoint:\n advertiseAddress: \"10.100.0.1\"\n bindPort: 6443\ncertificateKey: \"e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204\"\nskipPhases:\n - preflight\ntimeouts:\n controlPlaneComponentHealthCheck: \"60s\"\n kubernetesAPICall: \"40s\"\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\netcd:\n\n # one of local or external\n local:\n imageRepository: \"registry.k8s.io\"\n imageTag: \"3.2.24\"\n dataDir: \"/var/lib/etcd\"\n extraArgs:\n - name: listen-client-urls\n value: http://10.100.0.1:2379\n extraEnvs:\n - name: SOME_VAR\n value: SOME_VALUE\n serverCertSANs:\n - ec2-10-100-0-1.compute-1.amazonaws.com\n peerCertSANs:\n - 10.100.0.1\n # external:\n # endpoints:\n # - 10.100.0.1:2379\n # - 
10.100.0.2:2379\n # caFile: \"/etcd/kubernetes/pki/etcd/etcd-ca.crt\"\n # certFile: \"/etcd/kubernetes/pki/etcd/etcd.crt\"", "completion": " # keyFile: \"/etcd/kubernetes/pki/etcd/etcd.key\"\n\nnetworking:\n serviceSubnet: \"10.96.0.0/16\"\n podSubnet: \"10.244.0.0/24\"\n dnsDomain: \"cluster.local\"\nkubernetesVersion: \"v1.21.0\"\ncontrolPlaneEndpoint: \"10.100.0.1:6443\"\napiServer:\n extraArgs:\n - name: authorization-mode\n value: Node,RBAC\n extraEnvs:\n - name: SOME_VAR\n value: SOME_VALUE\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n certSANs:\n - \"10.100.1.1\"\n - \"ec2-10-100-0-1.compute-1.amazonaws.com\"\n\ncontrollerManager:\n extraArgs:\n - name: node-cidr-mask-size\n value: \"20\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n\nscheduler:\n extraArgs:\n - name: address\n value: 10.100.0.1\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n\ncertificatesDir: \"/etc/kubernetes/pki\"\nimageRepository: \"registry.k8s.io\"\nclusterName: \"example-cluster\"\nencryptionAlgorithm: ECDSA-P256\ndns:\n disabled: true # disable CoreDNS\nproxy:\n disabled: true # disable kube-proxy\n\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# kubelet specific options here\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# kube-proxy specific options here", "language": "yaml", "type": "kubernetes-manifest", "context": "Migration from old kubeadm config versions | When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_completion_11214"} {"prompt": "Explain this yaml 
code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nbootstrapTokens:\n - token: \"9a08jv.c0izixklcxtmnze7\"\n description: \"kubeadm bootstrap token\"\n ttl: \"24h\"\n - token: \"783bde.3f89s0fje9f38fhf\"\n description: \"another bootstrap token\"\n usages:\n - authentication\n - signing\n groups:\n - system:bootstrappers:kubeadm:default-node-token\n\nnodeRegistration:\n name: \"ec2-10-100-0-1\"\n criSocket: \"unix:///var/run/containerd/containerd.sock\"\n taints:\n - key: \"kubeadmNode\"\n value: \"someValue\"\n effect: \"NoSchedule\"\n kubeletExtraArgs:\n - name: v\n value: \"5\"\n ignorePreflightErrors:\n - IsPrivilegedUser\n imagePullPolicy: \"IfNotPresent\"\n imagePullSerial: true\n\nlocalAPIEndpoint:\n advertiseAddress: \"10.100.0.1\"\n bindPort: 6443\ncertificateKey: \"e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204\"\nskipPhases:\n - preflight\ntimeouts:\n controlPlaneComponentHealthCheck: \"60s\"\n kubernetesAPICall: \"40s\"\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\netcd:\n\n # one of local or external\n local:\n imageRepository: \"registry.k8s.io\"\n imageTag: \"3.2.24\"\n dataDir: \"/var/lib/etcd\"\n extraArgs:\n - name: listen-client-urls\n value: http://10.100.0.1:2379\n extraEnvs:\n - name: SOME_VAR\n value: SOME_VALUE\n serverCertSANs:\n - ec2-10-100-0-1.compute-1.amazonaws.com\n peerCertSANs:\n - 10.100.0.1\n # external:\n # endpoints:\n # - 10.100.0.1:2379\n # - 10.100.0.2:2379\n # caFile: \"/etcd/kubernetes/pki/etcd/etcd-ca.crt\"\n # certFile: \"/etcd/kubernetes/pki/etcd/etcd.crt\"\n # keyFile: \"/etcd/kubernetes/pki/etcd/etcd.key\"\n\nnetworking:\n serviceSubnet: \"10.96.0.0/16\"\n podSubnet: \"10.244.0.0/24\"\n dnsDomain: \"cluster.local\"\nkubernetesVersion: \"v1.21.0\"\ncontrolPlaneEndpoint: \"10.100.0.1:6443\"\napiServer:\n extraArgs:\n - name: authorization-mode\n value: Node,RBAC\n extraEnvs:\n - name: SOME_VAR\n value: SOME_VALUE\n extraVolumes:\n - name: \"some-volume\"\n 
hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n certSANs:\n - \"10.100.1.1\"\n - \"ec2-10-100-0-1.compute-1.amazonaws.com\"\n\ncontrollerManager:\n extraArgs:\n - name: node-cidr-mask-size\n value: \"20\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n\nscheduler:\n extraArgs:\n - name: address\n value: 10.100.0.1\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n\ncertificatesDir: \"/etc/kubernetes/pki\"\nimageRepository: \"registry.k8s.io\"\nclusterName: \"example-cluster\"\nencryptionAlgorithm: ECDSA-P256\ndns:\n disabled: true # disable CoreDNS\nproxy:\n disabled: true # disable kube-proxy\n\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# kubelet specific options here\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# kube-proxy specific options here", "completion": "This yaml code Migration from old kubeadm config versions | When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11215"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\ndiscovery:\n bootstrapToken:", "completion": " apiServerEndpoint: some-address:6443\n token: abcdef.0123456789abcdef\n unsafeSkipCAVerification: true\n tlsBootstrapToken: abcdef.0123456789abcdef", "language": "yaml", "type": "kubernetes-manifest", "context": "Migration from old kubeadm config 
versions | The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join are the discovery method used for accessing the cluster info and all the setting which are specifi", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_completion_11216"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: JoinConfiguration\ndiscovery:\n bootstrapToken:\n apiServerEndpoint: some-address:6443\n token: abcdef.0123456789abcdef\n unsafeSkipCAVerification: true\n tlsBootstrapToken: abcdef.0123456789abcdef", "completion": "This yaml code Migration from old kubeadm config versions | The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join are the discovery method used for accessing the cluster info and all the setting which are specifi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join are the discovery method used for accessing the cluster info and all the setting which are specifi", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11217"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ResetConfiguration\n...", "completion": "This yaml code Migration from old kubeadm config versions | When executing kubeadm upgrade with the --config option, the UpgradeConfiguration type should be provided.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | When executing kubeadm upgrade with the --config option, the UpgradeConfiguration type should be provided.", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11218"} {"prompt": 
"Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: UpgradeConfiguration\napply:\n ...\ndiff:", "completion": " ...\nnode:\n ...\nplan:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Migration from old kubeadm config versions | The UpgradeConfiguration structure includes a few substructures that only apply to different subcommands of kubeadm upgrade. For example, the apply substructure will be used with the kubeadm upgrade a", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_completion_11219"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: UpgradeConfiguration\napply:\n ...\ndiff:\n ...\nnode:\n ...\nplan:\n ...", "completion": "This yaml code Migration from old kubeadm config versions | The UpgradeConfiguration structure includes a few substructures that only apply to different subcommands of kubeadm upgrade. For example, the apply substructure will be used with the kubeadm upgrade a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migration from old kubeadm config versions | The UpgradeConfiguration structure includes a few substructures that only apply to different subcommands of kubeadm upgrade. For example, the apply substructure will be used with the kubeadm upgrade a", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta4/", "id": "k8s_00432_code_explain_11220"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". 
There is no\n # end user visible difference between the two modes. You need to set the", "completion": " # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11221"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". There is no\n # end user visible difference between the two modes. You need to set the\n # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". 
You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "completion": "This yaml code Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11222"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n volumeMounts:\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server", "completion": " readOnly: false\n volumes:\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "language": "text", "type": "code-example", "context": "Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11223"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n volumeMounts:\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "completion": "This text code Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. 
For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "language": "text", "type": "code-example_explanation", "context": "Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11224"} {"prompt": "Complete this text code:\n\nopenssl req -subj \"/CN=system:konnectivity-server\" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key\nopenssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256\nSERVER=$(kubectl config view -o jsonpath='{.clusters..server}')\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true", "completion": "kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server \"$SERVER\" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes\nrm -f konnectivity.crt konnectivity.key konnectivity.csr", "language": "text", "type": "code-example", "context": "Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents. 
kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11225"} {"prompt": "Explain this text code:\n\nopenssl req -subj \"/CN=system:konnectivity-server\" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key\nopenssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256\nSERVER=$(kubectl config view -o jsonpath='{.clusters..server}')\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server \"$SERVER\" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes\nrm -f konnectivity.crt konnectivity.key konnectivity.csr", "completion": "This text code Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "language": "text", "type": "code-example_explanation", "context": "Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents. 
kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11226"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n \"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n \"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: HTTP", "completion": " host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: /etc/kubernetes/konnectivity-server.conf\n 
readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "language": "yaml", "type": "pod-definition", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11227"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n \"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n \"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: 
HTTP\n host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: /etc/kubernetes/konnectivity-server.conf\n readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "completion": "This yaml code Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11228"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. 
It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]", "completion": " args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: system:konnectivity-server", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11229"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. 
It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]\n args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: system:konnectivity-server", "completion": "This yaml code Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11230"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: 
ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator\nsubjects:", "completion": " - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "language": "yaml", "type": "service-definition", "context": "Set up Konnectivity service |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11231"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator\nsubjects:\n - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "completion": "This yaml code, from the Set up Konnectivity service task, creates a ClusterRoleBinding that grants the system:auth-delegator ClusterRole to the system:konnectivity-server user, and a konnectivity-agent ServiceAccount in the kube-system namespace.", "language": "yaml", "type": "service-definition_explanation", "context": "Set up Konnectivity service |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11232"} {"prompt": "Explain this text code:\n\nadmin/konnectivity/egress-selector-configuration.yaml", "completion": "This text code, from the Set up Konnectivity service task, is the path of the egress selector configuration file. You need to configure the API Server to use the Konnectivity service and direct the network traffic to 
the cluster nodes:", "language": "text", "type": "code-example_explanation", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11233"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". There is no\n # end user visible difference between the two modes. You need to set the", "completion": " # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". 
You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11234"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". There is no\n # end user visible difference between the two modes. You need to set the\n # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". 
You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "completion": "This yaml code Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11235"} {"prompt": "Explain this text code:\n\nadmin/konnectivity/egress-selector-configuration.yaml", "completion": "This text code Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "language": "text", "type": "code-example_explanation", "context": "Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11236"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n volumeMounts:\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server", "completion": " readOnly: false\n volumes:\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. 
For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11237"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n volumeMounts:\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "completion": "This yaml code Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue a X.509 certificate, using the cluster CA certificate /etc/kube", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11238"} {"prompt": "Complete this bash code:\n\nopenssl req -subj \"/CN=system:konnectivity-server\" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key\nopenssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256\nSERVER=$(kubectl config view -o jsonpath='{.clusters..server}')\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true", "completion": "kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server \"$SERVER\" --certificate-authority 
/etc/kubernetes/pki/ca.crt --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes\nrm -f konnectivity.crt konnectivity.key konnectivity.csr", "language": "bash", "type": "shell-command", "context": "Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11239"} {"prompt": "Explain this bash code:\n\nopenssl req -subj \"/CN=system:konnectivity-server\" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key\nopenssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256\nSERVER=$(kubectl config view -o jsonpath='{.clusters..server}')\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server \"$SERVER\" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes\nrm -f konnectivity.crt konnectivity.key konnectivity.csr", "completion": "This bash code Set up Konnectivity service | Next, you need to deploy the Konnectivity server and 
agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "language": "bash", "type": "shell-command_explanation", "context": "Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11240"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n \"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n \"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: HTTP", "completion": " host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n 
hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: /etc/kubernetes/konnectivity-server.conf\n readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "language": "yaml", "type": "pod-definition", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11241"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n 
\"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n \"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: HTTP\n host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: /etc/kubernetes/konnectivity-server.conf\n readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "completion": "This yaml code Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11242"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. 
It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]", "completion": " args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: system:konnectivity-server", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11243"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. 
It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]\n args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: system:konnectivity-server", "completion": "This yaml code Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11244"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: 
ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator\nsubjects:", "completion": " - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "language": "yaml", "type": "service-definition", "context": "Set up Konnectivity service |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11245"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator\nsubjects:\n - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "completion": "This yaml code, from the Set up Konnectivity service task, creates a ClusterRoleBinding that grants the system:auth-delegator ClusterRole to the system:konnectivity-server user, and a konnectivity-agent ServiceAccount in the kube-system namespace.", "language": "yaml", "type": "service-definition_explanation", "context": "Set up Konnectivity service |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11246"} {"prompt": "Complete this yaml code:\n\nadmin/konnectivity/egress-selector-configuration.yaml\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use 
the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". There is no\n # end user visible difference between the two modes. You need to set the", "completion": " # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11247"} {"prompt": "Explain this yaml code:\n\nadmin/konnectivity/egress-selector-configuration.yaml\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". There is no\n # end user visible difference between the two modes. 
You need to set the\n # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "completion": "This yaml code Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11248"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". There is no\n # end user visible difference between the two modes. You need to set the", "completion": " # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. 
You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11249"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.k8s.io/v1beta1\nkind: EgressSelectorConfiguration\negressSelections:\n# Since we want to control the egress traffic to the cluster, we use the\n# \"cluster\" as the name. Other supported values are \"etcd\", and \"controlplane\".\n- name: cluster\n connection:\n # This controls the protocol between the API Server and the Konnectivity\n # server. Supported values are \"GRPC\" and \"HTTPConnect\". There is no\n # end user visible difference between the two modes. You need to set the\n # Konnectivity server to work in the same mode.\n proxyProtocol: GRPC\n transport:\n # This controls what transport the API Server uses to communicate with the\n # Konnectivity server. UDS is recommended if the Konnectivity server\n # locates on the same machine as the API Server. You need to configure the\n # Konnectivity server to listen on the same UDS socket.\n # The other supported transport is \"tcp\". 
You will need to set up TLS \n # config to secure the TCP transport.\n uds:\n udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket", "completion": "This yaml code Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11250"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n volumeMounts:\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server", "completion": " readOnly: false\n volumes:\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "language": "text", "type": "code-example", "context": "Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue an X.509 certificate, using the cluster CA certificate /etc/kube", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11251"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n volumeMounts:\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "completion": "This text code Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server.
For example, you can use the OpenSSL command line tool to issue an X.509 certificate, using the cluster CA certificate /etc/kube", "language": "text", "type": "code-example_explanation", "context": "Set up Konnectivity service | Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue an X.509 certificate, using the cluster CA certificate /etc/kube", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11252"} {"prompt": "Complete this text code:\n\nopenssl req -subj \"/CN=system:konnectivity-server\" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key\nopenssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256\nSERVER=$(kubectl config view -o jsonpath='{.clusters..server}')\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true", "completion": "kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server \"$SERVER\" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes\nrm -f konnectivity.crt konnectivity.key konnectivity.csr", "language": "text", "type": "code-example", "context": "Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents.
kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11253"} {"prompt": "Explain this text code:\n\nopenssl req -subj \"/CN=system:konnectivity-server\" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key\nopenssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256\nSERVER=$(kubectl config view -o jsonpath='{.clusters..server}')\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server \"$SERVER\" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server\nkubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes\nrm -f konnectivity.crt konnectivity.key konnectivity.csr", "completion": "This text code Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "language": "text", "type": "code-example_explanation", "context": "Set up Konnectivity service | Next, you need to deploy the Konnectivity server and agents. 
kubernetes-sigs/apiserver-network-proxy is a reference implementation.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11254"} {"prompt": "Complete this yaml code:\n\nadmin/konnectivity/konnectivity-server.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n \"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n \"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: HTTP", "completion": " host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: 
/etc/kubernetes/konnectivity-server.conf\n readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "language": "yaml", "type": "pod-definition", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11255"} {"prompt": "Explain this yaml code:\n\nadmin/konnectivity/konnectivity-server.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n \"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n 
\"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: HTTP\n host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: /etc/kubernetes/konnectivity-server.conf\n readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "completion": "This yaml code Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11256"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on 
the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n \"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n \"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: HTTP", "completion": " host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: /etc/kubernetes/konnectivity-server.conf\n readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "language": "yaml", "type": "pod-definition", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11257"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: konnectivity-server\n namespace: kube-system\nspec:\n priorityClassName: system-cluster-critical\n hostNetwork: 
true\n containers:\n - name: konnectivity-server-container\n image: registry.k8s.io/kas-network-proxy/proxy-server:v0.0.37\n command: [\"/proxy-server\"]\n args: [\n \"--logtostderr=true\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket\",\n \"--delete-existing-uds-file\",\n # The following two lines assume the Konnectivity server is\n # deployed on the same machine as the apiserver, and the certs and\n # key of the API Server are at the specified location.\n \"--cluster-cert=/etc/kubernetes/pki/apiserver.crt\",\n \"--cluster-key=/etc/kubernetes/pki/apiserver.key\",\n # This needs to be consistent with the value set in egressSelectorConfiguration.\n \"--mode=grpc\",\n \"--server-port=0\",\n \"--agent-port=8132\",\n \"--admin-port=8133\",\n \"--health-port=8134\",\n \"--agent-namespace=kube-system\",\n \"--agent-service-account=konnectivity-agent\",\n \"--kubeconfig=/etc/kubernetes/konnectivity-server.conf\",\n \"--authentication-audience=system:konnectivity-server\"\n ]\n livenessProbe:\n httpGet:\n scheme: HTTP\n host: 127.0.0.1\n port: 8134\n path: /healthz\n initialDelaySeconds: 30\n timeoutSeconds: 60\n ports:\n - name: agentport\n containerPort: 8132\n hostPort: 8132\n - name: adminport\n containerPort: 8133\n hostPort: 8133\n - name: healthport\n containerPort: 8134\n hostPort: 8134\n volumeMounts:\n - name: k8s-certs\n mountPath: /etc/kubernetes/pki\n readOnly: true\n - name: kubeconfig\n mountPath: /etc/kubernetes/konnectivity-server.conf\n readOnly: true\n - name: konnectivity-uds\n mountPath: /etc/kubernetes/konnectivity-server\n readOnly: false\n volumes:\n - name: k8s-certs\n hostPath:\n path: /etc/kubernetes/pki\n - name: kubeconfig\n hostPath:\n path: /etc/kubernetes/konnectivity-server.conf\n type: FileOrCreate\n - name: konnectivity-uds\n hostPath:\n path: /etc/kubernetes/konnectivity-server\n type: DirectoryOrCreate", "completion": 
"This yaml code Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Set up Konnectivity service | Then deploy the Konnectivity agents in your cluster:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11258"} {"prompt": "Complete this yaml code:\n\nadmin/konnectivity/konnectivity-agent.yaml\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]", "completion": " args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: 
system:konnectivity-server", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11259"} {"prompt": "Explain this yaml code:\n\nadmin/konnectivity/konnectivity-agent.yaml\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]\n args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: system:konnectivity-server", "completion": "This yaml code Set up 
Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11260"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]", "completion": " args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: 
system:konnectivity-server", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11261"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\n# Alternatively, you can deploy the agents as Deployments. It is not necessary\n# to have an agent on each node.\nkind: DaemonSet\nmetadata:\n labels:\n addonmanager.kubernetes.io/mode: Reconcile\n k8s-app: konnectivity-agent\n namespace: kube-system\n name: konnectivity-agent\nspec:\n selector:\n matchLabels:\n k8s-app: konnectivity-agent\n template:\n metadata:\n labels:\n k8s-app: konnectivity-agent\n spec:\n priorityClassName: system-cluster-critical\n tolerations:\n - key: \"CriticalAddonsOnly\"\n operator: \"Exists\"\n containers:\n - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.37\n name: konnectivity-agent\n command: [\"/proxy-agent\"]\n args: [\n \"--logtostderr=true\",\n \"--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n # Since the konnectivity server runs with hostNetwork=true,\n # this is the IP address of the master machine.\n \"--proxy-server-host=35.225.206.7\",\n \"--proxy-server-port=8132\",\n \"--admin-server-port=8133\",\n \"--health-server-port=8134\",\n \"--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token\"\n ]\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: konnectivity-agent-token\n livenessProbe:\n httpGet:\n port: 8134\n path: /healthz\n initialDelaySeconds: 15\n timeoutSeconds: 15\n serviceAccountName: konnectivity-agent\n volumes:\n - name: konnectivity-agent-token\n projected:\n sources:\n - serviceAccountToken:\n path: konnectivity-agent-token\n audience: system:konnectivity-server", "completion": "This yaml code Set up Konnectivity service | Last, if RBAC is enabled in your 
cluster, create the relevant RBAC rules:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up Konnectivity service | Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11262"} {"prompt": "Complete this yaml code:\n\nadmin/konnectivity/konnectivity-rbac.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator", "completion": "subjects:\n - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "language": "yaml", "type": "service-definition", "context": "Set up Konnectivity service |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11263"} {"prompt": "Explain this yaml code:\n\nadmin/konnectivity/konnectivity-rbac.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator\nsubjects:\n - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "completion": "This yaml code 
Set up Konnectivity service |", "language": "yaml", "type": "service-definition_explanation", "context": "Set up Konnectivity service |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11264"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator\nsubjects:", "completion": " - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "language": "yaml", "type": "service-definition", "context": "Set up Konnectivity service |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_completion_11265"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: system:konnectivity-server\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:auth-delegator\nsubjects:\n - apiGroup: rbac.authorization.k8s.io\n kind: User\n name: system:konnectivity-server\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: konnectivity-agent\n namespace: kube-system\n labels:\n kubernetes.io/cluster-service: \"true\"\n addonmanager.kubernetes.io/mode: Reconcile", "completion": "This yaml code Set up Konnectivity service |", "language": "yaml", "type": "service-definition_explanation", "context": "Set up Konnectivity service |", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/", "id": "k8s_00433_code_explain_11266"} {"prompt": "Explain this bash code:\n\nkubectl rollout history (TYPE NAME | TYPE/NAME) [flags]", "completion": "This bash code kubectl rollout history | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl rollout history | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_history/", "id": "k8s_00434_code_explain_11267"} {"prompt": "Complete this text code:\n\n# View the rollout history of a deployment\n kubectl rollout history deployment/abc", "completion": " \n # View the details of daemonset revision 3\n kubectl rollout history daemonset/abc --revision=3", "language": "text", "type": "code-example", "context": "kubectl rollout history | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_history/", "id": "k8s_00434_code_completion_11268"} {"prompt": "Explain this text code:\n\n# View the rollout history of a deployment\n kubectl rollout history deployment/abc\n \n # View the details of daemonset revision 3\n kubectl rollout history daemonset/abc --revision=3", "completion": "This text code kubectl rollout history | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout history | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_history/", "id": "k8s_00434_code_explain_11269"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: storagelimits\nspec:", "completion": " limits:\n - type: PersistentVolumeClaim\n max:\n storage: 2Gi\n min:\n storage: 1Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Limit Storage Consumption | Minimum storage requests are used when the underlying storage provider requires certain minimums. For example, AWS EBS volumes have a 1Gi minimum requirement.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/limit-storage-consumption/", "id": "k8s_00436_code_completion_11273"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: storagelimits\nspec:\n limits:\n - type: PersistentVolumeClaim\n max:\n storage: 2Gi\n min:\n storage: 1Gi", "completion": "This yaml code Limit Storage Consumption | Minimum storage requests are used when the underlying storage provider requires certain minimums. 
For example, AWS EBS volumes have a 1Gi minimum requirement.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Limit Storage Consumption | Minimum storage requests are used when the underlying storage provider requires certain minimums. For example, AWS EBS volumes have a 1Gi minimum requirement.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/limit-storage-consumption/", "id": "k8s_00436_code_explain_11274"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: storagequota", "completion": "spec:\n hard:\n persistentvolumeclaims: \"5\"\n requests.storage: \"5Gi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Limit Storage Consumption | A limit range can put a ceiling on how much storage is requested while a resource quota can effectively cap the storage consumed by a namespace through claim counts and cumulative storage capacity. Th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/limit-storage-consumption/", "id": "k8s_00436_code_completion_11275"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: storagequota\nspec:\n hard:\n persistentvolumeclaims: \"5\"\n requests.storage: \"5Gi\"", "completion": "This yaml code Limit Storage Consumption | A limit range can put a ceiling on how much storage is requested while a resource quota can effectively cap the storage consumed by a namespace through claim counts and cumulative storage capacity. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Limit Storage Consumption | A limit range can put a ceiling on how much storage is requested while a resource quota can effectively cap the storage consumed by a namespace through claim counts and cumulative storage capacity. 
Th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/limit-storage-consumption/", "id": "k8s_00436_code_explain_11276"} {"prompt": "Explain this bash code:\n\nkubectl events [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file] [--for TYPE/NAME] [--watch] [--types=Normal,Warning]", "completion": "This bash code kubectl events | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl events | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_events/", "id": "k8s_00437_code_explain_11285"} {"prompt": "Complete this text code:\n\n# List recent events in the default namespace\n kubectl events\n \n # List recent events in all namespaces\n kubectl events --all-namespaces\n \n # List recent events for the specified pod, then wait for more events and list them as they arrive", "completion": " kubectl events --for pod/web-pod-13je7 --watch\n \n # List recent events in YAML format\n kubectl events -oyaml\n \n # List recent only events of type 'Warning' or 'Normal'\n kubectl events --types=Warning,Normal", "language": "text", "type": "code-example", "context": "kubectl events | If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_events/", "id": "k8s_00437_code_completion_11286"} {"prompt": "Explain this text code:\n\n# List recent events in the default namespace\n kubectl events\n \n # List recent events in all namespaces\n kubectl events --all-namespaces\n \n # List recent events for the specified pod, then wait for more events and list them as they arrive\n kubectl events --for pod/web-pod-13je7 --watch\n \n # List recent events in YAML format\n kubectl events -oyaml\n \n # List recent only events of type 'Warning' or 'Normal'\n kubectl events --types=Warning,Normal", "completion": "This text code kubectl events | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "text", "type": "code-example_explanation", "context": "kubectl events | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_events/", "id": "k8s_00437_code_explain_11287"} {"prompt": "Complete this text code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.16.15 docker://19.3.1", "completion": "node-2 Ready v1.16.15 docker://19.3.1\nnode-3 Ready v1.16.15 docker://19.3.1", "language": "text", "type": "code-example", "context": "Find Out What Container Runtime is Used on a Node | If your runtime shows as Docker Engine, you still might not be affected by the removal of dockershim in Kubernetes v1.24. Check the runtime endpoint to see if you use dockershim. If you don't use dock", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_completion_11291"} {"prompt": "Explain this text code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.16.15 docker://19.3.1\nnode-2 Ready v1.16.15 docker://19.3.1\nnode-3 Ready v1.16.15 docker://19.3.1", "completion": "This text code Find Out What Container Runtime is Used on a Node | If your runtime shows as Docker Engine, you still might not be affected by the removal of dockershim in Kubernetes v1.24. Check the runtime endpoint to see if you use dockershim. If you don't use dock", "language": "text", "type": "code-example_explanation", "context": "Find Out What Container Runtime is Used on a Node | If your runtime shows as Docker Engine, you still might not be affected by the removal of dockershim in Kubernetes v1.24. Check the runtime endpoint to see if you use dockershim. 
If you don't use dock", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_explain_11292"} {"prompt": "Complete this text code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.19.6 containerd://1.4.1", "completion": "node-2 Ready v1.19.6 containerd://1.4.1\nnode-3 Ready v1.19.6 containerd://1.4.1", "language": "text", "type": "code-example", "context": "Find Out What Container Runtime is Used on a Node | Find out more information about container runtimes on Container Runtimes page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_completion_11293"} {"prompt": "Explain this text code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.19.6 containerd://1.4.1\nnode-2 Ready v1.19.6 containerd://1.4.1\nnode-3 Ready v1.19.6 containerd://1.4.1", "completion": "This text code Find Out What Container Runtime is Used on a Node | Find out more information about container runtimes on Container Runtimes page.", "language": "text", "type": "code-example_explanation", "context": "Find Out What Container Runtime is Used on a Node | Find out more information about container runtimes on Container Runtimes page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_explain_11294"} {"prompt": "Complete this none code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.16.15 docker://19.3.1", "completion": "node-2 Ready v1.16.15 docker://19.3.1\nnode-3 Ready v1.16.15 docker://19.3.1", "language": "none", "type": "code-example", "context": "Find Out What Container Runtime is Used on a Node | If your runtime shows as Docker Engine, you still might not be affected by the removal of dockershim in Kubernetes v1.24. Check the runtime endpoint to see if you use dockershim. 
If you don't use dock", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_completion_11295"} {"prompt": "Explain this none code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.16.15 docker://19.3.1\nnode-2 Ready v1.16.15 docker://19.3.1\nnode-3 Ready v1.16.15 docker://19.3.1", "completion": "This none code Find Out What Container Runtime is Used on a Node | If your runtime shows as Docker Engine, you still might not be affected by the removal of dockershim in Kubernetes v1.24. Check the runtime endpoint to see if you use dockershim. If you don't use dock", "language": "none", "type": "code-example_explanation", "context": "Find Out What Container Runtime is Used on a Node | If your runtime shows as Docker Engine, you still might not be affected by the removal of dockershim in Kubernetes v1.24. Check the runtime endpoint to see if you use dockershim. If you don't use dock", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_explain_11296"} {"prompt": "Complete this none code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.19.6 containerd://1.4.1", "completion": "node-2 Ready v1.19.6 containerd://1.4.1\nnode-3 Ready v1.19.6 containerd://1.4.1", "language": "none", "type": "code-example", "context": "Find Out What Container Runtime is Used on a Node | Find out more information about container runtimes on Container Runtimes page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_completion_11297"} {"prompt": "Explain this none code:\n\nNAME STATUS VERSION CONTAINER-RUNTIME\nnode-1 Ready v1.19.6 containerd://1.4.1\nnode-2 Ready v1.19.6 containerd://1.4.1\nnode-3 Ready v1.19.6 containerd://1.4.1", "completion": "This none code Find Out What Container Runtime is Used on a Node | Find out 
more information about container runtimes on Container Runtimes page.", "language": "none", "type": "code-example_explanation", "context": "Find Out What Container Runtime is Used on a Node | Find out more information about container runtimes on Container Runtimes page.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/", "id": "k8s_00438_code_explain_11298"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10", "completion": " ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "language": "yaml", "type": "service-definition", "context": "Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/cluster-ip-allocation/", "id": "k8s_00439_code_completion_11299"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "completion": "This yaml code Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. 
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "language": "yaml", "type": "service-definition_explanation", "context": "Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/services-networking/cluster-ip-allocation/", "id": "k8s_00439_code_explain_11300"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data\n emptyDir: {}", "completion": " containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "language": "yaml", "type": "pod-definition", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11305"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data\n emptyDir: {}\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "completion": "This yaml code Get a Shell to a Running Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11306"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/shell-demo.yaml", "completion": "This bash code Get a Shell to a Running Container | Verify that the container 
is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Get a Shell to a Running Container | Verify that the container is running:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11307"} {"prompt": "Complete this text code:\n\n# You can run these example commands inside the container\nls /\ncat /proc/mounts\ncat /proc/1/maps\napt-get update\napt-get install -y tcpdump", "completion": "tcpdump\napt-get install -y lsof\nlsof\napt-get install -y procps\nps aux\nps aux | grep nginx", "language": "text", "type": "code-example", "context": "Get a Shell to a Running Container | Look again at the configuration file for your Pod. The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11308"} {"prompt": "Explain this text code:\n\n# You can run these example commands inside the container\nls /\ncat /proc/mounts\ncat /proc/1/maps\napt-get update\napt-get install -y tcpdump\ntcpdump\napt-get install -y lsof\nlsof\napt-get install -y procps\nps aux\nps aux | grep nginx", "completion": "This text code Get a Shell to a Running Container | Look again at the configuration file for your Pod. The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "language": "text", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | Look again at the configuration file for your Pod. 
The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11309"} {"prompt": "Explain this text code:\n\n# Run this inside the container\necho 'Hello shell demo' > /usr/share/nginx/html/index.html", "completion": "This text code Get a Shell to a Running Container | In your shell, send a GET request to the nginx server:", "language": "text", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | In your shell, send a GET request to the nginx server:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11310"} {"prompt": "Complete this text code:\n\n# Run this in the shell inside your container\napt-get update", "completion": "apt-get install curl\ncurl http://localhost/", "language": "text", "type": "code-example", "context": "Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11311"} {"prompt": "Explain this text code:\n\n# Run this in the shell inside your container\napt-get update\napt-get install curl\ncurl http://localhost/", "completion": "This text code Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "language": "text", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11312"} {"prompt": "Explain this bash code:\n\nkubectl exec shell-demo -- ps aux\nkubectl exec shell-demo -- ls /\nkubectl exec shell-demo -- cat 
/proc/1/mounts", "completion": "This bash code Get a Shell to a Running Container | If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named m", "language": "bash", "type": "kubectl-command_explanation", "context": "Get a Shell to a Running Container | If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named m", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11313"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t my-pod --container main-app -- /bin/bash", "completion": "This bash code Get a Shell to a Running Container |", "language": "bash", "type": "kubectl-command_explanation", "context": "Get a Shell to a Running Container |", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11314"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data\n emptyDir: {}", "completion": " containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "language": "yaml", "type": "pod-definition", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11315"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data\n emptyDir: {}\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: 
/usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "completion": "This yaml code Get a Shell to a Running Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11316"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/shell-demo.yaml", "completion": "This shell code Get a Shell to a Running Container | Verify that the container is running:", "language": "shell", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | Verify that the container is running:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11317"} {"prompt": "Complete this shell code:\n\n# You can run these example commands inside the container\nls /\ncat /proc/mounts\ncat /proc/1/maps\napt-get update\napt-get install -y tcpdump", "completion": "tcpdump\napt-get install -y lsof\nlsof\napt-get install -y procps\nps aux\nps aux | grep nginx", "language": "shell", "type": "code-example", "context": "Get a Shell to a Running Container | Look again at the configuration file for your Pod. The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11318"} {"prompt": "Explain this shell code:\n\n# You can run these example commands inside the container\nls /\ncat /proc/mounts\ncat /proc/1/maps\napt-get update\napt-get install -y tcpdump\ntcpdump\napt-get install -y lsof\nlsof\napt-get install -y procps\nps aux\nps aux | grep nginx", "completion": "This shell code Get a Shell to a Running Container | Look again at the configuration file for your Pod. 
The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "language": "shell", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | Look again at the configuration file for your Pod. The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11319"} {"prompt": "Explain this shell code:\n\n# Run this inside the container\necho 'Hello shell demo' > /usr/share/nginx/html/index.html", "completion": "This shell code Get a Shell to a Running Container | In your shell, send a GET request to the nginx server:", "language": "shell", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | In your shell, send a GET request to the nginx server:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11320"} {"prompt": "Complete this shell code:\n\n# Run this in the shell inside your container\napt-get update", "completion": "apt-get install curl\ncurl http://localhost/", "language": "shell", "type": "code-example", "context": "Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11321"} {"prompt": "Explain this shell code:\n\n# Run this in the shell inside your container\napt-get update\napt-get install curl\ncurl http://localhost/", "completion": "This shell code Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "language": "shell", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11322"} {"prompt": "Explain this shell code:\n\nkubectl exec shell-demo -- ps aux\nkubectl exec shell-demo -- ls /\nkubectl exec shell-demo -- cat /proc/1/mounts", "completion": "This shell code Get a Shell to a Running Container | If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named m", "language": "shell", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named m", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11323"} {"prompt": "Explain this shell code:\n\nkubectl exec -i -t my-pod --container main-app -- /bin/bash", "completion": "This shell code Get a Shell to a Running Container |", "language": "shell", "type": "code-example_explanation", "context": "Get a Shell to a Running Container |", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11324"} {"prompt": "Complete this yaml code:\n\napplication/shell-demo.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data", "completion": " emptyDir: {}\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "language": "yaml", "type": "pod-definition", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": 
"k8s_00441_code_completion_11325"} {"prompt": "Explain this yaml code:\n\napplication/shell-demo.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data\n emptyDir: {}\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "completion": "This yaml code Get a Shell to a Running Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11326"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data\n emptyDir: {}", "completion": " containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "language": "yaml", "type": "pod-definition", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11327"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: shell-demo\nspec:\n volumes:\n - name: shared-data\n emptyDir: {}\n containers:\n - name: nginx\n image: nginx\n volumeMounts:\n - name: shared-data\n mountPath: /usr/share/nginx/html\n hostNetwork: true\n dnsPolicy: Default", "completion": "This yaml code Get a Shell to a Running Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Get a Shell to a Running Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11328"} {"prompt": "Explain this bash 
code:\n\nkubectl apply -f https://k8s.io/examples/application/shell-demo.yaml", "completion": "This bash code Get a Shell to a Running Container | Verify that the container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Get a Shell to a Running Container | Verify that the container is running:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11329"} {"prompt": "Complete this text code:\n\n# You can run these example commands inside the container\nls /\ncat /proc/mounts\ncat /proc/1/maps\napt-get update\napt-get install -y tcpdump", "completion": "tcpdump\napt-get install -y lsof\nlsof\napt-get install -y procps\nps aux\nps aux | grep nginx", "language": "text", "type": "code-example", "context": "Get a Shell to a Running Container | Look again at the configuration file for your Pod. The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11330"} {"prompt": "Explain this text code:\n\n# You can run these example commands inside the container\nls /\ncat /proc/mounts\ncat /proc/1/maps\napt-get update\napt-get install -y tcpdump\ntcpdump\napt-get install -y lsof\nlsof\napt-get install -y procps\nps aux\nps aux | grep nginx", "completion": "This text code Get a Shell to a Running Container | Look again at the configuration file for your Pod. The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "language": "text", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | Look again at the configuration file for your Pod. 
The Pod has an emptyDir volume, and the container mounts the volume at /usr/share/nginx/html.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11331"} {"prompt": "Explain this text code:\n\n# Run this inside the container\necho 'Hello shell demo' > /usr/share/nginx/html/index.html", "completion": "This text code Get a Shell to a Running Container | In your shell, send a GET request to the nginx server:", "language": "text", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | In your shell, send a GET request to the nginx server:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11332"} {"prompt": "Complete this text code:\n\n# Run this in the shell inside your container\napt-get update", "completion": "apt-get install curl\ncurl http://localhost/", "language": "text", "type": "code-example", "context": "Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_completion_11333"} {"prompt": "Explain this text code:\n\n# Run this in the shell inside your container\napt-get update\napt-get install curl\ncurl http://localhost/", "completion": "This text code Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "language": "text", "type": "code-example_explanation", "context": "Get a Shell to a Running Container | The output shows the text that you wrote to the index.html file:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11334"} {"prompt": "Explain this bash code:\n\nkubectl exec shell-demo -- ps aux\nkubectl exec shell-demo -- ls /\nkubectl exec shell-demo -- cat 
/proc/1/mounts", "completion": "This bash code Get a Shell to a Running Container | If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named m", "language": "bash", "type": "kubectl-command_explanation", "context": "Get a Shell to a Running Container | If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named m", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11335"} {"prompt": "Explain this bash code:\n\nkubectl exec -i -t my-pod --container main-app -- /bin/bash", "completion": "This bash code Get a Shell to a Running Container |", "language": "bash", "type": "kubectl-command_explanation", "context": "Get a Shell to a Running Container |", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/", "id": "k8s_00441_code_explain_11336"} {"prompt": "Explain this bash code:\n\nkubectl set selector (-f FILENAME | TYPE NAME) EXPRESSIONS [--resource-version=version]", "completion": "This bash code kubectl set selector | Select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set selector | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/", "id": "k8s_00444_code_explain_11337"} {"prompt": "Explain this text code:\n\n# Set the labels and selector before creating a deployment/service pair\n kubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\n 
kubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This text code kubectl set selector | Select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set selector | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/", "id": "k8s_00444_code_explain_11338"} {"prompt": "Explain this bash code:\n\nkubectl set selector (-f FILENAME | TYPE NAME) EXPRESSIONS [--resource-version=version]", "completion": "This bash code kubectl set selector | Select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set selector | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/", "id": "k8s_00444_code_explain_11339"} {"prompt": "Explain this text code:\n\n# Set the labels and selector before creating a deployment/service pair\n kubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\n kubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This text code kubectl set selector | Select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set selector | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_selector/", "id": "k8s_00444_code_explain_11340"} {"prompt": "Complete this yaml 
code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis\n template:", "completion": " metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11341"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11342"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the Redis Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the Redis Pod is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11343"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 13s", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | Run the following command to view the logs from the Redis leader Pod:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to view the logs from the Redis leader Pod:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11344"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader", "completion": " tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11345"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: 
redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11346"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11347"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 16s", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.", "source": 
"https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11348"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis\n template:", "completion": " metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11349"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the 
following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11350"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Verify that the two Redis follower replicas are running by querying the list of Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Verify that the two Redis follower replicas are running by querying the list of Pods:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11351"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s", "completion": "redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 11m", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another Service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11352"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 11m", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | The guestbook application needs to communicate with the Redis followers to read data. 
To make the Redis followers discoverable, you must set up another Service.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another Service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11353"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower", "completion": " tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11354"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": 
"https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11355"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11356"} {"prompt": "Complete this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h", "completion": "redis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 9s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 6m10s", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11357"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h\nredis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 9s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 6m10s", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. 
Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11358"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:", "completion": " app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11359"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:\n app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - 
containerPort: 80", "completion": "From Example: Deploying PHP Guestbook application with Redis, this yaml code is the frontend Deployment that you apply from the frontend-deployment.yaml file: it runs three replicas of the gb-frontend PHP image, each requesting 100m CPU and 100Mi memory and exposing container port 80.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11360"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml", "completion": "From Example: Deploying PHP Guestbook application with Redis, this bash code applies the frontend Deployment manifest with kubectl; afterwards you query the list of Pods to verify that the three frontend replicas are running.", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the three frontend replicas are running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11361"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5tqhb 1/1 Running 0 47s", "completion": "frontend-85595f5bf9-qbzwm 1/1 Running 0 47s\nfrontend-85595f5bf9-zchwc 1/1 Running 0 47s", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | The Redis Services you applied are only accessible within the Kubernetes cluster because the default type for a Service is ClusterIP. 
ClusterIP provides a single IP address for the set of Pods the Serv", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11362"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5tqhb 1/1 Running 0 47s\nfrontend-85595f5bf9-qbzwm 1/1 Running 0 47s\nfrontend-85595f5bf9-zchwc 1/1 Running 0 47s", "completion": "This text code lists the three running frontend Pods from Example: Deploying PHP Guestbook application with Redis. The Redis Services you applied are only accessible within the Kubernetes cluster because the default type for a Service is ClusterIP, which provides a single IP address for the set of Pods behind the Service.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | The Redis Services you applied are only accessible within the Kubernetes cluster because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Serv", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11363"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:", "completion": " # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11364"} 
{"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "completion": "From Example: Deploying PHP Guestbook application with Redis, this yaml code is the frontend Service that you apply from the frontend-service.yaml file: it serves port 80, selects the guestbook frontend Pods, and includes a commented-out type: LoadBalancer line you can uncomment to expose the Service externally.", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11365"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml", "completion": "From Example: Deploying PHP Guestbook application with Redis, this bash code applies the frontend Service manifest with kubectl; afterwards you query the list of Services to verify that the frontend Service is running.", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the frontend Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11366"} {"prompt": "Complete this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend ClusterIP 10.97.28.230 <none> 80/TCP 19s", "completion": "kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h\nredis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 5m48s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 11m", "language": "text", "type": "code-example", 
"context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to forward port 8080 on your local machine to port 80 on the service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11367"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend ClusterIP 10.97.28.230 <none> 80/TCP 19s\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h\nredis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 5m48s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 11m", "completion": "This text code lists the cluster Services, including the new frontend Service, in Example: Deploying PHP Guestbook application with Redis; next you run a command to forward port 8080 on your local machine to port 80 on the service.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to forward port 8080 on your local machine to port 80 on the service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11368"} {"prompt": "Explain this text code:\n\nForwarding from 127.0.0.1:8080 -> 80\nForwarding from [::1]:8080 -> 80", "completion": "This text code is kubectl port-forward output in Example: Deploying PHP Guestbook application with Redis, showing local port 8080 forwarded to port 80; load the page http://localhost:8080 in your browser to view your guestbook.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | load the page http://localhost:8080 in your browser to view your guestbook.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11369"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend LoadBalancer 10.51.242.136 109.197.92.229 80:32372/TCP 1m", "completion": "This text code shows the frontend Service exposed as a LoadBalancer with an external IP in Example: Deploying PHP Guestbook application with Redis; copy the external IP 
address, and load the page in your browser to view your guestbook.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Copy the external IP address, and load the page in your browser to view your guestbook.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11370"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5df5m 1/1 Running 0 83s\nfrontend-85595f5bf9-7zmg5 1/1 Running 0 83s\nfrontend-85595f5bf9-cpskg 1/1 Running 0 15m", "completion": "frontend-85595f5bf9-l2l54 1/1 Running 0 14m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 14m\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 108m", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to scale down the number of frontend Pods:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11371"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5df5m 1/1 Running 0 83s\nfrontend-85595f5bf9-7zmg5 1/1 Running 0 83s\nfrontend-85595f5bf9-cpskg 1/1 Running 0 15m\nfrontend-85595f5bf9-l2l54 1/1 Running 0 14m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 14m\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 108m", "completion": "This text code lists the frontend and Redis Pods after scaling the frontend up to five replicas in Example: Deploying PHP Guestbook application with Redis; next you run a command to scale down the number of frontend Pods.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to scale down the number of frontend Pods:", "source": 
"https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11372"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-cpskg 1/1 Running 0 16m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 15m", "completion": "redis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 98m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 109m", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11373"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-cpskg 1/1 Running 0 16m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 15m\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 98m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 109m", "completion": "This text code lists the Pods remaining after scaling the frontend down to two replicas in Example: Deploying PHP Guestbook application with Redis. Deleting the Deployments and Services also deletes any running Pods; use labels to delete multiple resources with one command.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Deleting the Deployments and Services also deletes any running Pods. 
Use labels to delete multiple resources with one command.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11374"} {"prompt": "Complete this bash code:\n\nkubectl delete deployment -l app=redis\nkubectl delete service -l app=redis", "completion": "kubectl delete deployment frontend\nkubectl delete service frontend", "language": "bash", "type": "kubectl-command", "context": "Example: Deploying PHP Guestbook application with Redis | The response should look similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11375"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment -l app=redis\nkubectl delete service -l app=redis\nkubectl delete deployment frontend\nkubectl delete service frontend", "completion": "This bash code, from Example: Deploying PHP Guestbook application with Redis, deletes the Redis Deployments and Services by the app=redis label and then deletes the frontend Deployment and Service by name; the response should look similar to the deletion output that follows.", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | The response should look similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11376"} {"prompt": "Complete this text code:\n\ndeployment.apps \"redis-follower\" deleted\ndeployment.apps \"redis-leader\" deleted", "completion": "deployment.apps \"frontend\" deleted\nservice \"frontend\" deleted", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that no Pods are running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11377"} {"prompt": "Explain this text code:\n\ndeployment.apps \"redis-follower\" deleted\ndeployment.apps \"redis-leader\" deleted\ndeployment.apps \"frontend\" deleted\nservice \"frontend\" deleted", "completion": 
"This text code shows the deletion confirmations for the guestbook Deployments and Services in Example: Deploying PHP Guestbook application with Redis; next you query the list of Pods to verify that no Pods are running.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that no Pods are running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11378"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis\n template:", "completion": " metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11379"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "From Example: Deploying PHP Guestbook application with Redis, this yaml code is the redis-leader Deployment: a single replica of the redis:6.0.5 image requesting 100m CPU and 100Mi memory on container port 6379; to apply it, launch a terminal window in the 
directory you downloaded the manifest files.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11380"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml", "completion": "This shell code applies the redis-leader Deployment manifest with kubectl in Example: Deploying PHP Guestbook application with Redis; afterwards you query the list of Pods to verify that the Redis Pod is running.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the Redis Pod is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11381"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 13s", "completion": "This text code shows the single redis-leader Pod in the Running state in Example: Deploying PHP Guestbook application with Redis; next you run a command to view the logs from the Redis leader Pod.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to view the logs from the Redis leader Pod:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11382"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader", "completion": " tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "language": "yaml", "type": "service-definition", 
"context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11383"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "completion": "From Example: Deploying PHP Guestbook application with Redis, this yaml code is the redis-leader Service applied from the redis-leader-service.yaml file: it exposes port 6379 and selects the Redis leader Pod through its app, role, and tier labels.", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11384"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml", "completion": "This shell code applies the redis-leader Service manifest with kubectl in Example: Deploying PHP Guestbook application with Redis; afterwards you query the list of Services to verify that the Redis Service is running.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11385"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 16s", "completion": "This text code shows the redis-leader Service running in Example: Deploying PHP Guestbook application with 
Redis. Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11386"} {"prompt": "Explain this text code:\n\napplication/guestbook/redis-follower-deployment.yaml", "completion": "This text code is the path of the redis-follower-deployment.yaml manifest in Example: Deploying PHP Guestbook application with Redis; you apply the Redis follower Deployment from this file.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11387"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis\n template:", "completion": " metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": 
"https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11388"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "From Example: Deploying PHP Guestbook application with Redis, this yaml code is the redis-follower Deployment applied from the redis-follower-deployment.yaml file: it runs two replicas of the gb-redis-follower image, each requesting 100m CPU and 100Mi memory on container port 6379.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11389"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml", "completion": "This shell code applies the redis-follower Deployment manifest with kubectl in Example: Deploying PHP Guestbook application with Redis; afterwards you verify that the two Redis follower replicas are running by querying the list of Pods.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Verify that the two Redis follower replicas are running by querying the list of Pods:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11390"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s", 
"completion": "redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 11m", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another Service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11391"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 11m", "completion": "This text code shows the two Redis follower Pods and the Redis leader Pod running in Example: Deploying PHP Guestbook application with Redis. The guestbook application needs to communicate with the Redis followers to read data; to make the followers discoverable, you must set up another Service.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | The guestbook application needs to communicate with the Redis followers to read data. 
To make the Redis followers discoverable, you must set up another Service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11392"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower", "completion": " tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11393"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "completion": "From Example: Deploying PHP Guestbook application with Redis, this yaml code is the redis-follower Service applied from the redis-follower-service.yaml file: it serves port 6379 and selects the Redis follower Pods through their app, role, and tier labels.", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11394"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml", "completion": "This shell code applies the redis-follower Service manifest with kubectl in Example: Deploying PHP Guestbook application 
with Redis; afterwards you query the list of Services to verify that the Redis Service is running.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11395"} {"prompt": "Complete this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h", "completion": "redis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 9s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 6m10s", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11396"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h\nredis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 9s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 6m10s", "completion": "This text code shows the redis-follower and redis-leader Services running in Example: Deploying PHP Guestbook application with Redis. Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. 
Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11397"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:", "completion": " app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11398"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:\n app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 80", "completion": "From Example: Deploying PHP Guestbook application with Redis, this yaml code is the frontend Deployment applied from the frontend-deployment.yaml file: it runs three replicas of the gb-frontend PHP image with GET_HOSTS_FROM set to dns, each requesting 100m CPU and 100Mi memory and exposing container port 80.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis 
| Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11399"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml", "completion": "This shell code applies the frontend Deployment manifest with kubectl in Example: Deploying PHP Guestbook application with Redis; afterwards you query the list of Pods to verify that the three frontend replicas are running.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the three frontend replicas are running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11400"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5tqhb 1/1 Running 0 47s", "completion": "frontend-85595f5bf9-qbzwm 1/1 Running 0 47s\nfrontend-85595f5bf9-zchwc 1/1 Running 0 47s", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | The Redis Services you applied are only accessible within the Kubernetes cluster because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Serv", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11401"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5tqhb 1/1 Running 0 47s\nfrontend-85595f5bf9-qbzwm 1/1 Running 0 47s\nfrontend-85595f5bf9-zchwc 1/1 Running 0 47s", "completion": "This text code lists the three running frontend Pods in Example: Deploying PHP Guestbook application with Redis. The Redis Services you applied are only accessible within the Kubernetes cluster because the default type for a Service is ClusterIP. 
ClusterIP provides a single IP address for the set of Pods the Serv", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | The Redis Services you applied are only accessible within the Kubernetes cluster because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Serv", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11402"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:", "completion": " # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11403"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply 
the frontend Service from the frontend-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11404"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml", "completion": "This shell code Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the frontend Service is running:", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the frontend Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11405"} {"prompt": "Complete this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend ClusterIP 10.97.28.230 <none> 80/TCP 19s", "completion": "kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h\nredis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 5m48s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 11m", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to forward port 8080 on your local machine to port 80 on the service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11406"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend ClusterIP 10.97.28.230 <none> 80/TCP 19s\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h\nredis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 5m48s\nredis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 11m", "completion": "This text code 
Example: Deploying PHP Guestbook application with Redis | Run the following command to forward port 8080 on your local machine to port 80 on the service.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to forward port 8080 on your local machine to port 80 on the service.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11407"} {"prompt": "Explain this text code:\n\nForwarding from 127.0.0.1:8080 -> 80\nForwarding from [::1]:8080 -> 80", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | load the page http://localhost:8080 in your browser to view your guestbook.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | load the page http://localhost:8080 in your browser to view your guestbook.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11408"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nfrontend LoadBalancer 10.51.242.136 109.197.92.229 80:32372/TCP 1m", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | Copy the external IP address, and load the page in your browser to view your guestbook.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Copy the external IP address, and load the page in your browser to view your guestbook.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11409"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5df5m 1/1 Running 0 83s\nfrontend-85595f5bf9-7zmg5 1/1 Running 0 83s\nfrontend-85595f5bf9-cpskg 1/1 Running 0 15m", "completion": 
"frontend-85595f5bf9-l2l54 1/1 Running 0 14m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 14m\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 108m", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to scale down the number of frontend Pods:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11410"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-5df5m 1/1 Running 0 83s\nfrontend-85595f5bf9-7zmg5 1/1 Running 0 83s\nfrontend-85595f5bf9-cpskg 1/1 Running 0 15m\nfrontend-85595f5bf9-l2l54 1/1 Running 0 14m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 14m\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 108m", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | Run the following command to scale down the number of frontend Pods:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Run the following command to scale down the number of frontend Pods:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11411"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-cpskg 1/1 Running 0 16m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 15m", "completion": "redis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 98m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 109m", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Deleting the Deployments and Services also deletes any running Pods. 
Use labels to delete multiple resources with one command.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11412"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-85595f5bf9-cpskg 1/1 Running 0 16m\nfrontend-85595f5bf9-l9c8z 1/1 Running 0 15m\nredis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m\nredis-follower-dddfbdcc9-qrt5k 1/1 Running 0 98m\nredis-leader-fb76b4755-xjr2n 1/1 Running 0 109m", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11413"} {"prompt": "Complete this shell code:\n\nkubectl delete deployment -l app=redis\nkubectl delete service -l app=redis", "completion": "kubectl delete deployment frontend\nkubectl delete service frontend", "language": "shell", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | The response should look similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11414"} {"prompt": "Explain this shell code:\n\nkubectl delete deployment -l app=redis\nkubectl delete service -l app=redis\nkubectl delete deployment frontend\nkubectl delete service frontend", "completion": "This shell code Example: Deploying PHP Guestbook application with Redis | The response should look similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook 
application with Redis | The response should look similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11415"} {"prompt": "Complete this text code:\n\ndeployment.apps \"redis-follower\" deleted\ndeployment.apps \"redis-leader\" deleted", "completion": "deployment.apps \"frontend\" deleted\nservice \"frontend\" deleted", "language": "text", "type": "code-example", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that no Pods are running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11416"} {"prompt": "Explain this text code:\n\ndeployment.apps \"redis-follower\" deleted\ndeployment.apps \"redis-leader\" deleted\ndeployment.apps \"frontend\" deleted\nservice \"frontend\" deleted", "completion": "This text code Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that no Pods are running:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that no Pods are running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11417"} {"prompt": "Complete this yaml code:\n\napplication/guestbook/redis-leader-deployment.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis", "completion": " template:\n metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": 
"deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11418"} {"prompt": "Explain this yaml code:\n\napplication/guestbook/redis-leader-deployment.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11419"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis\n template:", "completion": " metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": 
"deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11420"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: leader\n tier: backend\n spec:\n containers:\n - name: leader\n image: \"docker.io/redis:6.0.5\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Launch a terminal window in the directory you downloaded the manifest files.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11421"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the Redis Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the Redis Pod is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11422"} {"prompt": "Complete this yaml 
code:\n\napplication/guestbook/redis-leader-service.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader", "completion": " tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11423"} {"prompt": "Explain this yaml code:\n\napplication/guestbook/redis-leader-service.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11424"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader", "completion": " tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "language": "yaml", "type": "service-definition", 
"context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11425"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-leader\n labels:\n app: redis\n role: leader\n tier: backend\nspec:\n ports:\n - port: 6379\n targetPort: 6379\n selector:\n app: redis\n role: leader\n tier: backend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-leader-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11426"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11427"} {"prompt": "Complete this yaml code:\n\napplication/guestbook/redis-follower-deployment.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: 
follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis", "completion": " template:\n metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11428"} {"prompt": "Explain this yaml code:\n\napplication/guestbook/redis-follower-deployment.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11429"} {"prompt": "Complete this yaml code:\n\n# SOURCE: 
https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis\n template:", "completion": " metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11430"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: redis\n template:\n metadata:\n labels:\n app: redis\n role: follower\n tier: backend\n spec:\n containers:\n - name: follower\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 6379", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Deployment from the following redis-follower-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": 
"k8s_00446_code_explain_11431"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Verify that the two Redis follower replicas are running by querying the list of Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Verify that the two Redis follower replicas are running by querying the list of Pods:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11432"} {"prompt": "Complete this yaml code:\n\napplication/guestbook/redis-follower-service.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower", "completion": " tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11433"} {"prompt": "Explain this yaml code:\n\napplication/guestbook/redis-follower-service.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the 
following redis-follower-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11434"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower", "completion": " tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11435"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: redis-follower\n labels:\n app: redis\n role: follower\n tier: backend\nspec:\n ports:\n # the port that this service should serve on\n - port: 6379\n selector:\n app: redis\n role: follower\n tier: backend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the Redis Service from the following redis-follower-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11436"} {"prompt": "Explain this bash 
code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the Redis Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11437"} {"prompt": "Complete this yaml code:\n\napplication/guestbook/frontend-deployment.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:", "completion": " app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11438"} {"prompt": "Explain this yaml code:\n\napplication/guestbook/frontend-deployment.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:\n app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: 
us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 80", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11439"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:", "completion": " app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11440"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: frontend\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: guestbook\n tier: frontend\n template:\n metadata:\n labels:\n app: guestbook\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: 
us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n env:\n - name: GET_HOSTS_FROM\n value: \"dns\"\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n ports:\n - containerPort: 80", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Deployment from the frontend-deployment.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11441"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the three frontend replicas are running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Pods to verify that the three frontend replicas are running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11442"} {"prompt": "Complete this yaml code:\n\napplication/guestbook/frontend-service.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:", "completion": " # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP 
Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11443"} {"prompt": "Explain this yaml code:\n\napplication/guestbook/frontend-service.yaml\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11444"} {"prompt": "Complete this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:", "completion": " # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "language": "yaml", "type": "service-definition", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml 
file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11445"} {"prompt": "Explain this yaml code:\n\n# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook\napiVersion: v1\nkind: Service\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # if your cluster supports it, uncomment the following to automatically create\n # an external load-balanced IP for the frontend service.\n # type: LoadBalancer\n #type: LoadBalancer\n ports:\n # the port that this service should serve on\n - port: 80\n selector:\n app: guestbook\n tier: frontend", "completion": "This yaml code Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Apply the frontend Service from the frontend-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11446"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the frontend Service is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | Query the list of Services to verify that the frontend Service is running:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11447"} {"prompt": "Complete this bash code:\n\nkubectl delete deployment -l app=redis\nkubectl delete service -l app=redis", "completion": "kubectl delete deployment frontend\nkubectl delete service frontend", "language": "bash", "type": 
"kubectl-command", "context": "Example: Deploying PHP Guestbook application with Redis | The response should look similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_completion_11448"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment -l app=redis\nkubectl delete service -l app=redis\nkubectl delete deployment frontend\nkubectl delete service frontend", "completion": "This bash code Example: Deploying PHP Guestbook application with Redis | The response should look similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying PHP Guestbook application with Redis | The response should look similar to this:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/guestbook/", "id": "k8s_00446_code_explain_11449"} {"prompt": "Explain this text code:\n\ncheckpoint-<podFullName>-<containerName>-<timestamp>.tar", "completion": "This text code Kubelet Checkpoint API | The checkpoint archive is in tar format, and could be listed using an implementation of tar. The contents of the archive depend on the underlying CRI implementation (the container runtime on that node", "language": "text", "type": "code-example_explanation", "context": "Kubelet Checkpoint API | The checkpoint archive is in tar format, and could be listed using an implementation of tar. 
The contents of the archive depend on the underlying CRI implementation (the container runtime on that node", "source": "https://kubernetes.io/docs/reference/node/kubelet-checkpoint-api/", "id": "k8s_00447_code_explain_11450"} {"prompt": "Complete this text code:\n\n# List all available plugins\n kubectl plugin list", "completion": " \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "language": "text", "type": "code-example", "context": "kubectl plugin list | help for list", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/", "id": "k8s_00448_code_completion_11451"} {"prompt": "Explain this text code:\n\n# List all available plugins\n kubectl plugin list\n \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "completion": "This text code kubectl plugin list | help for list", "language": "text", "type": "code-example_explanation", "context": "kubectl plugin list | help for list", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/", "id": "k8s_00448_code_explain_11452"} {"prompt": "Complete this text code:\n\n# List all available plugins\n kubectl plugin list", "completion": " \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "language": "text", "type": "code-example", "context": "kubectl plugin list | help for list", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/", "id": "k8s_00448_code_completion_11453"} {"prompt": "Explain this text code:\n\n# List all available plugins\n kubectl plugin list\n \n # List only binary names of available plugins without paths\n kubectl plugin list --name-only", "completion": "This text code kubectl plugin list | help for list", "language": "text", "type": "code-example_explanation", "context": "kubectl plugin list | help for list", 
"source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_plugin/kubectl_plugin_list/", "id": "k8s_00448_code_explain_11454"} {"prompt": "Complete this text code:\n\n--kube-reserved=cpu=4,memory=4Gi\n--system-reserved=cpu=1,memory=1Gi", "completion": "--memory-manager-policy=Static\n--reserved-memory '0:memory=3Gi;1:memory=2148Mi'", "language": "text", "type": "code-example", "context": "Utilizing the NUMA-aware Memory Manager | Prior to Kubernetes 1.32, you also need to add", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_completion_11455"} {"prompt": "Explain this text code:\n\n--kube-reserved=cpu=4,memory=4Gi\n--system-reserved=cpu=1,memory=1Gi\n--memory-manager-policy=Static\n--reserved-memory '0:memory=3Gi;1:memory=2148Mi'", "completion": "This text code Utilizing the NUMA-aware Memory Manager | Prior to Kubernetes 1.32, you also need to add", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Prior to Kubernetes 1.32, you also need to add", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11456"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "language": "text", "type": "code-example", "context": "Utilizing the NUMA-aware Memory Manager | Also, a pod sharing CPU(s) runs in the Guaranteed QoS class, when requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_completion_11457"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: 
\"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "completion": "This text code Utilizing the NUMA-aware Memory Manager | Also, a pod sharing CPU(s) runs in the Guaranteed QoS class, when requests are equal to limits.", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Also, a pod sharing CPU(s) runs in the Guaranteed QoS class, when requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11458"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "language": "text", "type": "code-example", "context": "Utilizing the NUMA-aware Memory Manager | Notice that both CPU and memory requests must be specified for a Pod to lend it to Guaranteed QoS class.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_completion_11459"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "completion": "This text code Utilizing the NUMA-aware Memory Manager | Notice that both CPU and memory requests must be specified for a Pod to lend it to Guaranteed QoS class.", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Notice that both CPU and memory requests must be specified for a Pod to lend it to Guaranteed QoS class.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11460"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS 
AGE\nguaranteed 0/1 TopologyAffinityError 0 113s", "completion": "This text code Utilizing the NUMA-aware Memory Manager | Use kubectl describe pod <id> or kubectl get events to obtain detailed error message:", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Use kubectl describe pod <id> or kubectl get events to obtain detailed error message:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11461"} {"prompt": "Explain this text code:\n\nWarning TopologyAffinityError 10m kubelet, dell8 Resources cannot be allocated with Topology locality", "completion": "This text code Utilizing the NUMA-aware Memory Manager | Search system logs with respect to a particular pod.", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Search system logs with respect to a particular pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11462"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: guaranteed\nspec:\n containers:\n - name: guaranteed\n image: consumer", "completion": " imagePullPolicy: Never\n resources:\n limits:\n cpu: \"2\"\n memory: 150Gi\n requests:\n cpu: \"2\"\n memory: 150Gi\n command: [\"sleep\",\"infinity\"]", "language": "yaml", "type": "pod-definition", "context": "Utilizing the NUMA-aware Memory Manager | Next, let us log into the node where it was deployed and examine the state file in /var/lib/kubelet/memory_manager_state:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_completion_11463"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: guaranteed\nspec:\n containers:\n - name: guaranteed\n image: consumer\n imagePullPolicy: Never\n resources:\n limits:\n cpu: \"2\"\n memory: 150Gi\n 
requests:\n cpu: \"2\"\n memory: 150Gi\n command: [\"sleep\",\"infinity\"]", "completion": "This yaml code Utilizing the NUMA-aware Memory Manager | Next, let us log into the node where it was deployed and examine the state file in /var/lib/kubelet/memory_manager_state:", "language": "yaml", "type": "pod-definition_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Next, let us log into the node where it was deployed and examine the state file in /var/lib/kubelet/memory_manager_state:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11464"} {"prompt": "Complete this json code:\n\n{\n \"policyName\":\"Static\",\n \"machineState\":{\n \"0\":{\n \"numberOfAssignments\":1,\n \"memoryMap\":{\n \"hugepages-1Gi\":{\n \"total\":0,\n \"systemReserved\":0,\n \"allocatable\":0,\n \"reserved\":0,\n \"free\":0\n },\n \"memory\":{\n \"total\":134987354112,\n \"systemReserved\":3221225472,\n \"allocatable\":131766128640,\n \"reserved\":131766128640,\n \"free\":0\n }\n },\n \"nodes\":[\n 0,\n 1\n ]\n },\n \"1\":{\n \"numberOfAssignments\":1,\n \"memoryMap\":{\n \"hugepages-1Gi\":{\n \"total\":0,\n \"systemReserved\":0,\n \"allocatable\":0,", "completion": " \"reserved\":0,\n \"free\":0\n },\n \"memory\":{\n \"total\":135286722560,\n \"systemReserved\":2252341248,\n \"allocatable\":133034381312,\n \"reserved\":29295144960,\n \"free\":103739236352\n }\n },\n \"nodes\":[\n 0,\n 1\n ]\n }\n },\n \"entries\":{\n \"fa9bdd38-6df9-4cf9-aa67-8c4814da37a8\":{\n \"guaranteed\":[\n {\n \"numaAffinity\":[\n 0,\n 1\n ],\n \"type\":\"memory\",\n \"size\":161061273600\n }\n ]\n }\n },\n \"checksum\":4142013182\n}", "language": "json", "type": "api-response", "context": "Utilizing the NUMA-aware Memory Manager | It can be deduced from the state file that the pod was pinned to both NUMA nodes, i.e.:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": 
"k8s_00452_code_completion_11465"} {"prompt": "Explain this json code:\n\n{\n \"policyName\":\"Static\",\n \"machineState\":{\n \"0\":{\n \"numberOfAssignments\":1,\n \"memoryMap\":{\n \"hugepages-1Gi\":{\n \"total\":0,\n \"systemReserved\":0,\n \"allocatable\":0,\n \"reserved\":0,\n \"free\":0\n },\n \"memory\":{\n \"total\":134987354112,\n \"systemReserved\":3221225472,\n \"allocatable\":131766128640,\n \"reserved\":131766128640,\n \"free\":0\n }\n },\n \"nodes\":[\n 0,\n 1\n ]\n },\n \"1\":{\n \"numberOfAssignments\":1,\n \"memoryMap\":{\n \"hugepages-1Gi\":{\n \"total\":0,\n \"systemReserved\":0,\n \"allocatable\":0,\n \"reserved\":0,\n \"free\":0\n },\n \"memory\":{\n \"total\":135286722560,\n \"systemReserved\":2252341248,\n \"allocatable\":133034381312,\n \"reserved\":29295144960,\n \"free\":103739236352\n }\n },\n \"nodes\":[\n 0,\n 1\n ]\n }\n },\n \"entries\":{\n \"fa9bdd38-6df9-4cf9-aa67-8c4814da37a8\":{\n \"guaranteed\":[\n {\n \"numaAffinity\":[\n 0,\n 1\n ],\n \"type\":\"memory\",\n \"size\":161061273600\n }\n ]\n }\n },\n \"checksum\":4142013182\n}", "completion": "This json code Utilizing the NUMA-aware Memory Manager | It can be deduced from the state file that the pod was pinned to both NUMA nodes, i.e.:", "language": "json", "type": "api-response_explanation", "context": "Utilizing the NUMA-aware Memory Manager | It can be deduced from the state file that the pod was pinned to both NUMA nodes, i.e.:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11466"} {"prompt": "Explain this text code:\n\n--reserved-memory N:memory-type1=value1,memory-type2=value2,...", "completion": "This text code Utilizing the NUMA-aware Memory Manager | Example usage:", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Example usage:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": 
"k8s_00452_code_explain_11467"} {"prompt": "Explain this text code:\n\n--reserved-memory 0:memory=1Gi --reserved-memory 1:memory=2Gi", "completion": "This text code Utilizing the NUMA-aware Memory Manager | or", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | or", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11468"} {"prompt": "Explain this text code:\n\nsum(reserved-memory(i)) = kube-reserved + system-reserved + eviction-threshold", "completion": "This text code Utilizing the NUMA-aware Memory Manager | where i is an index of a NUMA node.", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | where i is an index of a NUMA node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11469"} {"prompt": "Explain this text code:\n\nsum(reserved-memory(i)) = reserved-memory(0) + reserved-memory(1) = 1Gi + 2Gi = 3Gi", "completion": "This text code Utilizing the NUMA-aware Memory Manager | An example of kubelet command-line arguments relevant to the node Allocatable configuration:", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | An example of kubelet command-line arguments relevant to the node Allocatable configuration:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11470"} {"prompt": "Complete this shell code:\n\n--kube-reserved=cpu=4,memory=4Gi\n--system-reserved=cpu=1,memory=1Gi", "completion": "--memory-manager-policy=Static\n--reserved-memory '0:memory=3Gi;1:memory=2148Mi'", "language": "shell", "type": "code-example", "context": "Utilizing the NUMA-aware Memory Manager | Prior to Kubernetes 1.32, you also need to add", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", 
"id": "k8s_00452_code_completion_11471"} {"prompt": "Explain this shell code:\n\n--kube-reserved=cpu=4,memory=4Gi\n--system-reserved=cpu=1,memory=1Gi\n--memory-manager-policy=Static\n--reserved-memory '0:memory=3Gi;1:memory=2148Mi'", "completion": "This shell code Utilizing the NUMA-aware Memory Manager | Prior to Kubernetes 1.32, you also need to add", "language": "shell", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Prior to Kubernetes 1.32, you also need to add", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11472"} {"prompt": "Explain this text code:\n\nkube-reserved + system-reserved + eviction-hard(default) = reserved-memory(0) + reserved-memory(1)", "completion": "This text code Utilizing the NUMA-aware Memory Manager | If the selected policy is anything other than None, the Memory Manager identifies pods that are in the Guaranteed QoS class. The Memory Manager provides specific topology hints to the Topology Manager", "language": "text", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | If the selected policy is anything other than None, the Memory Manager identifies pods that are in the Guaranteed QoS class. 
The Memory Manager provides specific topology hints to the Topology Manager", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11473"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Utilizing the NUMA-aware Memory Manager | Also, a pod sharing CPU(s) runs in the Guaranteed QoS class, when requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_completion_11474"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "completion": "This yaml code Utilizing the NUMA-aware Memory Manager | Also, a pod sharing CPU(s) runs in the Guaranteed QoS class, when requests are equal to limits.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Also, a pod sharing CPU(s) runs in the Guaranteed QoS class, when requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11475"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Utilizing the NUMA-aware Memory Manager | Notice that both CPU and memory requests must be specified for a Pod to lend it 
to Guaranteed QoS class.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_completion_11476"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "completion": "This yaml code Utilizing the NUMA-aware Memory Manager | Notice that both CPU and memory requests must be specified for a Pod to lend it to Guaranteed QoS class.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Notice that both CPU and memory requests must be specified for a Pod to lend it to Guaranteed QoS class.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11477"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nguaranteed 0/1 TopologyAffinityError 0 113s", "completion": "This none code Utilizing the NUMA-aware Memory Manager | Use kubectl describe pod <id> or kubectl get events to obtain detailed error message:", "language": "none", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Use kubectl describe pod <id> or kubectl get events to obtain detailed error message:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11478"} {"prompt": "Explain this none code:\n\nWarning TopologyAffinityError 10m kubelet, dell8 Resources cannot be allocated with Topology locality", "completion": "This none code Utilizing the NUMA-aware Memory Manager | Search system logs with respect to a particular pod.", "language": "none", "type": "code-example_explanation", "context": "Utilizing the NUMA-aware Memory Manager | Search system logs with respect to a particular pod.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11479"} {"prompt": "Explain this json code:\n\n{\n \"policyName\":\"Static\",\n \"machineState\":{\n \"0\":{\n \"numberOfAssignments\":1,\n \"memoryMap\":{\n \"hugepages-1Gi\":{\n \"total\":0,\n \"systemReserved\":0,\n \"allocatable\":0,\n \"reserved\":0,\n \"free\":0\n },\n \"memory\":{\n \"total\":134987354112,\n \"systemReserved\":3221225472,\n \"allocatable\":131766128640,\n \"reserved\":131766128640,\n \"free\":0\n }\n },\n \"nodes\":[\n 0,\n 1\n ]\n },\n \"1\":{\n \"numberOfAssignments\":1,\n \"memoryMap\":{\n \"hugepages-1Gi\":{\n \"total\":0,\n \"systemReserved\":0,\n \"allocatable\":0,\n \"reserved\":0,\n \"free\":0\n },\n \"memory\":{\n \"total\":135286722560,\n \"systemReserved\":2252341248,\n
\"allocatable\":133034381312,\n \"reserved\":29295144960,\n \"free\":103739236352\n }\n },\n \"nodes\":[\n 0,\n 1\n ]\n }\n },\n \"entries\":{\n \"fa9bdd38-6df9-4cf9-aa67-8c4814da37a8\":{\n \"guaranteed\":[\n {\n \"numaAffinity\":[\n 0,\n 1\n ],\n \"type\":\"memory\",\n \"size\":161061273600\n }\n ]\n }\n },\n \"checksum\":4142013182\n}", "completion": "This json code Utilizing the NUMA-aware Memory Manager | It can be deduced from the state file that the pod was pinned to both NUMA nodes, i.e.:", "language": "json", "type": "api-response_explanation", "context": "Utilizing the NUMA-aware Memory Manager | It can be deduced from the state file that the pod was pinned to both NUMA nodes, i.e.:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/", "id": "k8s_00452_code_explain_11493"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # A schema is required\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - name: v1\n served: true", "completion": " storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # 
either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11494"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # A schema is required\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of 
custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11495"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n - name: v1\n served: true\n storage: false\n validation:\n openAPIV3Schema:\n type: object", "completion": " properties:\n host:\n 
type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the PascalCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11496"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n - name: v1\n served: true\n storage: false\n validation:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a 
default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the PascalCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11497"} {"prompt": "Complete this text code:\n\n- v10\n- v2\n- v1\n- v11beta2\n- v10beta3", "completion": "- v3beta1\n- v12alpha1\n- v11alpha2\n- foo1\n- foo10", "language": "text", "type": "code-example", "context": "Versions in CustomResourceDefinitions | For the example in Specify multiple versions, the version sort order is v1, followed by v1beta1. 
This causes the kubectl command to use v1 as the default version unless the provided object specifies t", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11499"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.", "completion": " deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n \n schema: ...\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of 
the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n schema: ...\n - name: v1\n served: true\n storage: true\n schema: ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11500"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n \n schema: ...\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n schema: ...\n - name: v1\n served: true\n storage: true\n schema: ...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: 
apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11501"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n validation: ...\n versions:\n - name: v1alpha1", "completion": " served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n - name: v1\n served: true\n storage: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to 
the newer API version for all clusters that served the older version of", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11502"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n validation: ...\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n - name: v1\n served: true\n storage: true", "completion": "This yaml code Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11503"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced", "completion": " versions:\n - name: v1beta1\n # This indicates the v1beta1 version of the custom resource is no longer served.\n # API requests to this version receive a not found error in the server response.\n served: false\n schema: ...\n - name: v1\n served: true\n # The new served version should be set as the storage version\n storage: true\n schema: ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. 
The API server also supports webhook conversions", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11504"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1beta1\n # This indicates the v1beta1 version of the custom resource is no longer served.\n # API requests to this version receive a not found error in the server response.\n served: false\n schema: ...\n - name: v1\n served: true\n # The new served version should be set as the storage version\n storage: true\n schema: ...", "completion": "This yaml code Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. The API server also supports webhook conversions", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. 
The API server also supports webhook conversions", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11505"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:", "completion": " host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhook is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhook:\n # conversionReviewVersions indicates what ConversionReview versions are understood/preferred by the webhook.\n # The first version in the list understood by the API server is sent to the webhook.\n # The webhook must respond with a ConversionReview object in the same version it received.\n conversionReviewVersions: [\"v1\",\"v1beta1\"]\n clientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to 
be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11506"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhook is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhook:\n 
# conversionReviewVersions indicates what ConversionReview versions are understood/preferred by the webhook.\n # The first version in the list understood by the API server is sent to the webhook.\n # The webhook must respond with a ConversionReview object in the same version it received.\n conversionReviewVersions: [\"v1\",\"v1beta1\"]\n clientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11507"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # 
group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # prunes object fields that are not specified in OpenAPI schemas below.\n preserveUnknownFields: false\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false", "completion": " schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhookClientConfig is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhookClientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11508"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # prunes object fields that are not specified in OpenAPI schemas below.\n preserveUnknownFields: false\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhookClientConfig is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhookClientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: 
\"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11509"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-versioned-crontab-with-conversion.yaml", "completion": "This bash code Versions in CustomResourceDefinitions | Make sure the conversion service is up and running before applying new changes.", "language": "bash", "type": "kubectl-command_explanation", "context": "Versions in CustomResourceDefinitions | Make sure the conversion service is up and running before applying new changes.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11510"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...", "completion": " conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions 
| # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11511"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11512"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:", "completion": " ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. 
The serv", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11513"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. The serv", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. The serv", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11514"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:", "completion": " clientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... 
conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11515"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... 
conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11516"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook", "completion": " webhookClientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API group serialized to JSON as the body.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11517"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API group serialized to JSON as the body.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API 
group serialized to JSON as the body.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11518"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...", "completion": " conversion:\n strategy: Webhook\n webhook:\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. Webhooks are required to support at least one ConversionReview version understood by the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11519"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "completion": "This yaml code Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. Webhooks are required to support at least one ConversionReview version understood by the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. 
Webhooks are required to support at least one ConversionReview version understood by the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11520"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:", "completion": " ...\n conversion:\n strategy: Webhook\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11521"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "completion": "This yaml code Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11522"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n 
\"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11523"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n 
\"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11524"} {"prompt": "Complete this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n 
\"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11525"} {"prompt": "Explain this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n 
\"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11526"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Success\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",", "completion": " \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": 
\"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11527"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Success\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": 
\"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11528"} {"prompt": "Complete this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Failed\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n 
\"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11529"} {"prompt": "Explain this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Failed\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n 
}\n}", "completion": "This json code Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11530"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11531"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "language": "json", "type": "api-response_explanation", "context": "Versions in 
CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11532"} {"prompt": "Complete this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. H", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11533"} {"prompt": "Explain this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. 
H", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. H", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11534"} {"prompt": "Explain this bash code:\n\nkubectl patch customresourcedefinitions <CRD_Name> --subresource='status' --type='merge' -p '{\"status\":{\"storedVersions\":[\"v1\"]}}'", "completion": "This bash code Versions in CustomResourceDefinitions |", "language": "bash", "type": "kubectl-command_explanation", "context": "Versions in CustomResourceDefinitions |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11535"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # A schema is required\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - name: v1\n served: true", "completion": " storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n 
conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11536"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # A schema is required\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n 
# None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11537"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the 
storage version.\n storage: true\n - name: v1\n served: true\n storage: false\n validation:\n openAPIV3Schema:\n type: object", "completion": " properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the PascalCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11538"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n - name: v1\n served: true\n storage: false\n validation:\n 
openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the PascalCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11539"} {"prompt": "Complete this none code:\n\n- v10\n- v2\n- v1\n- v11beta2\n- v10beta3", "completion": "- v3beta1\n- v12alpha1\n- v11alpha2\n- foo1\n- foo10", "language": "none", "type": "code-example", "context": "Versions in CustomResourceDefinitions | For the example in Specify multiple versions, the version sort order is v1, followed by v1beta1. 
This causes the kubectl command to use v1 as the default version unless the provided object specifies t", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11540"} {"prompt": "Explain this none code:\n\n- v10\n- v2\n- v1\n- v11beta2\n- v10beta3\n- v3beta1\n- v12alpha1\n- v11alpha2\n- foo1\n- foo10", "completion": "This none code Versions in CustomResourceDefinitions | For the example in Specify multiple versions, the version sort order is v1, followed by v1beta1. This causes the kubectl command to use v1 as the default version unless the provided object specifies t", "language": "none", "type": "code-example_explanation", "context": "Versions in CustomResourceDefinitions | For the example in Specify multiple versions, the version sort order is v1, followed by v1beta1. This causes the kubectl command to use v1 as the default version unless the provided object specifies t", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11541"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.", "completion": " deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n \n schema: ...\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of 
the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n schema: ...\n - name: v1\n served: true\n storage: true\n schema: ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11542"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n \n schema: ...\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n schema: ...\n - name: v1\n served: true\n storage: true\n schema: ...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: 
apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11543"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n validation: ...\n versions:\n - name: v1alpha1", "completion": " served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n - name: v1\n served: true\n storage: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to 
the newer API version for all clusters that served the older version of", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11544"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n validation: ...\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n - name: v1\n served: true\n storage: true", "completion": "This yaml code Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11545"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced", "completion": " versions:\n - name: v1beta1\n # This indicates the v1beta1 version of the custom resource is no longer served.\n # API requests to this version receive a not found error in the server response.\n served: false\n schema: ...\n - name: v1\n served: true\n # The new served version should be set as the storage version\n storage: true\n schema: ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. 
The API server also supports webhook conversions", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11546"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1beta1\n # This indicates the v1beta1 version of the custom resource is no longer served.\n # API requests to this version receive a not found error in the server response.\n served: false\n schema: ...\n - name: v1\n served: true\n # The new served version should be set as the storage version\n storage: true\n schema: ...", "completion": "This yaml code Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. The API server also supports webhook conversions", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. 
The API server also supports webhook conversions", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11547"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:", "completion": " host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhook is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhook:\n # conversionReviewVersions indicates what ConversionReview versions are understood/preferred by the webhook.\n # The first version in the list understood by the API server is sent to the webhook.\n # The webhook must respond with a ConversionReview object in the same version it received.\n conversionReviewVersions: [\"v1\",\"v1beta1\"]\n clientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to 
be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11548"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhook is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhook:\n 
# conversionReviewVersions indicates what ConversionReview versions are understood/preferred by the webhook.\n # The first version in the list understood by the API server is sent to the webhook.\n # The webhook must respond with a ConversionReview object in the same version it received.\n conversionReviewVersions: [\"v1\",\"v1beta1\"]\n clientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11549"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # 
group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # prunes object fields that are not specified in OpenAPI schemas below.\n preserveUnknownFields: false\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false", "completion": " schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhookClientConfig is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhookClientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11550"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # prunes object fields that are not specified in OpenAPI schemas below.\n preserveUnknownFields: false\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhookClientConfig is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhookClientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: 
\"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11551"} {"prompt": "Explain this shell code:\n\nkubectl apply -f my-versioned-crontab-with-conversion.yaml", "completion": "This shell code Versions in CustomResourceDefinitions | Make sure the conversion service is up and running before applying new changes.", "language": "shell", "type": "code-example_explanation", "context": "Versions in CustomResourceDefinitions | Make sure the conversion service is up and running before applying new changes.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11552"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...", "completion": " conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions 
| # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11553"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11554"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:", "completion": " ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. 
The serv", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11555"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. The serv", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. The serv", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11556"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:", "completion": " clientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... 
conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11557"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... 
conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11558"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook", "completion": " webhookClientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API group serialized to JSON as the body.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11559"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API group serialized to JSON as the body.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API 
group serialized to JSON as the body.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11560"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...", "completion": " conversion:\n strategy: Webhook\n webhook:\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. Webhooks are required to support at least one ConversionReview version understood by the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11561"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "completion": "This yaml code Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. Webhooks are required to support at least one ConversionReview version understood by the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. 
Webhooks are required to support at least one ConversionReview version understood by the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11562"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:", "completion": " ...\n conversion:\n strategy: Webhook\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11563"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "completion": "This yaml code Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11564"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n 
\"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11565"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n 
\"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "completion": "This yaml code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11566"} {"prompt": "Complete this yaml code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": 
{\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11567"} {"prompt": "Explain this yaml code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n 
\"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "completion": "This yaml code Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11568"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Success\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",", "completion": " \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": 
\"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11569"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Success\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": 
\"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "completion": "This yaml code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11570"} {"prompt": "Complete this yaml code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Failed\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n 
\"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11571"} {"prompt": "Explain this yaml code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Failed\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": 
\"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "completion": "This yaml code Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11572"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11573"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "completion": "This yaml code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "language": "yaml", 
"type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11574"} {"prompt": "Complete this yaml code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. H", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11575"} {"prompt": "Explain this yaml code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "completion": "This yaml code Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. 
H", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. H", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11576"} {"prompt": "Explain this bash code:\n\nkubectl patch customresourcedefinitions <CRD_Name> --subresource='status' --type='merge' -p '{\"status\":{\"storedVersions\":[\"v1\"]}}'", "completion": "This bash code Versions in CustomResourceDefinitions |", "language": "bash", "type": "kubectl-command_explanation", "context": "Versions in CustomResourceDefinitions |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11577"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # A schema is required\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - name: v1\n served: true", "completion": " storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to 
None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11578"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # A schema is required\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default 
value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11579"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be 
marked as the storage version.\n storage: true\n - name: v1\n served: true\n storage: false\n validation:\n openAPIV3Schema:\n type: object", "completion": " properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the PascalCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11580"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n - name: v1\n served: true\n storage: false\n 
validation:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n # The conversion section is introduced in Kubernetes 1.13+ with a default value of\n # None conversion (strategy sub-field set to None).\n conversion:\n # None conversion assumes the same schema for all versions and only sets the apiVersion\n # field of custom resources to the proper value\n strategy: None\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the PascalCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to create it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11581"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.", "completion": " deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n 
deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n \n schema: ...\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n schema: ...\n - name: v1\n served: true\n storage: true\n schema: ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11582"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n \n schema: ...\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n 
deprecated: true\n schema: ...\n - name: v1\n served: true\n storage: true\n schema: ...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: crontabs.example.com spec: group: example.com na", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11583"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n validation: ...\n versions:\n - name: v1alpha1", "completion": " served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n - name: v1\n served: true\n storage: true", 
"language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11584"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n validation: ...\n versions:\n - name: v1alpha1\n served: true\n storage: false\n # This indicates the v1alpha1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n deprecated: true\n # This overrides the default warning returned to API clients making v1alpha1 API requests.\n deprecationWarning: \"example.com/v1alpha1 CronTab is deprecated; see http://example.com/v1alpha1-v1 for instructions to migrate to example.com/v1 CronTab\"\n - name: v1beta1\n served: true\n # This indicates the v1beta1 version of the custom resource is deprecated.\n # API requests to this version receive a warning header in the server response.\n # A default warning message is returned for this version.\n deprecated: true\n - name: v1\n served: true\n storage: true", "completion": "This yaml code Versions in CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in 
CustomResourceDefinitions | An older API version cannot be dropped from a CustomResourceDefinition manifest until existing stored data has been migrated to the newer API version for all clusters that served the older version of", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11585"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced", "completion": " versions:\n - name: v1beta1\n # This indicates the v1beta1 version of the custom resource is no longer served.\n # API requests to this version receive a not found error in the server response.\n served: false\n schema: ...\n - name: v1\n served: true\n # The new served version should be set as the storage version\n storage: true\n schema: ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. 
The API server also supports webhook conversions", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11586"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.example.com\nspec:\n group: example.com\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n scope: Namespaced\n versions:\n - name: v1beta1\n # This indicates the v1beta1 version of the custom resource is no longer served.\n # API requests to this version receive a not found error in the server response.\n served: false\n schema: ...\n - name: v1\n served: true\n # The new served version should be set as the storage version\n storage: true\n schema: ...", "completion": "This yaml code Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. The API server also supports webhook conversions", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | The above example has a None conversion between versions which only sets the apiVersion field on conversion and does not change the rest of the object. 
The API server also supports webhook conversions", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11587"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:", "completion": " host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhook is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhook:\n # conversionReviewVersions indicates what ConversionReview versions are understood/preferred by the webhook.\n # The first version in the list understood by the API server is sent to the webhook.\n # The webhook must respond with a ConversionReview object in the same version it received.\n conversionReviewVersions: [\"v1\",\"v1beta1\"]\n clientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to 
be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11588"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhook is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhook:\n 
# conversionReviewVersions indicates what ConversionReview versions are understood/preferred by the webhook.\n # The first version in the list understood by the API server is sent to the webhook.\n # The webhook must respond with a ConversionReview object in the same version it received.\n conversionReviewVersions: [\"v1\",\"v1beta1\"]\n clientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: # name must match the spec fields below, and be in the for", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11589"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # 
group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # prunes object fields that are not specified in OpenAPI schemas below.\n preserveUnknownFields: false\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false", "completion": " schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhookClientConfig is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhookClientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11590"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: example.com\n # prunes object fields that are not specified in OpenAPI schemas below.\n preserveUnknownFields: false\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1beta1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n # Each version can define its own schema when there is no top-level\n # schema is defined.\n schema:\n openAPIV3Schema:\n type: object\n properties:\n hostPort:\n type: string\n - name: v1\n served: true\n storage: false\n schema:\n openAPIV3Schema:\n type: object\n properties:\n host:\n type: string\n port:\n type: string\n conversion:\n # the Webhook strategy instructs the API server to call an external webhook for any conversion between custom resources.\n strategy: Webhook\n # webhookClientConfig is required when strategy is `Webhook` and it configures the webhook endpoint to be called by API server.\n webhookClientConfig:\n service:\n namespace: default\n name: example-conversion-webhook-server\n path: /crdconvert\n caBundle: 
\"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | You can save the CustomResourceDefinition in a YAML file, then use kubectl apply to apply it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11591"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-versioned-crontab-with-conversion.yaml", "completion": "This bash code Versions in CustomResourceDefinitions | Make sure the conversion service is up and running before applying new changes.", "language": "bash", "type": "kubectl-command_explanation", "context": "Versions in CustomResourceDefinitions | Make sure the conversion service is up and running before applying new changes.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11592"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...", "completion": " conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions 
| # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11593"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11594"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:", "completion": " ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. 
The serv", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11595"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. The serv", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | The service stanza inside webhookClientConfig is a reference to the service for a conversion webhook. If the webhook is running within the cluster, then you should use service instead of url. The serv", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11596"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:", "completion": " clientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... 
conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11597"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n clientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... conversion: strategy: Webhook webhookClientC", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition ... spec: ... 
conversion: strategy: Webhook webhookClientC", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11598"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook", "completion": " webhookClientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API group serialized to JSON as the body.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11599"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhookClientConfig:\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234\n caBundle: \"Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K\"\n...", "completion": "This yaml code Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API group serialized to JSON as the body.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | Webhooks are sent a POST request, with Content-Type: application/json, with a ConversionReview API object in the apiextensions.k8s.io API 
group serialized to JSON as the body.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11600"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...", "completion": " conversion:\n strategy: Webhook\n webhook:\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. Webhooks are required to support at least one ConversionReview version understood by the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11601"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n webhook:\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "completion": "This yaml code Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. Webhooks are required to support at least one ConversionReview version understood by the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions. 
Webhooks are required to support at least one ConversionReview version understood by the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11602"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:", "completion": " ...\n conversion:\n strategy: Webhook\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11603"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\napiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\n...\nspec:\n ...\n conversion:\n strategy: Webhook\n conversionReviewVersions: [\"v1\", \"v1beta1\"]\n ...", "completion": "This yaml code Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Versions in CustomResourceDefinitions | If no conversionReviewVersions are specified, the default when creating apiextensions.k8s.io/v1beta1 custom resource definitions is v1beta1.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11604"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n 
\"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11605"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n 
\"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"request\": { # Random uid uniquely identifying this co", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11606"} {"prompt": "Complete this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n 
\"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11607"} {"prompt": "Explain this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"request\": {\n # Random uid uniquely identifying this conversion call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \n # The API group and version the objects should be converted to\n \"desiredAPIVersion\": \"example.com/v1\",\n \n # The list of objects to convert.\n # May contain one or more objects, in one or more versions.\n \"objects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"hostPort\": \"localhost:1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1beta1\",\n \"metadata\": {\n 
\"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"hostPort\": \"example.com:2345\"\n }\n ]\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing a ConversionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11608"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Success\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",", "completion": " \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": 
\"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11609"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Success\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": 
\"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { # must match <request.uid> \"uid\": \"7", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11610"} {"prompt": "Complete this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Failed\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",", "completion": " \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n 
\"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11611"} {"prompt": "Explain this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n # must match <request.uid>\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n \"result\": {\n \"status\": \"Failed\"\n },\n # Objects must match the order of request.objects, and have apiVersion set to <request.desiredAPIVersion>.\n # kind, metadata.uid, metadata.name, and metadata.namespace fields must not be changed by the webhook.\n # metadata.labels and metadata.annotations fields may be changed by the webhook.\n # All other changes to metadata fields by the webhook are ignored.\n \"convertedObjects\": [\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-04T14:03:02Z\",\n \"name\": \"local-crontab\",\n \"namespace\": \"default\",\n \"resourceVersion\": \"143\",\n \"uid\": \"3415a7fc-162b-4300-b5da-fd6083580d66\"\n },\n \"host\": \"localhost\",\n \"port\": \"1234\"\n },\n {\n \"kind\": \"CronTab\",\n \"apiVersion\": \"example.com/v1\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-09-03T13:02:01Z\",\n \"name\": \"remote-crontab\",\n \"resourceVersion\": \"12893\",\n \"uid\": \"359a83ec-b575-460d-b553-d859cedde8a0\"\n },\n \"host\": \"example.com\",\n \"port\": \"2345\"\n }\n ]\n 
}\n}", "completion": "This json code Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | If conversion fails, a webhook should return a response stanza containing the following fields:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11612"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11613"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"apiextensions.k8s.io/v1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "language": "json", "type": "api-response_explanation", "context": "Versions in 
CustomResourceDefinitions | { # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1 \"apiVersion\": \"apiextensions.k8s.io/v1beta1\", \"kind\": \"ConversionReview\", \"response\": { \"uid\": \"<value from request.uid>\", \"", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11614"} {"prompt": "Complete this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. H", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_completion_11615"} {"prompt": "Explain this json code:\n\n{\n # Deprecated in v1.16 in favor of apiextensions.k8s.io/v1\n \"apiVersion\": \"apiextensions.k8s.io/v1beta1\",\n \"kind\": \"ConversionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"result\": {\n \"status\": \"Failed\",\n \"message\": \"hostPort could not be parsed into a separate host and port\"\n }\n }\n}", "completion": "This json code Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. 
H", "language": "json", "type": "api-response_explanation", "context": "Versions in CustomResourceDefinitions | When an object is written, it is stored at the version designated as the storage version at the time of the write. If the storage version changes, existing objects are never converted automatically. H", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11616"} {"prompt": "Explain this bash code:\n\nkubectl patch customresourcedefinitions <CRD_Name> --subresource='status' --type='merge' -p '{\"status\":{\"storedVersions\":[\"v1\"]}}'", "completion": "This bash code Versions in CustomResourceDefinitions |", "language": "bash", "type": "kubectl-command_explanation", "context": "Versions in CustomResourceDefinitions |", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/", "id": "k8s_00454_code_explain_11617"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11618"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: 
Container", "completion": "This yaml code 1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11619"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11620"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code 1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11621"} {"prompt": "Explain this text code:\n\nPod \"example-conflict-with-limitrange-cpu\" is invalid: spec.containers[0].resources.requests: Invalid value: \"700m\": must be less than or equal to cpu limit", "completion": "This text code 1 - Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "language": "text", "type": "code-example_explanation", "context": "1 - Limit 
Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11622"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11623"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11624"} {"prompt": "Complete this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11625"} {"prompt": "Explain this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This text code 2 - Resource 
Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11626"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11627"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n 
metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11628"} {"prompt": "Explain this text code:\n\nresourcequota/pods-high created\nresourcequota/pods-medium created\nresourcequota/pods-low created", "completion": "This text code 2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11629"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Create a pod with priority \"high\". 
Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11630"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11631"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11632"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": 
"This yaml code 2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11633"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11634"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. 
Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11635"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11636"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11637"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: 
ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity scope.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11638"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity scope.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity scope.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11639"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n 
requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11640"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11641"} {"prompt": "Explain this text code:\n\nresourcequota/pvcs-gold created\nresourcequota/pvcs-silver created\nresourcequota/pvcs-copper created", 
"completion": "This text code 2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11642"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11643"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". 
Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11644"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11645"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11646"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. 
Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11647"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11648"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This bash code 2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11649"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: 
default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11650"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11651"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This bash code 2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper 
has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11652"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11653"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 2 - Resource Quotas | Print the manifest of the PVC using the following command:", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11654"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " 
volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11655"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code 2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11656"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "2 - 
Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11657"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. 
The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11658"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11659"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This yaml code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11660"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all 
standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11661"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11662"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This yaml code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11663"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11664"} {"prompt": "Explain this text code:\n\nNAME AGE\ncompute-resources 30s\nobject-counts 32s", "completion": "This text code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11665"} {"prompt": "Explain this bash code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11666"} {"prompt": "Complete this text code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "limits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": 
"k8s_00456_code_completion_11667"} {"prompt": "Explain this text code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nlimits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "completion": "This text code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11668"} {"prompt": "Explain this bash code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11669"} {"prompt": "Complete this text code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10", "completion": "persistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11670"} {"prompt": "Explain this text code:\n\nName: 
object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10\npersistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "completion": "This text code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11671"} {"prompt": "Explain this bash code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11672"} {"prompt": "Explain this bash code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This bash code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11673"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "count/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11674"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\ncount/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "completion": "This text code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11675"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11676"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "completion": "This yaml code 2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11677"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | In this case, a pod creation will be 
allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11678"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11679"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This bash code 2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11680"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11681"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. 
It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11682"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11683"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11684"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11685"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11686"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11687"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11688"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11689"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. 
It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11690"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11691"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11692"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml", "completion": "This text code 1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "text", "type": "code-example_explanation", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11693"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11694"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code 1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": 
"https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11695"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml", "completion": "This text code 1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "text", "type": "code-example_explanation", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11696"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11697"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code 1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11698"} {"prompt": "Explain this text code:\n\nPod \"example-conflict-with-limitrange-cpu\" is invalid: spec.containers[0].resources.requests: Invalid value: \"700m\": must be less than or equal to cpu limit", "completion": "This text code 1 - Limit Ranges | If you set both request and limit, then 
that new Pod will be scheduled successfully even with the same LimitRange in place:", "language": "text", "type": "code-example_explanation", "context": "1 - Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11699"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml", "completion": "This text code 1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "text", "type": "code-example_explanation", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11700"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11701"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": 
"https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11702"} {"prompt": "Explain this text code:\n\n<storage-class-name>.storageclass.storage.k8s.io/requests.storage", "completion": "This text code 2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11703"} {"prompt": "Explain this text code:\n\n<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims", "completion": "This text code 2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11704"} {"prompt": "Explain this text code:\n\ngold.storageclass.storage.k8s.io/requests.storage: 500Gi", "completion": "This text code 2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. 
This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11705"} {"prompt": "Explain this text code:\n\nbronze.storageclass.storage.k8s.io/requests.storage: 100Gi", "completion": "This text code 2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11706"} {"prompt": "Complete this yaml code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11707"} {"prompt": "Explain this yaml code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This yaml code 2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11708"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: 
List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11709"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": 
"https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11710"} {"prompt": "Explain this text code:\n\nresourcequota/pods-high created\nresourcequota/pods-medium created\nresourcequota/pods-low created", "completion": "This text code 2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11711"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11712"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Create a pod with priority \"high\". 
Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11713"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11714"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code 2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11715"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Operators can use CrossNamespacePodAffinity 
quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11716"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. 
Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11717"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11718"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11719"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: 
ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11720"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11721"} {"prompt": "Explain this text code:\n\nstatus.modifyVolumeStatus.targetVolumeAttributesClassName", "completion": "This text code 2 - Resource Quotas | A quota is matched and consumed only if scopeSelector in the quota spec selects the PVC.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | A quota is matched and consumed only if scopeSelector in the quota spec selects the PVC.", "source": 
"https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11722"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11723"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n 
scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11724"} {"prompt": "Explain this text code:\n\nresourcequota/pvcs-gold created\nresourcequota/pvcs-silver created\nresourcequota/pvcs-copper created", "completion": "This text code 2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11725"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". 
Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11726"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11727"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11728"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 2 - Resource Quotas | Apply it with 
kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11729"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11730"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. 
Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11731"} {"prompt": "Explain this shell code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This shell code 2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11732"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11733"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: 
pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11734"} {"prompt": "Explain this shell code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This shell code 2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11735"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11736"} {"prompt": "Explain this text 
code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 2 - Resource Quotas | Print the manifest of the PVC using the following command:", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11737"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11738"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: 
InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code 2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11739"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11740"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. 
The quota can be configured to quota either value.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11741"} {"prompt": "Complete this shell code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "shell", "type": "code-example", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11742"} {"prompt": "Explain this shell code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This shell code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11743"} {"prompt": "Explain this shell code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This shell code 2 - Resource Quotas | kubectl also 
supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11744"} {"prompt": "Complete this shell code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "shell", "type": "code-example", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11745"} {"prompt": "Explain this shell code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This shell code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11746"} {"prompt": "Explain this shell code:\n\nkubectl create -f ./object-counts.yaml 
--namespace=myspace", "completion": "This shell code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11747"} {"prompt": "Explain this none code:\n\nNAME AGE\ncompute-resources 30s\nobject-counts 32s", "completion": "This none code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11748"} {"prompt": "Explain this shell code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This shell code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11749"} {"prompt": "Complete this none code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "limits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "language": "none", "type": "code-example", 
"context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11750"} {"prompt": "Explain this none code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nlimits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "completion": "This none code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11751"} {"prompt": "Explain this shell code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This shell code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11752"} {"prompt": "Complete this none code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10", "completion": "persistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "language": "none", "type": "code-example", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard 
namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11753"} {"prompt": "Explain this none code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10\npersistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "completion": "This none code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11754"} {"prompt": "Explain this shell code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This shell code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11755"} {"prompt": "Explain this shell code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This shell code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. 
They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11756"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "count/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11757"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\ncount/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "completion": "This text code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11758"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11759"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "completion": "This yaml code 2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11760"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | In this case, a pod creation will be 
allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11761"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11762"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This shell code 2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "shell", "type": "code-example_explanation", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11763"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11764"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This yaml code 4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. 
It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11765"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11766"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code 4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11767"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11768"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This yaml code 4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11769"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11770"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This yaml code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11771"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11772"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This yaml code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. 
It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11773"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11774"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This yaml code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11775"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11776"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code 1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11777"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define 
the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11778"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code 1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11779"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11780"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": 
"This yaml code 1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11781"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11782"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code 1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11783"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": 
"https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11784"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11785"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11786"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11787"} {"prompt": "Complete this text 
code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "text", "type": "code-example", "context": "2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11788"} {"prompt": "Explain this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This text code 2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "text", "type": "code-example_explanation", "context": "2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11789"} {"prompt": "Complete this yaml code:\n\npolicy/quota.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": 
"k8s_00456_code_completion_11790"} {"prompt": "Explain this yaml code:\n\npolicy/quota.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11791"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", 
"type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11792"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11793"} {"prompt": "Complete this yaml code:\n\npolicy/high-priority-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu", "completion": " command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11794"} {"prompt": "Explain this 
yaml code:\n\npolicy/high-priority-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code 2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11795"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11796"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code 2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11797"} 
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11798"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11799"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": 
"yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11800"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11801"} {"prompt": "Complete this yaml code:\n\npolicy/quota-vac.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver", "completion": " spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: 
VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11802"} {"prompt": "Explain this yaml code:\n\npolicy/quota-vac.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11803"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n 
matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11804"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply the YAML using kubectl create.", "source": 
"https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11805"} {"prompt": "Complete this yaml code:\n\npolicy/gold-vac-pvc.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11806"} {"prompt": "Explain this yaml code:\n\npolicy/gold-vac-pvc.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11807"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11808"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: 
gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11809"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This bash code 2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11810"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This bash code 2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the 
PVC's status.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11811"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11812"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code 2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11813"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", 
"completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11814"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This yaml code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11815"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11816"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: 
object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11817"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This yaml code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11818"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11819"} {"prompt": "Explain this bash 
code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11820"} {"prompt": "Explain this bash code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11821"} {"prompt": "Explain this bash code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This bash code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11822"} {"prompt": "Explain this bash code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This bash code 2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11823"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11824"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: 
[\"cluster-services\"]", "completion": "This yaml code 2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11825"} {"prompt": "Complete this yaml code:\n\npolicy/priority-class-resourcequota.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services", "completion": "spec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11826"} {"prompt": "Explain this yaml code:\n\npolicy/priority-class-resourcequota.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11827"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": 
"https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11828"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11829"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This bash code 2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11830"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11831"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. 
It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11832"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11833"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11834"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11835"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11836"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11837"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11838"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11839"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. 
It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11840"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_completion_11841"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "text", "type": "code-example_explanation", "context": "4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/_print/", "id": "k8s_00456_code_explain_11842"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This bash code kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/", "id": "k8s_00457_code_explain_11843"} {"prompt": "Complete this text code:\n\n# Create a role binding for user1, user2, and group1 using the admin cluster role\n kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": " \n # Create a role binding for service account monitoring:sa-dev using the admin role\n kubectl create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "language": "text", "type": "code-example", "context": "kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/", "id": "k8s_00457_code_completion_11844"} {"prompt": "Explain this text code:\n\n# Create a role binding for user1, user2, and group1 using the admin cluster role\n kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1\n \n # Create a role binding for service account monitoring:sa-dev using the admin role\n kubectl create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "completion": "This text code kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/", "id": "k8s_00457_code_explain_11845"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This bash code kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/", "id": "k8s_00457_code_explain_11846"} {"prompt": "Complete this text code:\n\n# Create a role binding for user1, user2, and group1 using the admin cluster role\n kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": " \n # Create a role binding for service account monitoring:sa-dev using the admin role\n kubectl create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "language": "text", "type": "code-example", "context": "kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/", "id": "k8s_00457_code_completion_11847"} {"prompt": "Explain this text code:\n\n# Create a role binding for user1, user2, and group1 using the admin cluster role\n kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1\n \n # Create a role binding for service account monitoring:sa-dev using the admin role\n kubectl create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "completion": "This text code kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create rolebinding | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_rolebinding/", "id": "k8s_00457_code_explain_11848"} {"prompt": "Complete this text code:\n\n# Create a pod using the data in pod.json\n kubectl create -f ./pod.json\n \n # Create a pod based on the JSON passed into stdin", "completion": " cat pod.json | kubectl create -f -\n \n # Edit the data in registry.yaml in JSON then create the resource using the edited data\n kubectl create -f registry.yaml --edit -o json", "language": "text", "type": "code-example", "context": "kubectl create | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/", "id": "k8s_00458_code_completion_11849"} {"prompt": "Explain this text code:\n\n# Create a pod using the data in pod.json\n kubectl create -f ./pod.json\n \n # Create a pod based on the JSON passed into stdin\n cat pod.json | kubectl create -f -\n \n # Edit the data in registry.yaml in JSON then create the resource using the edited data\n kubectl create -f registry.yaml --edit -o json", "completion": "This text code kubectl create | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/", "id": "k8s_00458_code_explain_11850"} {"prompt": "Complete this text code:\n\n# Create a pod using the data in pod.json\n kubectl create -f ./pod.json\n \n # Create a pod based on the JSON passed into stdin", "completion": " cat pod.json | kubectl create -f -\n \n # Edit the data in registry.yaml in JSON then create the resource using the edited data\n kubectl create -f registry.yaml --edit -o json", "language": "text", "type": "code-example", "context": "kubectl create | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/", "id": "k8s_00458_code_completion_11851"} {"prompt": "Explain this text code:\n\n# Create a pod using the data in pod.json\n kubectl create -f ./pod.json\n \n # Create a pod based on the JSON passed into stdin\n cat pod.json | kubectl create -f -\n \n # Edit the data in registry.yaml in JSON then create the resource using the edited data\n kubectl create -f registry.yaml --edit -o json", "completion": "This text code kubectl create | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/", "id": "k8s_00458_code_explain_11852"} {"prompt": "Complete this text code:\n\n# Rollback to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Check the rollout status of a daemonset\n kubectl rollout status daemonset/foo", "completion": " \n # Restart a deployment\n kubectl rollout restart deployment/abc\n \n # Restart deployments with the 'app=nginx' label\n kubectl rollout restart deployment --selector=app=nginx", "language": "text", "type": "code-example", "context": "kubectl rollout | help for rollout", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/", "id": "k8s_00459_code_completion_11853"} {"prompt": "Explain this text code:\n\n# Rollback to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Check the rollout status of a daemonset\n kubectl rollout status daemonset/foo\n \n # Restart a deployment\n kubectl rollout restart deployment/abc\n \n # Restart deployments with the 'app=nginx' label\n kubectl rollout restart deployment --selector=app=nginx", "completion": "This text code kubectl rollout | help for rollout", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout | help for rollout", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/", "id": "k8s_00459_code_explain_11854"} {"prompt": "Complete this text code:\n\n# Rollback to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Check the rollout status of a daemonset\n kubectl rollout status daemonset/foo", "completion": " \n # Restart a deployment\n kubectl rollout restart deployment/abc\n \n # Restart deployments with the 'app=nginx' label\n kubectl rollout restart deployment --selector=app=nginx", "language": "text", "type": "code-example", "context": "kubectl rollout | help for rollout", "source": 
"https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/", "id": "k8s_00459_code_completion_11855"} {"prompt": "Explain this text code:\n\n# Rollback to the previous deployment\n kubectl rollout undo deployment/abc\n \n # Check the rollout status of a daemonset\n kubectl rollout status daemonset/foo\n \n # Restart a deployment\n kubectl rollout restart deployment/abc\n \n # Restart deployments with the 'app=nginx' label\n kubectl rollout restart deployment --selector=app=nginx", "completion": "This text code kubectl rollout | help for rollout", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout | help for rollout", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/", "id": "k8s_00459_code_explain_11856"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command", "completion": "spec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "language": "yaml", "type": "pod-definition", "context": "Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_completion_11857"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command\nspec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "completion": "This yaml code Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define a Command and Arguments for a Container | Create a Pod 
based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11858"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/commands.yaml", "completion": "This bash code Define a Command and Arguments for a Container | List the running Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define a Command and Arguments for a Container | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11859"} {"prompt": "Complete this text code:\n\nenv:\n- name: MESSAGE", "completion": " value: \"hello world\"\ncommand: [\"/bin/echo\"]\nargs: [\"$(MESSAGE)\"]", "language": "text", "type": "code-example", "context": "Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_completion_11860"} {"prompt": "Explain this text code:\n\nenv:\n- name: MESSAGE\n value: \"hello world\"\ncommand: [\"/bin/echo\"]\nargs: [\"$(MESSAGE)\"]", "completion": "This text code Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "language": "text", "type": "code-example_explanation", "context": "Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", 
"id": "k8s_00460_code_explain_11861"} {"prompt": "Explain this text code:\n\ncommand: [\"/bin/sh\"]\nargs: [\"-c\", \"while true; do echo hello; sleep 10;done\"]", "completion": "This text code Define a Command and Arguments for a Container |", "language": "text", "type": "code-example_explanation", "context": "Define a Command and Arguments for a Container |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11862"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command", "completion": "spec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "language": "yaml", "type": "pod-definition", "context": "Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_completion_11863"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command\nspec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "completion": "This yaml code Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11864"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/commands.yaml", 
"completion": "This shell code Define a Command and Arguments for a Container | List the running Pods:", "language": "shell", "type": "code-example_explanation", "context": "Define a Command and Arguments for a Container | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11865"} {"prompt": "Complete this yaml code:\n\nenv:\n- name: MESSAGE", "completion": " value: \"hello world\"\ncommand: [\"/bin/echo\"]\nargs: [\"$(MESSAGE)\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_completion_11866"} {"prompt": "Explain this yaml code:\n\nenv:\n- name: MESSAGE\n value: \"hello world\"\ncommand: [\"/bin/echo\"]\nargs: [\"$(MESSAGE)\"]", "completion": "This yaml code Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11867"} {"prompt": "Explain this shell code:\n\ncommand: [\"/bin/sh\"]\nargs: [\"-c\", \"while true; do echo hello; sleep 10;done\"]", "completion": "This shell code Define a Command and Arguments for a Container |", "language": "shell", "type": 
"code-example_explanation", "context": "Define a Command and Arguments for a Container |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11868"} {"prompt": "Complete this yaml code:\n\npods/commands.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command", "completion": "spec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "language": "yaml", "type": "pod-definition", "context": "Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_completion_11869"} {"prompt": "Explain this yaml code:\n\npods/commands.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command\nspec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "completion": "This yaml code Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11870"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command", "completion": "spec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "language": 
"yaml", "type": "pod-definition", "context": "Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_completion_11871"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: command-demo\n labels:\n purpose: demonstrate-command\nspec:\n containers:\n - name: command-demo-container\n image: debian\n command: [\"printenv\"]\n args: [\"HOSTNAME\", \"KUBERNETES_PORT\"]\n restartPolicy: OnFailure", "completion": "This yaml code Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "language": "yaml", "type": "pod-definition_explanation", "context": "Define a Command and Arguments for a Container | Create a Pod based on the YAML configuration file:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11872"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/commands.yaml", "completion": "This bash code Define a Command and Arguments for a Container | List the running Pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define a Command and Arguments for a Container | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11873"} {"prompt": "Complete this text code:\n\nenv:\n- name: MESSAGE", "completion": " value: \"hello world\"\ncommand: [\"/bin/echo\"]\nargs: [\"$(MESSAGE)\"]", "language": "text", "type": "code-example", "context": "Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_completion_11874"} {"prompt": "Explain this text code:\n\nenv:\n- name: MESSAGE\n value: \"hello world\"\ncommand: [\"/bin/echo\"]\nargs: [\"$(MESSAGE)\"]", "completion": "This text code Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "language": "text", "type": "code-example_explanation", "context": "Define a Command and Arguments for a Container | This means you can define an argument for a Pod using any of the techniques available for defining environment variables, including ConfigMaps and Secrets.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11875"} {"prompt": "Explain this text code:\n\ncommand: [\"/bin/sh\"]\nargs: [\"-c\", \"while true; do echo hello; sleep 10;done\"]", "completion": "This text code Define a Command and Arguments for a Container |", "language": "text", "type": "code-example_explanation", "context": "Define a Command and Arguments for a Container |", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/", "id": "k8s_00460_code_explain_11876"} {"prompt": "Complete this text code:\n\ndaemonsets \"<daemonset-name>\"\nREVISION CHANGE-CAUSE", "completion": "1 ...\n2 ...\n...", "language": "text", "type": "code-example", "context": "Perform a Rollback on a DaemonSet | To see the details of a specific revision:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_completion_11877"} {"prompt": "Explain this text code:\n\ndaemonsets \"<daemonset-name>\"\nREVISION CHANGE-CAUSE\n1 ...\n2 ...\n...", "completion": "This text code Perform a Rollback on a DaemonSet | To see the details of a 
specific revision:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | To see the details of a specific revision:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11878"} {"prompt": "Explain this bash code:\n\nkubectl rollout history daemonset <daemonset-name> --revision=1", "completion": "This bash code Perform a Rollback on a DaemonSet | This returns the details of that revision:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rollback on a DaemonSet | This returns the details of that revision:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11879"} {"prompt": "Complete this text code:\n\ndaemonsets \"<daemonset-name>\" with revision #1\nPod Template:\nLabels: foo=bar\nContainers:\napp:", "completion": " Image: ...\n Port: ...\n Environment: ...\n Mounts: ...\nVolumes: ...", "language": "text", "type": "code-example", "context": "Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_completion_11880"} {"prompt": "Explain this text code:\n\ndaemonsets \"<daemonset-name>\" with revision #1\nPod Template:\nLabels: foo=bar\nContainers:\napp:\n Image: ...\n Port: ...\n Environment: ...\n Mounts: ...\nVolumes: ...", "completion": "This text code Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11881"} {"prompt": "Explain this text code:\n\n# Specify the revision number you get from Step 1 in --to-revision\nkubectl rollout undo daemonset <daemonset-name> 
--to-revision=<revision>", "completion": "This text code Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11882"} {"prompt": "Explain this text code:\n\ndaemonset \"<daemonset-name>\" successfully rolled out", "completion": "This text code Perform a Rollback on a DaemonSet | In the previous kubectl rollout history step, you got a list of DaemonSet revisions. Each revision is stored in a resource named ControllerRevision.", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | In the previous kubectl rollout history step, you got a list of DaemonSet revisions. Each revision is stored in a resource named ControllerRevision.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11883"} {"prompt": "Explain this bash code:\n\nkubectl get controllerrevision -l <daemonset-selector-key>=<daemonset-selector-value>", "completion": "This bash code Perform a Rollback on a DaemonSet | This returns a list of ControllerRevisions:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rollback on a DaemonSet | This returns a list of ControllerRevisions:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11884"} {"prompt": "Explain this text code:\n\nNAME CONTROLLER REVISION AGE\n<daemonset-name>-<revision-hash> DaemonSet/<daemonset-name> 1 1h\n<daemonset-name>-<revision-hash> DaemonSet/<daemonset-name> 2 1h", "completion": "This text code Perform a Rollback on a DaemonSet | Each ControllerRevision stores the annotations and template of a DaemonSet revision.", "language": "text", "type": 
"code-example_explanation", "context": "Perform a Rollback on a DaemonSet | Each ControllerRevision stores the annotations and template of a DaemonSet revision.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11885"} {"prompt": "Complete this text code:\n\ndaemonsets \"<daemonset-name>\"\nREVISION CHANGE-CAUSE", "completion": "1 ...\n2 ...\n...", "language": "text", "type": "code-example", "context": "Perform a Rollback on a DaemonSet | To see the details of a specific revision:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_completion_11886"} {"prompt": "Explain this text code:\n\ndaemonsets \"<daemonset-name>\"\nREVISION CHANGE-CAUSE\n1 ...\n2 ...\n...", "completion": "This text code Perform a Rollback on a DaemonSet | To see the details of a specific revision:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | To see the details of a specific revision:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11887"} {"prompt": "Explain this shell code:\n\nkubectl rollout history daemonset <daemonset-name> --revision=1", "completion": "This shell code Perform a Rollback on a DaemonSet | This returns the details of that revision:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | This returns the details of that revision:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11888"} {"prompt": "Complete this text code:\n\ndaemonsets \"<daemonset-name>\" with revision #1\nPod Template:\nLabels: foo=bar\nContainers:\napp:", "completion": " Image: ...\n Port: ...\n Environment: ...\n Mounts: ...\nVolumes: ...", "language": "text", "type": "code-example", "context": "Perform a Rollback on a DaemonSet | If it succeeds, the 
command returns:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_completion_11889"} {"prompt": "Explain this text code:\n\ndaemonsets \"<daemonset-name>\" with revision #1\nPod Template:\nLabels: foo=bar\nContainers:\napp:\n Image: ...\n Port: ...\n Environment: ...\n Mounts: ...\nVolumes: ...", "completion": "This text code Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11890"} {"prompt": "Explain this shell code:\n\n# Specify the revision number you get from Step 1 in --to-revision\nkubectl rollout undo daemonset <daemonset-name> --to-revision=<revision>", "completion": "This shell code Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11891"} {"prompt": "Explain this text code:\n\ndaemonset \"<daemonset-name>\" successfully rolled out", "completion": "This text code Perform a Rollback on a DaemonSet | In the previous kubectl rollout history step, you got a list of DaemonSet revisions. Each revision is stored in a resource named ControllerRevision.", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | In the previous kubectl rollout history step, you got a list of DaemonSet revisions. 
Each revision is stored in a resource named ControllerRevision.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11892"} {"prompt": "Explain this shell code:\n\nkubectl get controllerrevision -l <daemonset-selector-key>=<daemonset-selector-value>", "completion": "This shell code Perform a Rollback on a DaemonSet | This returns a list of ControllerRevisions:", "language": "shell", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | This returns a list of ControllerRevisions:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11893"} {"prompt": "Explain this text code:\n\nNAME CONTROLLER REVISION AGE\n<daemonset-name>-<revision-hash> DaemonSet/<daemonset-name> 1 1h\n<daemonset-name>-<revision-hash> DaemonSet/<daemonset-name> 2 1h", "completion": "This text code Perform a Rollback on a DaemonSet | Each ControllerRevision stores the annotations and template of a DaemonSet revision.", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | Each ControllerRevision stores the annotations and template of a DaemonSet revision.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11894"} {"prompt": "Explain this bash code:\n\nkubectl rollout history daemonset <daemonset-name> --revision=1", "completion": "This bash code Perform a Rollback on a DaemonSet | This returns the details of that revision:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rollback on a DaemonSet | This returns the details of that revision:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11895"} {"prompt": "Explain this text code:\n\n# Specify the revision number you get from Step 1 in --to-revision\nkubectl rollout undo daemonset 
<daemonset-name> --to-revision=<revision>", "completion": "This text code Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "language": "text", "type": "code-example_explanation", "context": "Perform a Rollback on a DaemonSet | If it succeeds, the command returns:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11896"} {"prompt": "Explain this bash code:\n\nkubectl get controllerrevision -l <daemonset-selector-key>=<daemonset-selector-value>", "completion": "This bash code Perform a Rollback on a DaemonSet | This returns a list of ControllerRevisions:", "language": "bash", "type": "kubectl-command_explanation", "context": "Perform a Rollback on a DaemonSet | This returns a list of ControllerRevisions:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/", "id": "k8s_00461_code_explain_11897"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure", "completion": " containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "language": "yaml", "type": "pod-definition", "context": "Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_completion_11898"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "completion": "This yaml code Schedule GPUs | If different nodes in your cluster have 
different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "language": "yaml", "type": "pod-definition_explanation", "context": "Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11899"} {"prompt": "Explain this text code:\n\n# Label your nodes with the accelerator type they have.\nkubectl label nodes node1 accelerator=example-gpu-x100\nkubectl label nodes node2 accelerator=other-gpu-k915", "completion": "This text code Schedule GPUs | That label key accelerator is just an example; you can use a different label key if you prefer.", "language": "text", "type": "code-example_explanation", "context": "Schedule GPUs | That label key accelerator is just an example; you can use a different label key if you prefer.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11900"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n # You can use Kubernetes node affinity to schedule this Pod onto a node\n # that provides the kind of GPU that its container needs in order to work\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:", "completion": " - matchExpressions:\n - key: \"gpu.gpu-vendor.example/installed-memory\"\n operator: Gt # (greater than)\n values: [\"40535\"]\n - key: \"feature.node.kubernetes.io/pci-10.present\" # NFD Feature label\n values: [\"true\"] # (optional) only schedule on nodes with PCI device 10\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "language": "yaml", "type": 
"pod-definition", "context": "Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_completion_11901"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n # You can use Kubernetes node affinity to schedule this Pod onto a node\n # that provides the kind of GPU that its container needs in order to work\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: \"gpu.gpu-vendor.example/installed-memory\"\n operator: Gt # (greater than)\n values: [\"40535\"]\n - key: \"feature.node.kubernetes.io/pci-10.present\" # NFD Feature label\n values: [\"true\"] # (optional) only schedule on nodes with PCI device 10\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "completion": "This yaml code Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "yaml", "type": "pod-definition_explanation", "context": "Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11902"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure", "completion": " containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "language": "yaml", "type": "pod-definition", "context": "Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_completion_11903"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "completion": "This yaml code Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "language": "yaml", "type": "pod-definition_explanation", "context": "Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11904"} {"prompt": "Explain this shell code:\n\n# Label your nodes with the accelerator type they have.\nkubectl label nodes node1 accelerator=example-gpu-x100\nkubectl label nodes node2 accelerator=other-gpu-k915", "completion": "This shell code Schedule GPUs | 
That label key accelerator is just an example; you can use a different label key if you prefer.", "language": "shell", "type": "code-example_explanation", "context": "Schedule GPUs | That label key accelerator is just an example; you can use a different label key if you prefer.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11905"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n # You can use Kubernetes node affinity to schedule this Pod onto a node\n # that provides the kind of GPU that its container needs in order to work\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:", "completion": " - matchExpressions:\n - key: \"gpu.gpu-vendor.example/installed-memory\"\n operator: Gt # (greater than)\n values: [\"40535\"]\n - key: \"feature.node.kubernetes.io/pci-10.present\" # NFD Feature label\n values: [\"true\"] # (optional) only schedule on nodes with PCI device 10\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "language": "yaml", "type": "pod-definition", "context": "Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_completion_11906"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n # You can use Kubernetes node affinity to schedule this Pod onto a node\n # that provides the kind of GPU that its container needs in order to work\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: \"gpu.gpu-vendor.example/installed-memory\"\n operator: Gt # (greater than)\n values: [\"40535\"]\n - key: \"feature.node.kubernetes.io/pci-10.present\" # NFD Feature label\n values: [\"true\"] # (optional) only schedule on nodes with PCI device 10\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "completion": "This yaml code Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "yaml", "type": "pod-definition_explanation", "context": "Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11907"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure", "completion": " containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "language": "yaml", "type": "pod-definition", "context": "Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_completion_11908"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "completion": "This yaml code Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "language": "yaml", "type": "pod-definition_explanation", "context": "Schedule GPUs | If different nodes in your cluster have different types of GPUs, then you can use Node Labels and Node Selectors to schedule pods to appropriate nodes.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11909"} {"prompt": "Explain this text code:\n\n# Label your nodes with the accelerator type they have.\nkubectl label nodes node1 accelerator=example-gpu-x100\nkubectl label nodes node2 accelerator=other-gpu-k915", "completion": "This text code Schedule GPUs | 
That label key accelerator is just an example; you can use a different label key if you prefer.", "language": "text", "type": "code-example_explanation", "context": "Schedule GPUs | That label key accelerator is just an example; you can use a different label key if you prefer.", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11910"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n # You can use Kubernetes node affinity to schedule this Pod onto a node\n # that provides the kind of GPU that its container needs in order to work\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:", "completion": " - matchExpressions:\n - key: \"gpu.gpu-vendor.example/installed-memory\"\n operator: Gt # (greater than)\n values: [\"40535\"]\n - key: \"feature.node.kubernetes.io/pci-10.present\" # NFD Feature label\n values: [\"true\"] # (optional) only schedule on nodes with PCI device 10\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "language": "yaml", "type": "pod-definition", "context": "Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_completion_11911"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-vector-add\nspec:\n restartPolicy: OnFailure\n # You can use Kubernetes node affinity to schedule this Pod onto a node\n # that provides the kind of GPU that its container needs in order to work\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: \"gpu.gpu-vendor.example/installed-memory\"\n operator: Gt # (greater than)\n values: [\"40535\"]\n - key: \"feature.node.kubernetes.io/pci-10.present\" # NFD Feature label\n values: [\"true\"] # (optional) only schedule on nodes with PCI device 10\n containers:\n - name: example-vector-add\n image: \"registry.example/example-vector-add:v42\"\n resources:\n limits:\n gpu-vendor.example/example-gpu: 1 # requesting 1 GPU", "completion": "This yaml code Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "yaml", "type": "pod-definition_explanation", "context": "Schedule GPUs | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/", "id": "k8s_00462_code_explain_11912"} {"prompt": "Complete this text code:\n\n# Point to the internal API server hostname\nAPISERVER=https://kubernetes.default.svc\n\n# Path to ServiceAccount token\nSERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount\n\n# Read this Pod's namespace\nNAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)", "completion": "\n# Read the ServiceAccount bearer token\nTOKEN=$(cat ${SERVICEACCOUNT}/token)\n\n# Reference the internal certificate authority (CA)\nCACERT=${SERVICEACCOUNT}/ca.crt\n\n# Explore the API with TOKEN\ncurl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\" -X GET ${APISERVER}/api", "language": "text", "type": "code-example", "context": "Accessing the Kubernetes API from a Pod | The output will be similar to this:", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_completion_11913"} {"prompt": "Explain this text code:\n\n# Point to the internal API server hostname\nAPISERVER=https://kubernetes.default.svc\n\n# Path to ServiceAccount token\nSERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount\n\n# Read this Pod's namespace\nNAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)\n\n# Read the ServiceAccount bearer token\nTOKEN=$(cat ${SERVICEACCOUNT}/token)\n\n# Reference the internal certificate authority (CA)\nCACERT=${SERVICEACCOUNT}/ca.crt\n\n# Explore the API with TOKEN\ncurl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\" -X GET ${APISERVER}/api", "completion": "This text code Accessing the Kubernetes API from a Pod | The output will be similar to this:", "language": "text", "type": "code-example_explanation", "context": "Accessing the Kubernetes API from a Pod | The output will be similar to this:", "source": 
"https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11914"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\"v1\"],\n \"serverAddressByClientCIDRs\": [\n {", "completion": " \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing the Kubernetes API from a Pod |", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_completion_11915"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\"v1\"],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing the Kubernetes API from a Pod |", "language": "json", "type": "api-response_explanation", "context": "Accessing the Kubernetes API from a Pod |", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11916"} {"prompt": "Explain this text code:\n\n/var/run/secrets/kubernetes.io/serviceaccount/token", "completion": "This text code Accessing the Kubernetes API from a Pod | If available, a certificate bundle is placed into the filesystem tree of each container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, and should be used to verify the serving certificate of", "language": "text", "type": "code-example_explanation", "context": "Accessing the Kubernetes API from a Pod | If available, a certificate bundle is placed into the filesystem tree of each container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, and should be used to verify the serving certificate of", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11917"} {"prompt": "Explain this text code:\n\n/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", 
"completion": "This text code Accessing the Kubernetes API from a Pod | Finally, the default namespace to be used for namespaced API operations is placed in a file at /var/run/secrets/kubernetes.io/serviceaccount/namespace in each container.", "language": "text", "type": "code-example_explanation", "context": "Accessing the Kubernetes API from a Pod | Finally, the default namespace to be used for namespaced API operations is placed in a file at /var/run/secrets/kubernetes.io/serviceaccount/namespace in each container.", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11918"} {"prompt": "Explain this text code:\n\n/var/run/secrets/kubernetes.io/serviceaccount/namespace", "completion": "This text code Accessing the Kubernetes API from a Pod | If you would like to query the API without an official client library, you can run kubectl proxy as the command of a new sidecar container in the Pod. This way, kubectl proxy will authenticate to the", "language": "text", "type": "code-example_explanation", "context": "Accessing the Kubernetes API from a Pod | If you would like to query the API without an official client library, you can run kubectl proxy as the command of a new sidecar container in the Pod. 
This way, kubectl proxy will authenticate to the", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11919"} {"prompt": "Complete this shell code:\n\n# Point to the internal API server hostname\nAPISERVER=https://kubernetes.default.svc\n\n# Path to ServiceAccount token\nSERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount\n\n# Read this Pod's namespace\nNAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)", "completion": "\n# Read the ServiceAccount bearer token\nTOKEN=$(cat ${SERVICEACCOUNT}/token)\n\n# Reference the internal certificate authority (CA)\nCACERT=${SERVICEACCOUNT}/ca.crt\n\n# Explore the API with TOKEN\ncurl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\" -X GET ${APISERVER}/api", "language": "shell", "type": "code-example", "context": "Accessing the Kubernetes API from a Pod | The output will be similar to this:", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_completion_11920"} {"prompt": "Explain this shell code:\n\n# Point to the internal API server hostname\nAPISERVER=https://kubernetes.default.svc\n\n# Path to ServiceAccount token\nSERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount\n\n# Read this Pod's namespace\nNAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)\n\n# Read the ServiceAccount bearer token\nTOKEN=$(cat ${SERVICEACCOUNT}/token)\n\n# Reference the internal certificate authority (CA)\nCACERT=${SERVICEACCOUNT}/ca.crt\n\n# Explore the API with TOKEN\ncurl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\" -X GET ${APISERVER}/api", "completion": "This shell code Accessing the Kubernetes API from a Pod | The output will be similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Accessing the Kubernetes API from a Pod | The output will be similar to this:", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": 
"k8s_00465_code_explain_11921"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\"v1\"],\n \"serverAddressByClientCIDRs\": [\n {", "completion": " \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing the Kubernetes API from a Pod |", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_completion_11922"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\"v1\"],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing the Kubernetes API from a Pod |", "language": "json", "type": "api-response_explanation", "context": "Accessing the Kubernetes API from a Pod |", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11923"} {"prompt": "Complete this text code:\n\n# Point to the internal API server hostname\nAPISERVER=https://kubernetes.default.svc\n\n# Path to ServiceAccount token\nSERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount\n\n# Read this Pod's namespace\nNAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)", "completion": "\n# Read the ServiceAccount bearer token\nTOKEN=$(cat ${SERVICEACCOUNT}/token)\n\n# Reference the internal certificate authority (CA)\nCACERT=${SERVICEACCOUNT}/ca.crt\n\n# Explore the API with TOKEN\ncurl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\" -X GET ${APISERVER}/api", "language": "text", "type": "code-example", "context": "Accessing the Kubernetes API from a Pod | The output will be similar to this:", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_completion_11924"} {"prompt": "Explain this text code:\n\n# Point to the internal API server 
hostname\nAPISERVER=https://kubernetes.default.svc\n\n# Path to ServiceAccount token\nSERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount\n\n# Read this Pod's namespace\nNAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)\n\n# Read the ServiceAccount bearer token\nTOKEN=$(cat ${SERVICEACCOUNT}/token)\n\n# Reference the internal certificate authority (CA)\nCACERT=${SERVICEACCOUNT}/ca.crt\n\n# Explore the API with TOKEN\ncurl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\" -X GET ${APISERVER}/api", "completion": "This text code Accessing the Kubernetes API from a Pod | The output will be similar to this:", "language": "text", "type": "code-example_explanation", "context": "Accessing the Kubernetes API from a Pod | The output will be similar to this:", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11925"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\"v1\"],\n \"serverAddressByClientCIDRs\": [\n {", "completion": " \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing the Kubernetes API from a Pod |", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_completion_11926"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\"v1\"],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing the Kubernetes API from a Pod |", "language": "json", "type": "api-response_explanation", "context": "Accessing the Kubernetes API from a Pod |", "source": "https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/", "id": "k8s_00465_code_explain_11927"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: 
zk-pdb", "completion": "spec:\n minAvailable: 2\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11928"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n minAvailable: 2\n selector:\n matchLabels:\n app: zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11929"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb", "completion": "spec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11930"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. 
The use of maxUnavailable is recommended as it automatically respo", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11931"} {"prompt": "Explain this text code:\n\nNAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE\nzk-pdb 2 N/A 0 7s", "completion": "This text code Specifying a Disruption Budget for your Application | If there are matching pods (say, 3), then you would see something like this:", "language": "text", "type": "code-example_explanation", "context": "Specifying a Disruption Budget for your Application | If there are matching pods (say, 3), then you would see something like this:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11932"} {"prompt": "Explain this text code:\n\nNAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE\nzk-pdb 2 N/A 1 7s", "completion": "This text code Specifying a Disruption Budget for your Application | The non-zero value for ALLOWED DISRUPTIONS means that the disruption controller has seen the pods, counted the matching pods, and updated the status of the PDB.", "language": "text", "type": "code-example_explanation", "context": "Specifying a Disruption Budget for your Application | The non-zero value for ALLOWED DISRUPTIONS means that the disruption controller has seen the pods, counted the matching pods, and updated the status of the PDB.", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11933"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n annotations:\n…\n 
creationTimestamp: \"2020-03-04T04:22:56Z\"\n generation: 1", "completion": " name: zk-pdb\n…\nstatus:\n currentHealthy: 3\n desiredHealthy: 2\n disruptionsAllowed: 1\n expectedPods: 3\n observedGeneration: 1", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". These pods are tracked via .status.currentHealthy field in the PDB sta", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11934"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n annotations:\n…\n creationTimestamp: \"2020-03-04T04:22:56Z\"\n generation: 1\n name: zk-pdb\n…\nstatus:\n currentHealthy: 3\n desiredHealthy: 2\n disruptionsAllowed: 1\n expectedPods: 3\n observedGeneration: 1", "completion": "This yaml code Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". These pods are tracked via .status.currentHealthy field in the PDB sta", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". 
These pods are tracked via .status.currentHealthy field in the PDB sta", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11935"} {"prompt": "Explain this text code:\n\npolicy/zookeeper-pod-disruption-budget-minavailable.yaml", "completion": "This text code Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "language": "text", "type": "code-example_explanation", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11936"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb", "completion": "spec:\n minAvailable: 2\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11937"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n minAvailable: 2\n selector:\n matchLabels:\n app: zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11938"} {"prompt": "Explain this text code:\n\npolicy/zookeeper-pod-disruption-budget-maxunavailable.yaml", "completion": "This text code Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, 
both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "language": "text", "type": "code-example_explanation", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11939"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb", "completion": "spec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11940"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. 
The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11941"} {"prompt": "Explain this text code:\n\nNAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE\nzk-pdb 2 N/A 0 7s", "completion": "This text code Specifying a Disruption Budget for your Application | If there are matching pods (say, 3), then you would see something like this:", "language": "text", "type": "code-example_explanation", "context": "Specifying a Disruption Budget for your Application | If there are matching pods (say, 3), then you would see something like this:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11942"} {"prompt": "Explain this text code:\n\nNAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE\nzk-pdb 2 N/A 1 7s", "completion": "This text code Specifying a Disruption Budget for your Application | The non-zero value for ALLOWED DISRUPTIONS means that the disruption controller has seen the pods, counted the matching pods, and updated the status of the PDB.", "language": "text", "type": "code-example_explanation", "context": "Specifying a Disruption Budget for your Application | The non-zero value for ALLOWED DISRUPTIONS means that the disruption controller has seen the pods, counted the matching pods, and updated the status of the PDB.", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11943"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n annotations:\n…\n creationTimestamp: \"2020-03-04T04:22:56Z\"\n generation: 1", "completion": " name: zk-pdb\n…\nstatus:\n currentHealthy: 3\n desiredHealthy: 2\n disruptionsAllowed: 1\n expectedPods: 3\n observedGeneration: 1", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | The 
current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". These pods are tracked via .status.currentHealthy field in the PDB sta", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11944"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n annotations:\n…\n creationTimestamp: \"2020-03-04T04:22:56Z\"\n generation: 1\n name: zk-pdb\n…\nstatus:\n currentHealthy: 3\n desiredHealthy: 2\n disruptionsAllowed: 1\n expectedPods: 3\n observedGeneration: 1", "completion": "This yaml code Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". These pods are tracked via .status.currentHealthy field in the PDB sta", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". 
These pods are tracked via .status.currentHealthy field in the PDB sta", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11945"} {"prompt": "Complete this yaml code:\n\npolicy/zookeeper-pod-disruption-budget-minavailable.yaml\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb", "completion": "spec:\n minAvailable: 2\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11946"} {"prompt": "Explain this yaml code:\n\npolicy/zookeeper-pod-disruption-budget-minavailable.yaml\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n minAvailable: 2\n selector:\n matchLabels:\n app: zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11947"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb", "completion": "spec:\n minAvailable: 2\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11948"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n minAvailable: 2\n selector:\n matchLabels:\n app: 
zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | Example PDB Using maxUnavailable:", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11949"} {"prompt": "Complete this yaml code:\n\npolicy/zookeeper-pod-disruption-budget-maxunavailable.yaml\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb", "completion": "spec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11950"} {"prompt": "Explain this yaml code:\n\npolicy/zookeeper-pod-disruption-budget-maxunavailable.yaml\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. 
The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11951"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb", "completion": "spec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11952"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n name: zk-pdb\nspec:\n maxUnavailable: 1\n selector:\n matchLabels:\n app: zookeeper", "completion": "This yaml code Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. The use of maxUnavailable is recommended as it automatically respo", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | For example, if the above zk-pdb object selects the pods of a StatefulSet of size 3, both specifications have the exact same meaning. 
The use of maxUnavailable is recommended as it automatically respo", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11953"} {"prompt": "Complete this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n annotations:\n…\n creationTimestamp: \"2020-03-04T04:22:56Z\"\n generation: 1", "completion": " name: zk-pdb\n…\nstatus:\n currentHealthy: 3\n desiredHealthy: 2\n disruptionsAllowed: 1\n expectedPods: 3\n observedGeneration: 1", "language": "yaml", "type": "pod-definition", "context": "Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". These pods are tracked via .status.currentHealthy field in the PDB sta", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_completion_11954"} {"prompt": "Explain this yaml code:\n\napiVersion: policy/v1\nkind: PodDisruptionBudget\nmetadata:\n annotations:\n…\n creationTimestamp: \"2020-03-04T04:22:56Z\"\n generation: 1\n name: zk-pdb\n…\nstatus:\n currentHealthy: 3\n desiredHealthy: 2\n disruptionsAllowed: 1\n expectedPods: 3\n observedGeneration: 1", "completion": "This yaml code Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". These pods are tracked via .status.currentHealthy field in the PDB sta", "language": "yaml", "type": "pod-definition_explanation", "context": "Specifying a Disruption Budget for your Application | The current implementation considers healthy pods, as pods that have .status.conditions item with type=\"Ready\" and status=\"True\". 
These pods are tracked via .status.currentHealthy field in the PDB sta", "source": "https://kubernetes.io/docs/tasks/run-application/configure-pdb/", "id": "k8s_00469_code_explain_11955"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n name: hello-world\nspec:\n replicas: 5\n selector:\n matchLabels:", "completion": " app.kubernetes.io/name: load-balancer-example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n spec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n name: hello-world\n ports:\n - containerPort: 8080", "language": "yaml", "type": "deployment-definition", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_completion_11956"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n name: hello-world\nspec:\n replicas: 5\n selector:\n matchLabels:\n app.kubernetes.io/name: load-balancer-example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n spec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n name: hello-world\n ports:\n - containerPort: 8080", "completion": "This yaml code Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. 
The ReplicaSet has five Pods each of which runs the Hello World application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11957"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml", "completion": "This bash code Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "language": "bash", "type": "kubectl-command_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. 
The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11958"} {"prompt": "Explain this bash code:\n\nkubectl get deployments hello-world\nkubectl describe deployments hello-world", "completion": "This bash code Exposing an External IP Address to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "language": "bash", "type": "kubectl-command_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11959"} {"prompt": "Explain this bash code:\n\nkubectl get replicasets\nkubectl describe replicasets", "completion": "This bash code Exposing an External IP Address to Access an Application in a Cluster | Create a Service object that exposes the deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Create a Service object that exposes the deployment:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11960"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment hello-world --type=LoadBalancer --name=my-service", "completion": "This bash code Exposing an External IP Address to Access an Application in a Cluster | Display information about the Service:", "language": "bash", "type": "kubectl-command_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Display information about the Service:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11961"} 
{"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s", "completion": "This text code Exposing an External IP Address to Access an Application in a Cluster | Display detailed information about the Service:", "language": "text", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Display detailed information about the Service:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11962"} {"prompt": "Complete this text code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=load-balancer-example\nAnnotations: <none>\nSelector: app.kubernetes.io/name=load-balancer-example\nType: LoadBalancer", "completion": "IP: 10.3.245.137\nLoadBalancer Ingress: 104.198.205.71\nPort: <unset> 8080/TCP\nNodePort: <unset> 32377/TCP\nEndpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...\nSession Affinity: None\nEvents: <none>", "language": "text", "type": "code-example", "context": "Exposing an External IP Address to Access an Application in a Cluster | Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port and NodePort. 
In this exa", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_completion_11963"} {"prompt": "Explain this text code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=load-balancer-example\nAnnotations: <none>\nSelector: app.kubernetes.io/name=load-balancer-example\nType: LoadBalancer\nIP: 10.3.245.137\nLoadBalancer Ingress: 104.198.205.71\nPort: <unset> 8080/TCP\nNodePort: <unset> 32377/TCP\nEndpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...\nSession Affinity: None\nEvents: <none>", "completion": "This text code Exposing an External IP Address to Access an Application in a Cluster | Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port and NodePort. In this exa", "language": "text", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port and NodePort. In this exa", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11964"} {"prompt": "Complete this text code:\n\nNAME ... IP NODE\nhello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc", "completion": "hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a\nhello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-segjf ... 
10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc", "language": "text", "type": "code-example", "context": "Exposing an External IP Address to Access an Application in a Cluster | Use the external IP address (LoadBalancer Ingress) to access the Hello World application:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_completion_11965"} {"prompt": "Explain this text code:\n\nNAME ... IP NODE\nhello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a\nhello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc", "completion": "This text code Exposing an External IP Address to Access an Application in a Cluster | Use the external IP address (LoadBalancer Ingress) to access the Hello World application:", "language": "text", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Use the external IP address (LoadBalancer Ingress) to access the Hello World application:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11966"} {"prompt": "Explain this text code:\n\nHello, world!\nVersion: 2.0.0\nHostname: 0bd46b45f32f", "completion": "This text code Exposing an External IP Address to Access an Application in a Cluster | To delete the Service, enter this command:", "language": "text", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | To delete the Service, enter this command:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": 
"k8s_00471_code_explain_11967"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n name: hello-world\nspec:\n replicas: 5\n selector:\n matchLabels:", "completion": " app.kubernetes.io/name: load-balancer-example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n spec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n name: hello-world\n ports:\n - containerPort: 8080", "language": "yaml", "type": "deployment-definition", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_completion_11968"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n name: hello-world\nspec:\n replicas: 5\n selector:\n matchLabels:\n app.kubernetes.io/name: load-balancer-example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n spec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n name: hello-world\n ports:\n - containerPort: 8080", "completion": "This yaml code Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. 
The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11969"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml", "completion": "This shell code Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "language": "shell", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11970"} {"prompt": "Explain this shell code:\n\nkubectl get deployments hello-world\nkubectl describe deployments hello-world", "completion": "This shell code Exposing an External IP Address to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "language": "shell", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Display information about your ReplicaSet objects:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11971"} {"prompt": "Explain this shell code:\n\nkubectl get replicasets\nkubectl describe replicasets", "completion": "This shell code Exposing an External IP Address to Access an Application in a Cluster | Create a Service object that exposes the deployment:", "language": "shell", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an 
Application in a Cluster | Create a Service object that exposes the deployment:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11972"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment hello-world --type=LoadBalancer --name=my-service", "completion": "This shell code Exposing an External IP Address to Access an Application in a Cluster | Display information about the Service:", "language": "shell", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Display information about the Service:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11973"} {"prompt": "Explain this console code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s", "completion": "This console code Exposing an External IP Address to Access an Application in a Cluster | Display detailed information about the Service:", "language": "console", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Display detailed information about the Service:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11974"} {"prompt": "Complete this console code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=load-balancer-example\nAnnotations: <none>\nSelector: app.kubernetes.io/name=load-balancer-example\nType: LoadBalancer", "completion": "IP: 10.3.245.137\nLoadBalancer Ingress: 104.198.205.71\nPort: <unset> 8080/TCP\nNodePort: <unset> 32377/TCP\nEndpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...\nSession Affinity: None\nEvents: <none>", "language": "console", "type": "code-example", "context": "Exposing an External IP Address 
to Access an Application in a Cluster | Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port and NodePort. In this exa", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_completion_11975"} {"prompt": "Explain this console code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=load-balancer-example\nAnnotations: <none>\nSelector: app.kubernetes.io/name=load-balancer-example\nType: LoadBalancer\nIP: 10.3.245.137\nLoadBalancer Ingress: 104.198.205.71\nPort: <unset> 8080/TCP\nNodePort: <unset> 32377/TCP\nEndpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...\nSession Affinity: None\nEvents: <none>", "completion": "This console code Exposing an External IP Address to Access an Application in a Cluster | Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port and NodePort. In this exa", "language": "console", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port and NodePort. In this exa", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11976"} {"prompt": "Complete this console code:\n\nNAME ... IP NODE\nhello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc", "completion": "hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a\nhello-world-2895499144-o4z13 ... 
10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc", "language": "console", "type": "code-example", "context": "Exposing an External IP Address to Access an Application in a Cluster | Use the external IP address (LoadBalancer Ingress) to access the Hello World application:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_completion_11977"} {"prompt": "Explain this console code:\n\nNAME ... IP NODE\nhello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a\nhello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc\nhello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc", "completion": "This console code Exposing an External IP Address to Access an Application in a Cluster | Use the external IP address (LoadBalancer Ingress) to access the Hello World application:", "language": "console", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | Use the external IP address (LoadBalancer Ingress) to access the Hello World application:", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11978"} {"prompt": "Explain this shell code:\n\nHello, world!\nVersion: 2.0.0\nHostname: 0bd46b45f32f", "completion": "This shell code Exposing an External IP Address to Access an Application in a Cluster | To delete the Service, enter this command:", "language": "shell", "type": "code-example_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | To delete the Service, enter this command:", "source": 
"https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11979"} {"prompt": "Complete this yaml code:\n\nservice/load-balancer-example.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n name: hello-world\nspec:\n replicas: 5\n selector:\n matchLabels:", "completion": " app.kubernetes.io/name: load-balancer-example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n spec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n name: hello-world\n ports:\n - containerPort: 8080", "language": "yaml", "type": "deployment-definition", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_completion_11980"} {"prompt": "Explain this yaml code:\n\nservice/load-balancer-example.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n name: hello-world\nspec:\n replicas: 5\n selector:\n matchLabels:\n app.kubernetes.io/name: load-balancer-example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: load-balancer-example\n spec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n name: hello-world\n ports:\n - containerPort: 8080", "completion": "This yaml code Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. 
The ReplicaSet has five Pods each of which runs the Hello World application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Exposing an External IP Address to Access an Application in a Cluster | The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has five Pods each of which runs the Hello World application.", "source": "https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/", "id": "k8s_00471_code_explain_11981"} {"prompt": "Explain this text code:\n\nkube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...", "completion": "This text code Admission Control in Kubernetes | The Kubernetes API server flag disable-admission-plugins takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | The Kubernetes API server flag disable-admission-plugins takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_11994"} {"prompt": "Explain this text code:\n\nkube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ...", "completion": "This text code Admission Control in Kubernetes | To see which admission plugins are enabled:", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | To see which admission plugins are enabled:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_11995"} {"prompt": "Explain this text code:\n\nCertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook", "completion": "This text code Admission Control in Kubernetes | Type: Validating.", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | Type: Validating.", "source":
"https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_11996"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:", "completion": " - name: EventRateLimit\n path: eventconfig.yaml\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | There are four types of limits that can be specified in the configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_11997"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: EventRateLimit\n path: eventconfig.yaml\n...", "completion": "This yaml code Admission Control in Kubernetes | There are four types of limits that can be specified in the configuration:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | There are four types of limits that can be specified in the configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_11998"} {"prompt": "Complete this yaml code:\n\napiVersion: eventratelimit.admission.k8s.io/v1alpha1\nkind: Configuration\nlimits:\n - type: Namespace\n qps: 50", "completion": " burst: 100\n cacheSize: 2000\n - type: User\n qps: 10\n burst: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | See the EventRateLimit Config API (v1alpha1) for more details.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_11999"} {"prompt": "Explain this yaml code:\n\napiVersion: eventratelimit.admission.k8s.io/v1alpha1\nkind: Configuration\nlimits:\n - type: Namespace\n qps: 50\n burst: 100\n cacheSize: 2000\n - type: User\n qps: 10\n 
burst: 50", "completion": "This yaml code Admission Control in Kubernetes | See the EventRateLimit Config API (v1alpha1) for more details.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | See the EventRateLimit Config API (v1alpha1) for more details.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12000"} {"prompt": "Complete this text code:\n\nimagePolicy:\n kubeConfigFile: /path/to/kubeconfig/for/backend\n # time in s to cache approval\n allowTTL: 50\n # time in s to cache denial", "completion": " denyTTL: 50\n # time in ms to wait between retries\n retryBackoff: 500\n # determines behavior if the webhook backend fails\n defaultAllow: true", "language": "text", "type": "code-example", "context": "Admission Control in Kubernetes | Reference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12001"} {"prompt": "Explain this text code:\n\nimagePolicy:\n kubeConfigFile: /path/to/kubeconfig/for/backend\n # time in s to cache approval\n allowTTL: 50\n # time in s to cache denial\n denyTTL: 50\n # time in ms to wait between retries\n retryBackoff: 500\n # determines behavior if the webhook backend fails\n defaultAllow: true", "completion": "This text code Admission Control in Kubernetes | Reference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | Reference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12002"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:", "completion": " - name: ImagePolicyWebhook\n path: imagepolicyconfig.yaml\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | Alternatively, you can embed the configuration directly in the file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12003"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: ImagePolicyWebhook\n path: imagepolicyconfig.yaml\n...", "completion": "This yaml code Admission Control in Kubernetes | Alternatively, you can embed the configuration directly in the file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | Alternatively, you can embed the configuration directly in the file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12004"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: ImagePolicyWebhook\n configuration:", "completion": " imagePolicy:\n kubeConfigFile: <path-to-kubeconfig-file>\n allowTTL: 50\n denyTTL: 50\n retryBackoff: 500\n defaultAllow: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | The ImagePolicyWebhook config file must reference a kubeconfig formatted file which sets up the connection to the backend. 
It is required that the backend communicate over TLS.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12005"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: ImagePolicyWebhook\n configuration:\n imagePolicy:\n kubeConfigFile: <path-to-kubeconfig-file>\n allowTTL: 50\n denyTTL: 50\n retryBackoff: 500\n defaultAllow: true", "completion": "This yaml code Admission Control in Kubernetes | The ImagePolicyWebhook config file must reference a kubeconfig formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | The ImagePolicyWebhook config file must reference a kubeconfig formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12006"} {"prompt": "Complete this text code:\n\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-imagepolicy-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://images.example.com/policy # URL of remote service to query. 
Must use 'https'.", "completion": "\n# users refers to the API server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook admission controller to use\n client-key: /path/to/key.pem # key matching the cert", "language": "text", "type": "code-example", "context": "Admission Control in Kubernetes | For additional HTTP configuration, refer to the kubeconfig documentation.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12007"} {"prompt": "Explain this text code:\n\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-imagepolicy-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://images.example.com/policy # URL of remote service to query. Must use 'https'.\n\n# users refers to the API server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook admission controller to use\n client-key: /path/to/key.pem # key matching the cert", "completion": "This text code Admission Control in Kubernetes | For additional HTTP configuration, refer to the kubeconfig documentation.", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | For additional HTTP configuration, refer to the kubeconfig documentation.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12008"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"myrepo/myimage:v1\"\n },\n {", "completion": " \"image\": \"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed\"\n }\n ],\n \"annotations\": {\n 
\"mycluster.image-policy.k8s.io/ticket-1234\": \"break-glass\"\n },\n \"namespace\": \"mynamespace\"\n }\n}", "language": "json", "type": "api-response", "context": "Admission Control in Kubernetes | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored, and may be omitted. A permissive respo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12009"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"myrepo/myimage:v1\"\n },\n {\n \"image\": \"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed\"\n }\n ],\n \"annotations\": {\n \"mycluster.image-policy.k8s.io/ticket-1234\": \"break-glass\"\n },\n \"namespace\": \"mynamespace\"\n }\n}", "completion": "This json code Admission Control in Kubernetes | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored, and may be omitted. A permissive respo", "language": "json", "type": "api-response_explanation", "context": "Admission Control in Kubernetes | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored, and may be omitted. 
A permissive respo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12010"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",", "completion": " \"status\": {\n \"allowed\": true\n }\n}", "language": "json", "type": "api-response", "context": "Admission Control in Kubernetes | To disallow access, the service would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12011"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"status\": {\n \"allowed\": true\n }\n}", "completion": "This json code Admission Control in Kubernetes | To disallow access, the service would return:", "language": "json", "type": "api-response_explanation", "context": "Admission Control in Kubernetes | To disallow access, the service would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12012"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"status\": {", "completion": " \"allowed\": false,\n \"reason\": \"image currently blacklisted\"\n }\n}", "language": "json", "type": "api-response", "context": "Admission Control in Kubernetes | For further documentation refer to the imagepolicy.v1alpha1 API.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12013"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"status\": {\n \"allowed\": false,\n \"reason\": \"image currently blacklisted\"\n }\n}", "completion": "This json code Admission Control in Kubernetes | For further documentation refer to 
the imagepolicy.v1alpha1 API.", "language": "json", "type": "api-response_explanation", "context": "Admission Control in Kubernetes | For further documentation refer to the imagepolicy.v1alpha1 API.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12014"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: gluster-vol-default\nprovisioner: kubernetes.io/glusterfs", "completion": "parameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | For more information about persistent volume claims, see PersistentVolumeClaims.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12015"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: gluster-vol-default\nprovisioner: kubernetes.io/glusterfs\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code Admission Control in Kubernetes | For more information about persistent volume claims, see PersistentVolumeClaims.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | For more information about persistent volume claims, see PersistentVolumeClaims.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12016"} {"prompt": "Complete this text code:\n\npodNodeSelectorPluginConfig:\n clusterDefaultNodeSelector: name-of-node-selector", "completion": " namespace1: name-of-node-selector\n namespace2: name-of-node-selector", "language": "text", "type": 
"code-example", "context": "Admission Control in Kubernetes | Reference the PodNodeSelector configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12017"} {"prompt": "Explain this text code:\n\npodNodeSelectorPluginConfig:\n clusterDefaultNodeSelector: name-of-node-selector\n namespace1: name-of-node-selector\n namespace2: name-of-node-selector", "completion": "This text code Admission Control in Kubernetes | Reference the PodNodeSelector configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | Reference the PodNodeSelector configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12018"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:", "completion": "- name: PodNodeSelector\n path: podnodeselector.yaml\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | PodNodeSelector uses the annotation key scheduler.alpha.kubernetes.io/node-selector to assign node selectors to namespaces.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12019"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodNodeSelector\n path: podnodeselector.yaml\n...", "completion": "This yaml code Admission Control in Kubernetes | PodNodeSelector uses the annotation key scheduler.alpha.kubernetes.io/node-selector 
to assign node selectors to namespaces.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | PodNodeSelector uses the annotation key scheduler.alpha.kubernetes.io/node-selector to assign node selectors to namespaces.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12020"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " annotations:\n scheduler.alpha.kubernetes.io/node-selector: name-of-node-selector\n name: namespace3", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | This admission controller has the following behavior:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12021"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n annotations:\n scheduler.alpha.kubernetes.io/node-selector: name-of-node-selector\n name: namespace3", "completion": "This yaml code Admission Control in Kubernetes | This admission controller has the following behavior:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | This admission controller has the following behavior:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12022"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " name: apps-that-need-nodes-exclusively\n annotations:\n scheduler.alpha.kubernetes.io/defaultTolerations: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'\n scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "language": "yaml", "type": "kubernetes-manifest", "context": 
"Admission Control in Kubernetes | This admission controller is disabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12023"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: apps-that-need-nodes-exclusively\n annotations:\n scheduler.alpha.kubernetes.io/defaultTolerations: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'\n scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "completion": "This yaml code Admission Control in Kubernetes | This admission controller is disabled by default.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | This admission controller is disabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12024"} {"prompt": "Explain this shell code:\n\nkube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...", "completion": "This shell code Admission Control in Kubernetes | The Kubernetes API server flag disable-admission-plugins takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.", "language": "shell", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | The Kubernetes API server flag disable-admission-plugins takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12025"} {"prompt": "Explain this shell code:\n\nkube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ...", "completion": "This shell code 
Admission Control in Kubernetes | To see which admission plugins are enabled:", "language": "shell", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | To see which admission plugins are enabled:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12026"} {"prompt": "Explain this shell code:\n\nCertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook", "completion": "This shell code Admission Control in Kubernetes | Type: Validating.", "language": "shell", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | Type: Validating.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12027"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " name: apps-that-need-nodes-exclusively\n annotations:\n scheduler.alpha.kubernetes.io/defaultTolerations: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'\n scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "language":
"yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | This admission controller is disabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12054"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: apps-that-need-nodes-exclusively\n annotations:\n scheduler.alpha.kubernetes.io/defaultTolerations: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'\n scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "completion": "This yaml code Admission Control in Kubernetes | This admission controller is disabled by default.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | This admission controller is disabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12055"} {"prompt": "Explain this text code:\n\nkube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...", "completion": "This text code Admission Control in Kubernetes | The Kubernetes API server flag disable-admission-plugins takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | The Kubernetes API server flag disable-admission-plugins takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12056"} {"prompt": "Explain this text code:\n\nkube-apiserver 
--disable-admission-plugins=PodNodeSelector,AlwaysDeny ...", "completion": "This text code Admission Control in Kubernetes | To see which admission plugins are enabled:", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | To see which admission plugins are enabled:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12057"} {"prompt": "Explain this text code:\n\nCertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook", "completion": "This text code Admission Control in Kubernetes | Type: Validating.", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | Type: Validating.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12058"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:", "completion": " - name: EventRateLimit\n path: eventconfig.yaml\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | There are four types of limits that can be specified in the configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12059"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: EventRateLimit\n path: eventconfig.yaml\n...", "completion": "This yaml code Admission Control in Kubernetes | There are four types of limits 
that can be specified in the configuration:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | There are four types of limits that can be specified in the configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12060"} {"prompt": "Complete this yaml code:\n\napiVersion: eventratelimit.admission.k8s.io/v1alpha1\nkind: Configuration\nlimits:\n - type: Namespace\n qps: 50", "completion": " burst: 100\n cacheSize: 2000\n - type: User\n qps: 10\n burst: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | See the EventRateLimit Config API (v1alpha1) for more details.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12061"} {"prompt": "Explain this yaml code:\n\napiVersion: eventratelimit.admission.k8s.io/v1alpha1\nkind: Configuration\nlimits:\n - type: Namespace\n qps: 50\n burst: 100\n cacheSize: 2000\n - type: User\n qps: 10\n burst: 50", "completion": "This yaml code Admission Control in Kubernetes | See the EventRateLimit Config API (v1alpha1) for more details.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | See the EventRateLimit Config API (v1alpha1) for more details.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12062"} {"prompt": "Complete this text code:\n\nimagePolicy:\n kubeConfigFile: /path/to/kubeconfig/for/backend\n # time in s to cache approval\n allowTTL: 50\n # time in s to cache denial", "completion": " denyTTL: 50\n # time in ms to wait between retries\n retryBackoff: 500\n # determines behavior if the webhook backend fails\n defaultAllow: true", "language": "text", "type": "code-example", "context": "Admission Control in Kubernetes | Reference the 
ImagePolicyWebhook configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12063"} {"prompt": "Explain this text code:\n\nimagePolicy:\n kubeConfigFile: /path/to/kubeconfig/for/backend\n # time in s to cache approval\n allowTTL: 50\n # time in s to cache denial\n denyTTL: 50\n # time in ms to wait between retries\n retryBackoff: 500\n # determines behavior if the webhook backend fails\n defaultAllow: true", "completion": "This text code Admission Control in Kubernetes | Reference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | Reference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12064"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:", "completion": " - name: ImagePolicyWebhook\n path: imagepolicyconfig.yaml\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | Alternatively, you can embed the configuration directly in the file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12065"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: ImagePolicyWebhook\n path: imagepolicyconfig.yaml\n...", "completion": "This yaml code Admission Control in Kubernetes | Alternatively, you can embed the configuration directly in 
the file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | Alternatively, you can embed the configuration directly in the file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12066"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: ImagePolicyWebhook\n configuration:", "completion": " imagePolicy:\n kubeConfigFile: <path-to-kubeconfig-file>\n allowTTL: 50\n denyTTL: 50\n retryBackoff: 500\n defaultAllow: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | The ImagePolicyWebhook config file must reference a kubeconfig formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12067"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: ImagePolicyWebhook\n configuration:\n imagePolicy:\n kubeConfigFile: <path-to-kubeconfig-file>\n allowTTL: 50\n denyTTL: 50\n retryBackoff: 500\n defaultAllow: true", "completion": "This yaml code Admission Control in Kubernetes | The ImagePolicyWebhook config file must reference a kubeconfig formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | The ImagePolicyWebhook config file must reference a kubeconfig formatted file which sets up the connection to the backend. 
It is required that the backend communicate over TLS.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12068"} {"prompt": "Complete this text code:\n\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-imagepolicy-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://images.example.com/policy # URL of remote service to query. Must use 'https'.", "completion": "\n# users refers to the API server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook admission controller to use\n client-key: /path/to/key.pem # key matching the cert", "language": "text", "type": "code-example", "context": "Admission Control in Kubernetes | For additional HTTP configuration, refer to the kubeconfig documentation.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12069"} {"prompt": "Explain this text code:\n\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-imagepolicy-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://images.example.com/policy # URL of remote service to query. 
Must use 'https'.\n\n# users refers to the API server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook admission controller to use\n client-key: /path/to/key.pem # key matching the cert", "completion": "This text code Admission Control in Kubernetes | For additional HTTP configuration, refer to the kubeconfig documentation.", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | For additional HTTP configuration, refer to the kubeconfig documentation.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12070"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"myrepo/myimage:v1\"\n },\n {", "completion": " \"image\": \"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed\"\n }\n ],\n \"annotations\": {\n \"mycluster.image-policy.k8s.io/ticket-1234\": \"break-glass\"\n },\n \"namespace\": \"mynamespace\"\n }\n}", "language": "json", "type": "api-response", "context": "Admission Control in Kubernetes | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored, and may be omitted. 
A permissive respo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12071"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"myrepo/myimage:v1\"\n },\n {\n \"image\": \"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed\"\n }\n ],\n \"annotations\": {\n \"mycluster.image-policy.k8s.io/ticket-1234\": \"break-glass\"\n },\n \"namespace\": \"mynamespace\"\n }\n}", "completion": "This json code Admission Control in Kubernetes | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored, and may be omitted. A permissive respo", "language": "json", "type": "api-response_explanation", "context": "Admission Control in Kubernetes | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored, and may be omitted. 
A permissive respo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12072"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",", "completion": " \"status\": {\n \"allowed\": true\n }\n}", "language": "json", "type": "api-response", "context": "Admission Control in Kubernetes | To disallow access, the service would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12073"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"status\": {\n \"allowed\": true\n }\n}", "completion": "This json code Admission Control in Kubernetes | To disallow access, the service would return:", "language": "json", "type": "api-response_explanation", "context": "Admission Control in Kubernetes | To disallow access, the service would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12074"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"status\": {", "completion": " \"allowed\": false,\n \"reason\": \"image currently blacklisted\"\n }\n}", "language": "json", "type": "api-response", "context": "Admission Control in Kubernetes | For further documentation refer to the imagepolicy.v1alpha1 API.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12075"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"imagepolicy.k8s.io/v1alpha1\",\n \"kind\": \"ImageReview\",\n \"status\": {\n \"allowed\": false,\n \"reason\": \"image currently blacklisted\"\n }\n}", "completion": "This json code Admission Control in Kubernetes | For further documentation refer to 
the imagepolicy.v1alpha1 API.", "language": "json", "type": "api-response_explanation", "context": "Admission Control in Kubernetes | For further documentation refer to the imagepolicy.v1alpha1 API.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12076"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: gluster-vol-default\nprovisioner: kubernetes.io/glusterfs", "completion": "parameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | For more information about persistent volume claims, see PersistentVolumeClaims.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12077"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: gluster-vol-default\nprovisioner: kubernetes.io/glusterfs\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code Admission Control in Kubernetes | For more information about persistent volume claims, see PersistentVolumeClaims.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | For more information about persistent volume claims, see PersistentVolumeClaims.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12078"} {"prompt": "Complete this text code:\n\npodNodeSelectorPluginConfig:\n clusterDefaultNodeSelector: name-of-node-selector", "completion": " namespace1: name-of-node-selector\n namespace2: name-of-node-selector", "language": "text", "type": 
"code-example", "context": "Admission Control in Kubernetes | Reference the PodNodeSelector configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12079"} {"prompt": "Explain this text code:\n\npodNodeSelectorPluginConfig:\n clusterDefaultNodeSelector: name-of-node-selector\n namespace1: name-of-node-selector\n namespace2: name-of-node-selector", "completion": "This text code Admission Control in Kubernetes | Reference the PodNodeSelector configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "language": "text", "type": "code-example_explanation", "context": "Admission Control in Kubernetes | Reference the PodNodeSelector configuration file from the file provided to the API server's command line flag --admission-control-config-file:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12080"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:", "completion": "- name: PodNodeSelector\n path: podnodeselector.yaml\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | PodNodeSelector uses the annotation key scheduler.alpha.kubernetes.io/node-selector to assign node selectors to namespaces.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12081"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodNodeSelector\n path: podnodeselector.yaml\n...", "completion": "This yaml code Admission Control in Kubernetes | PodNodeSelector uses the annotation key scheduler.alpha.kubernetes.io/node-selector 
to assign node selectors to namespaces.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | PodNodeSelector uses the annotation key scheduler.alpha.kubernetes.io/node-selector to assign node selectors to namespaces.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12082"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " annotations:\n scheduler.alpha.kubernetes.io/node-selector: name-of-node-selector\n name: namespace3", "language": "yaml", "type": "kubernetes-manifest", "context": "Admission Control in Kubernetes | This admission controller has the following behavior:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12083"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n annotations:\n scheduler.alpha.kubernetes.io/node-selector: name-of-node-selector\n name: namespace3", "completion": "This yaml code Admission Control in Kubernetes | This admission controller has the following behavior:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | This admission controller has the following behavior:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12084"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:", "completion": " name: apps-that-need-nodes-exclusively\n annotations:\n scheduler.alpha.kubernetes.io/defaultTolerations: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'\n scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "language": "yaml", "type": "kubernetes-manifest", "context": 
"Admission Control in Kubernetes | This admission controller is disabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_completion_12085"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: apps-that-need-nodes-exclusively\n annotations:\n scheduler.alpha.kubernetes.io/defaultTolerations: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'\n scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"dedicated-node\"}]'", "completion": "This yaml code Admission Control in Kubernetes | This admission controller is disabled by default.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Admission Control in Kubernetes | This admission controller is disabled by default.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/", "id": "k8s_00472_code_explain_12086"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_completion_12087"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - 
-c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12088"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/application/job/cronjob.yaml", "completion": "This bash code Running Automated Tasks with a CronJob | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Automated Tasks with a CronJob | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12089"} {"prompt": "Explain this text code:\n\nNAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE\nhello */1 * * * * False 0 <none> 10s", "completion": "This text code Running Automated Tasks with a CronJob | As you can see from the results of the command, the cron job has not scheduled or run any jobs yet. Watch for the job to be created in around one minute:", "language": "text", "type": "code-example_explanation", "context": "Running Automated Tasks with a CronJob | As you can see from the results of the command, the cron job has not scheduled or run any jobs yet. 
Watch for the job to be created in around one minute:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12090"} {"prompt": "Complete this text code:\n\nNAME COMPLETIONS DURATION AGE\nhello-4111706356 0/1 0s", "completion": "hello-4111706356 0/1 0s 0s\nhello-4111706356 1/1 5s 5s", "language": "text", "type": "code-example", "context": "Running Automated Tasks with a CronJob | Now you've seen one running job scheduled by the \"hello\" cron job. You can stop watching the job and view the cron job again to see that it scheduled the job:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_completion_12091"} {"prompt": "Explain this text code:\n\nNAME COMPLETIONS DURATION AGE\nhello-4111706356 0/1 0s\nhello-4111706356 0/1 0s 0s\nhello-4111706356 1/1 5s 5s", "completion": "This text code Running Automated Tasks with a CronJob | Now you've seen one running job scheduled by the \"hello\" cron job. You can stop watching the job and view the cron job again to see that it scheduled the job:", "language": "text", "type": "code-example_explanation", "context": "Running Automated Tasks with a CronJob | Now you've seen one running job scheduled by the \"hello\" cron job. You can stop watching the job and view the cron job again to see that it scheduled the job:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12092"} {"prompt": "Explain this text code:\n\nNAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE\nhello */1 * * * * False 0 50s 75s", "completion": "This text code Running Automated Tasks with a CronJob | You should see that the cron job hello successfully scheduled a job at the time specified in LAST SCHEDULE. 
There are currently 0 active jobs, meaning that the job has completed or failed.", "language": "text", "type": "code-example_explanation", "context": "Running Automated Tasks with a CronJob | You should see that the cron job hello successfully scheduled a job at the time specified in LAST SCHEDULE. There are currently 0 active jobs, meaning that the job has completed or failed.", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12093"} {"prompt": "Explain this text code:\n\n# Replace \"hello-4111706356\" with the job name in your system\npods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items[*].metadata.name})", "completion": "This text code Running Automated Tasks with a CronJob | Show the pod log:", "language": "text", "type": "code-example_explanation", "context": "Running Automated Tasks with a CronJob | Show the pod log:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12094"} {"prompt": "Explain this text code:\n\nFri Feb 22 11:02:09 UTC 2019\nHello from the Kubernetes cluster", "completion": "This text code Running Automated Tasks with a CronJob | When you don't need a cron job any more, delete it with kubectl delete cronjob <cronjob name>:", "language": "text", "type": "code-example_explanation", "context": "Running Automated Tasks with a CronJob | When you don't need a cron job any more, delete it with kubectl delete cronjob <cronjob name>:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12095"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the 
Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_completion_12096"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12097"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/application/job/cronjob.yaml", "completion": "This shell code Running Automated Tasks with a CronJob | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Running Automated Tasks with a CronJob | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12098"} {"prompt": "Explain this shell code:\n\n# Replace \"hello-4111706356\" with the job name in your system\npods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items[*].metadata.name})", "completion": "This shell code Running Automated Tasks with a CronJob | Show the pod log:", "language": "shell", "type": "code-example_explanation", "context": "Running Automated Tasks with a CronJob | Show the pod log:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12103"} {"prompt": "Complete this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_completion_12105"} {"prompt": "Explain this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Automated Tasks with a CronJob | Run the example CronJob by using this command:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12106"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/application/job/cronjob.yaml", "completion": "This bash code Running Automated Tasks with a CronJob | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Automated Tasks with a CronJob | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/", "id": "k8s_00473_code_explain_12109"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/", "id": "k8s_00477_code_completion_12111"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "completion": "This yaml code Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. 
A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/", "id": "k8s_00477_code_explain_12112"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting", "completion": "value: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/", "id": "k8s_00477_code_completion_12113"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting\nvalue: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "completion": "This yaml code Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. 
The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/", "id": "k8s_00477_code_explain_12114"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "language": "yaml", "type": "pod-definition", "context": "Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/", "id": "k8s_00477_code_completion_12115"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "completion": "This yaml code Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. 
As a result, the", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/", "id": "k8s_00477_code_explain_12116"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:\n requests:", "completion": " - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12129"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12130"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | Review the following example manifest:dra/resourceclaim.yaml apiVersion: resource.k8s.io/v1 kind: ResourceClaim metadata: name: example-resource-claim spec: devices: requests: - name: sing", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | Review the following example manifest:dra/resourceclaim.yaml apiVersion: resource.k8s.io/v1 kind: ResourceClaim metadata: name: example-resource-claim spec: devices: requests: - name: sing", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12131"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:\n - name: single-gpu-claim", "completion": " exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12132"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:\n - name: 
single-gpu-claim\n exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12133"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/resourceclaim.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | To request device allocation, specify a ResourceClaim or a ResourceClaimTemplate in the resourceClaims field of the Pod specification. Then, request a specific claim by name in the resources.claims fi", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | To request device allocation, specify a ResourceClaim or a ResourceClaimTemplate in the resourceClaims field of the Pod specification. 
Then, request a specific claim by name in the resources.claims fi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12134"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: separate-gpu-claim", "completion": " - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n - name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12135"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: separate-gpu-claim\n - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n 
- name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "completion": "This yaml code Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12136"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/dra-example-job.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | Try the following troubleshooting steps:", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | Try the following troubleshooting steps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12137"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/dra-example-job.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | To delete your resource claims, run one of the following commands:", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | To delete your resource claims, run one of the following commands:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12138"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | Delete the ResourceClaim:", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | Delete the ResourceClaim:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12139"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/resourceclaim.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA |", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12140"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:\n requests:", "completion": " - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12141"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that 
requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12142"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml", "completion": "This shell code Allocate Devices to Workloads with DRA | Review the following example manifest: dra/resourceclaim.yaml apiVersion: resource.k8s.io/v1 kind: ResourceClaim metadata: name: example-resource-claim spec: devices: requests: - name: sing", "language": "shell", "type": "code-example_explanation", "context": "Allocate Devices to Workloads with DRA | Review the following example manifest: dra/resourceclaim.yaml apiVersion: resource.k8s.io/v1 kind: ResourceClaim metadata: name: example-resource-claim spec: devices: requests: - name: sing", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12143"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:\n - name: single-gpu-claim", "completion": " exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaim that requests devices in the example-device-class
DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12144"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:\n - name: single-gpu-claim\n exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source":
Then, request a specific claim by name in the resources.claims fi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12146"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: separate-gpu-claim", "completion": " - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n - name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12147"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: separate-gpu-claim\n - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n 
- name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "completion": "This yaml code Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12148"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/dra/dra-example-job.yaml", "completion": "This shell code Allocate Devices to Workloads with DRA | Try the following troubleshooting steps:", "language": "shell", "type": "code-example_explanation", "context": "Allocate Devices to Workloads with DRA | Try the following troubleshooting steps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12149"} {"prompt": "Explain this text code:\n\nmust specify one of: resourceClaimName, resourceClaimTemplateName", "completion": "This text code Allocate Devices to Workloads with DRA | To delete the Kubernetes objects that you created in this task, follow these steps:", "language": "text", "type": "code-example_explanation", "context": "Allocate Devices to Workloads with DRA | To delete the Kubernetes objects that you created in this task, follow these steps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12150"} {"prompt": "Explain this shell code:\n\nkubectl delete -f https://k8s.io/examples/dra/dra-example-job.yaml", "completion": "This shell code Allocate Devices to Workloads with DRA | To delete your resource claims, run one of the following commands:", "language": "shell", "type": "code-example_explanation", "context": "Allocate Devices to Workloads with DRA | 
To delete your resource claims, run one of the following commands:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12151"} {"prompt": "Explain this shell code:\n\nkubectl delete -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml", "completion": "This shell code Allocate Devices to Workloads with DRA | Delete the ResourceClaim:", "language": "shell", "type": "code-example_explanation", "context": "Allocate Devices to Workloads with DRA | Delete the ResourceClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12152"} {"prompt": "Explain this shell code:\n\nkubectl delete -f https://k8s.io/examples/dra/resourceclaim.yaml", "completion": "This shell code Allocate Devices to Workloads with DRA |", "language": "shell", "type": "code-example_explanation", "context": "Allocate Devices to Workloads with DRA |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12153"} {"prompt": "Complete this yaml code:\n\ndra/resourceclaimtemplate.yaml\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:", "completion": " requests:\n - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12154"} {"prompt": "Explain this yaml code:\n\ndra/resourceclaimtemplate.yaml\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12155"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:\n requests:", "completion": " - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12156"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: example-resource-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: gpu-claim\n exactly:\n deviceClassName: example-device-class\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates a ResourceClaimTemplate that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12157"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | Review the following example manifest:dra/resourceclaim.yaml apiVersion: resource.k8s.io/v1 kind: ResourceClaim metadata: name: example-resource-claim spec: devices: requests: - name: sing", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | Review the following example manifest:dra/resourceclaim.yaml apiVersion: resource.k8s.io/v1 kind: ResourceClaim metadata: name: example-resource-claim spec: devices: requests: - name: sing", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12158"} {"prompt": "Complete this yaml code:\n\ndra/resourceclaim.yaml\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:", "completion": " - name: single-gpu-claim\n exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12159"} {"prompt": "Explain this yaml code:\n\ndra/resourceclaim.yaml\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:\n - name: single-gpu-claim\n exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12160"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:\n - name: single-gpu-claim", "completion": " exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12161"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: example-resource-claim\nspec:\n devices:\n requests:\n - name: single-gpu-claim\n exactly:\n deviceClassName: example-device-class\n allocationMode: All\n selectors:\n - cel:\n expression: |-\n device.attributes[\"driver.example.com\"].type == \"gpu\" &&\n device.capacity[\"driver.example.com\"].memory == quantity(\"64Gi\")", "completion": "This yaml code Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | This manifest creates ResourceClaim that requests devices in the example-device-class DeviceClass that match both of the following parameters:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12162"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/resourceclaim.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | To request device allocation, specify a ResourceClaim or a ResourceClaimTemplate in the resourceClaims field of the Pod specification. Then, request a specific claim by name in the resources.claims fi", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | To request device allocation, specify a ResourceClaim or a ResourceClaimTemplate in the resourceClaims field of the Pod specification. Then, request a specific claim by name in the resources.claims fi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12163"} {"prompt": "Complete this yaml code:\n\ndra/dra-example-job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:", "completion": " - name: separate-gpu-claim\n - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n - name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12164"} {"prompt": "Explain this yaml code:\n\ndra/dra-example-job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: separate-gpu-claim\n - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n - name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "completion": "This yaml code Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12165"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: separate-gpu-claim", "completion": " - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - 
name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n - name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "language": "yaml", "type": "kubernetes-manifest", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_completion_12166"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: example-dra-job\nspec:\n completions: 10\n parallelism: 2\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: container0\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: separate-gpu-claim\n - name: container1\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n - name: container2\n image: ubuntu:24.04\n command: [\"sleep\", \"9999\"]\n resources:\n claims:\n - name: shared-gpu-claim\n resourceClaims:\n - name: separate-gpu-claim\n resourceClaimTemplateName: example-resource-claim-template\n - name: shared-gpu-claim\n resourceClaimName: example-resource-claim", "completion": "This yaml code Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Allocate Devices to Workloads with DRA | Each Pod in this Job has the following properties:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12167"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/dra/dra-example-job.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | Try the following troubleshooting steps:", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with 
DRA | Try the following troubleshooting steps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12168"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/dra-example-job.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | To delete your resource claims, run one of the following commands:", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | To delete your resource claims, run one of the following commands:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12169"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA | Delete the ResourceClaim:", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA | Delete the ResourceClaim:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12170"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/dra/resourceclaim.yaml", "completion": "This bash code Allocate Devices to Workloads with DRA |", "language": "bash", "type": "kubectl-command_explanation", "context": "Allocate Devices to Workloads with DRA |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/", "id": "k8s_00479_code_explain_12171"} {"prompt": "Explain this bash code:\n\nkubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.10", "completion": "This bash code Using Source IP | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output 
is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12172"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION\nkubernetes-node-6jst Ready <none> 2h v1.13.0", "completion": "kubernetes-node-cx31 Ready <none> 2h v1.13.0\nkubernetes-node-jj1t Ready <none> 2h v1.13.0", "language": "text", "type": "code-example", "context": "Using Source IP | Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_completion_12173"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION\nkubernetes-node-6jst Ready <none> 2h v1.13.0\nkubernetes-node-cx31 Ready <none> 2h v1.13.0\nkubernetes-node-jj1t Ready <none> 2h v1.13.0", "completion": "This text code Using Source IP | Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12174"} {"prompt": "Explain this text code:\n\n# Run this in a shell on the node you want to query.\ncurl http://localhost:10249/proxyMode", "completion": "This text code Using Source IP | The output is:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12175"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080", "completion": "This bash code Using Source IP | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": 
"k8s_00480_code_explain_12176"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nclusterip ClusterIP 10.0.170.92 <none> 80/TCP 51s", "completion": "This text code Using Source IP | And hitting the ClusterIP from a pod in the same cluster:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | And hitting the ClusterIP from a pod in the same cluster:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12177"} {"prompt": "Explain this bash code:\n\nkubectl run busybox -it --image=busybox:1.28 --restart=Never --rm", "completion": "This bash code Using Source IP | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12178"} {"prompt": "Explain this text code:\n\nWaiting for pod default/busybox to be running, status is Pending, pod ready: false\nIf you don't see a command prompt, try pressing enter.", "completion": "This text code Using Source IP | You can then run a command inside that Pod:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | You can then run a command inside that Pod:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12179"} {"prompt": "Explain this text code:\n\n# Run this inside the terminal from \"kubectl run\"\nip addr", "completion": "This text code Using Source IP | …then use wget to query the local webserver", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | …then use wget to query the local webserver", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12180"} {"prompt": "Complete this text code:\n\n1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue\n 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope host lo\n valid_lft forever preferred_lft forever\n inet6 ::1/128 scope host\n valid_lft forever preferred_lft forever", "completion": "3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue\n link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff\n inet 10.244.3.8/24 scope global eth0\n valid_lft forever preferred_lft forever\n inet6 fe80::188a:84ff:feb0:26a5/64 scope link\n valid_lft forever preferred_lft forever", "language": "text", "type": "code-example", "context": "Using Source IP | …then use wget to query the local webserver", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_completion_12181"} {"prompt": "Explain this text code:\n\n1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue\n link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope host lo\n valid_lft forever preferred_lft forever\n inet6 ::1/128 scope host\n valid_lft forever preferred_lft forever\n3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue\n link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff\n inet 10.244.3.8/24 scope global eth0\n valid_lft forever preferred_lft forever\n inet6 fe80::188a:84ff:feb0:26a5/64 scope link\n valid_lft forever preferred_lft forever", "completion": "This text code Using Source IP | …then use wget to query the local webserver", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | …then use wget to query the local webserver", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12182"} {"prompt": "Explain this text code:\n\n# Replace \"10.0.170.92\" with the IPv4 address of the Service named \"clusterip\"\nwget -qO - 10.0.170.92", "completion": "This text code Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", 
"language": "text", "type": "code-example_explanation", "context": "Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12183"} {"prompt": "Complete this text code:\n\nCLIENT VALUES:\nclient_address=10.244.3.8", "completion": "command=GET\n...", "language": "text", "type": "code-example", "context": "Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_completion_12184"} {"prompt": "Explain this text code:\n\nCLIENT VALUES:\nclient_address=10.244.3.8\ncommand=GET\n...", "completion": "This text code Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12185"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort", "completion": "This bash code Using Source IP | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12186"} {"prompt": "Explain this text code:\n\nNODEPORT=$(kubectl get -o jsonpath=\"{.spec.ports[0].nodePort}\" services nodeport)\nNODES=$(kubectl get nodes -o jsonpath='{ 
$.items[*].status.addresses[?(@.type==\"InternalIP\")].address }')", "completion": "This text code Using Source IP | If you're running on a cloud provider, you may need to open up a firewall-rule for the nodes:nodeport reported above. Now you can try reaching the Service from outside the cluster through the node port", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | If you're running on a cloud provider, you may need to open up a firewall-rule for the nodes:nodeport reported above. Now you can try reaching the Service from outside the cluster through the node port", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12187"} {"prompt": "Explain this text code:\n\nfor node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done", "completion": "This text code Using Source IP | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12188"} {"prompt": "Explain this text code:\n\nclient_address=10.180.1.1\nclient_address=10.240.0.5\nclient_address=10.240.0.3", "completion": "This text code Using Source IP | Note that these are not the correct client IPs, they're cluster internal IPs. This is what happens:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Note that these are not the correct client IPs, they're cluster internal IPs. 
This is what happens:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12189"} {"prompt": "Explain this bash code:\n\nkubectl patch svc nodeport -p '{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'", "completion": "This bash code Using Source IP | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12190"} {"prompt": "Explain this text code:\n\nfor node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done", "completion": "This text code Using Source IP | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12191"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer", "completion": "This bash code Using Source IP | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12192"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nloadbalancer LoadBalancer 10.0.65.118 203.0.113.140 80/TCP 5m", "completion": "This text code Using Source IP | Next, send a request to this Service's external-ip:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Next, send a request to this Service's external-ip:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12193"} {"prompt": "Explain this bash code:\n\nkubectl patch svc loadbalancer -p 
'{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'", "completion": "This bash code Using Source IP | You should immediately see the service.spec.healthCheckNodePort field allocated by Kubernetes:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | You should immediately see the service.spec.healthCheckNodePort field allocated by Kubernetes:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12194"} {"prompt": "Explain this bash code:\n\nkubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort", "completion": "This bash code Using Source IP | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12195"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nsource-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-node-6jst", "completion": "This text code Using Source IP | Use curl to fetch the /healthz endpoint on various nodes:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Use curl to fetch the /healthz endpoint on various nodes:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12196"} {"prompt": "Explain this text code:\n\n# Run this locally on a node you choose\ncurl localhost:32122/healthz", "completion": "This text code Using Source IP | On a different node you might get a different result:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | On a different node you might get a different result:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12197"} {"prompt": "Explain this text code:\n\n# Run this locally on a node you choose\ncurl 
localhost:32122/healthz", "completion": "This text code Using Source IP | A controller running on the control plane is responsible for allocating the cloud load balancer. The same controller also allocates HTTP health checks pointing to this port/path on each node. Wait abo", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | A controller running on the control plane is responsible for allocating the cloud load balancer. The same controller also allocates HTTP health checks pointing to this port/path on each node. Wait abo", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12198"} {"prompt": "Explain this shell code:\n\nkubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.10", "completion": "This shell code Using Source IP | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12199"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION\nkubernetes-node-6jst Ready <none> 2h v1.13.0", "completion": "kubernetes-node-cx31 Ready <none> 2h v1.13.0\nkubernetes-node-jj1t Ready <none> 2h v1.13.0", "language": "text", "type": "code-example", "context": "Using Source IP | Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_completion_12200"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION\nkubernetes-node-6jst Ready <none> 2h v1.13.0\nkubernetes-node-cx31 Ready <none> 2h v1.13.0\nkubernetes-node-jj1t Ready <none> 2h v1.13.0", "completion": "This text code Using Source IP | Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Get the proxy 
mode on one of the nodes (kube-proxy listens on port 10249):", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12201"} {"prompt": "Explain this shell code:\n\n# Run this in a shell on the node you want to query.\ncurl http://localhost:10249/proxyMode", "completion": "This shell code Using Source IP | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12202"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080", "completion": "This shell code Using Source IP | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12203"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nclusterip ClusterIP 10.0.170.92 <none> 80/TCP 51s", "completion": "This text code Using Source IP | And hitting the ClusterIP from a pod in the same cluster:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | And hitting the ClusterIP from a pod in the same cluster:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12204"} {"prompt": "Explain this shell code:\n\nkubectl run busybox -it --image=busybox:1.28 --restart=Never --rm", "completion": "This shell code Using Source IP | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12205"} {"prompt": "Explain this text code:\n\nWaiting for pod default/busybox to be running, status 
is Pending, pod ready: false\nIf you don't see a command prompt, try pressing enter.", "completion": "This text code Using Source IP | You can then run a command inside that Pod:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | You can then run a command inside that Pod:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12206"} {"prompt": "Explain this shell code:\n\n# Run this inside the terminal from \"kubectl run\"\nip addr", "completion": "This shell code Using Source IP | …then use wget to query the local webserver", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | …then use wget to query the local webserver", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12207"} {"prompt": "Complete this text code:\n\n1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue\n link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope host lo\n valid_lft forever preferred_lft forever\n inet6 ::1/128 scope host\n valid_lft forever preferred_lft forever", "completion": "3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue\n link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff\n inet 10.244.3.8/24 scope global eth0\n valid_lft forever preferred_lft forever\n inet6 fe80::188a:84ff:feb0:26a5/64 scope link\n valid_lft forever preferred_lft forever", "language": "text", "type": "code-example", "context": "Using Source IP | …then use wget to query the local webserver", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_completion_12208"} {"prompt": "Explain this text code:\n\n1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue\n link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope host lo\n valid_lft forever preferred_lft forever\n inet6 ::1/128 scope host\n valid_lft forever preferred_lft forever\n3: eth0: 
<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue\n link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff\n inet 10.244.3.8/24 scope global eth0\n valid_lft forever preferred_lft forever\n inet6 fe80::188a:84ff:feb0:26a5/64 scope link\n valid_lft forever preferred_lft forever", "completion": "This text code Using Source IP | …then use wget to query the local webserver", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | …then use wget to query the local webserver", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12209"} {"prompt": "Explain this shell code:\n\n# Replace \"10.0.170.92\" with the IPv4 address of the Service named \"clusterip\"\nwget -qO - 10.0.170.92", "completion": "This shell code Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12210"} {"prompt": "Complete this text code:\n\nCLIENT VALUES:\nclient_address=10.244.3.8", "completion": "command=GET\n...", "language": "text", "type": "code-example", "context": "Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_completion_12211"} {"prompt": "Explain this text code:\n\nCLIENT VALUES:\nclient_address=10.244.3.8\ncommand=GET\n...", "completion": "This text code Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same 
node or in different nodes.", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12212"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort", "completion": "This shell code Using Source IP | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12213"} {"prompt": "Explain this shell code:\n\nNODEPORT=$(kubectl get -o jsonpath=\"{.spec.ports[0].nodePort}\" services nodeport)\nNODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type==\"InternalIP\")].address }')", "completion": "This shell code Using Source IP | If you're running on a cloud provider, you may need to open up a firewall-rule for the nodes:nodeport reported above. Now you can try reaching the Service from outside the cluster through the node port", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | If you're running on a cloud provider, you may need to open up a firewall-rule for the nodes:nodeport reported above. 
Now you can try reaching the Service from outside the cluster through the node port", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12214"} {"prompt": "Explain this shell code:\n\nfor node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done", "completion": "This shell code Using Source IP | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12215"} {"prompt": "Explain this text code:\n\nclient_address=10.180.1.1\nclient_address=10.240.0.5\nclient_address=10.240.0.3", "completion": "This text code Using Source IP | Note that these are not the correct client IPs, they're cluster internal IPs. This is what happens:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Note that these are not the correct client IPs, they're cluster internal IPs. 
This is what happens:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12216"} {"prompt": "Explain this shell code:\n\nkubectl patch svc nodeport -p '{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'", "completion": "This shell code Using Source IP | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12217"} {"prompt": "Explain this shell code:\n\nfor node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done", "completion": "This shell code Using Source IP | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12218"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer", "completion": "This shell code Using Source IP | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12219"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nloadbalancer LoadBalancer 10.0.65.118 203.0.113.140 80/TCP 5m", "completion": "This text code Using Source IP | Next, send a request to this Service's external-ip:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Next, send a request to this Service's external-ip:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12220"} {"prompt": "Explain this shell code:\n\nkubectl patch svc loadbalancer -p 
'{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'", "completion": "This shell code Using Source IP | You should immediately see the service.spec.healthCheckNodePort field allocated by Kubernetes:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | You should immediately see the service.spec.healthCheckNodePort field allocated by Kubernetes:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12221"} {"prompt": "Explain this shell code:\n\nkubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort", "completion": "This shell code Using Source IP | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12222"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nsource-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-node-6jst", "completion": "This text code Using Source IP | Use curl to fetch the /healthz endpoint on various nodes:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | Use curl to fetch the /healthz endpoint on various nodes:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12223"} {"prompt": "Explain this shell code:\n\n# Run this locally on a node you choose\ncurl localhost:32122/healthz", "completion": "This shell code Using Source IP | On a different node you might get a different result:", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | On a different node you might get a different result:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12224"} {"prompt": "Explain this shell code:\n\n# Run this locally on a node you choose\ncurl 
localhost:32122/healthz", "completion": "This shell code Using Source IP | A controller running on the control plane is responsible for allocating the cloud load balancer. The same controller also allocates HTTP health checks pointing to this port/path on each node. Wait abo", "language": "shell", "type": "code-example_explanation", "context": "Using Source IP | A controller running on the control plane is responsible for allocating the cloud load balancer. The same controller also allocates HTTP health checks pointing to this port/path on each node. Wait abo", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12225"} {"prompt": "Explain this bash code:\n\nkubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.10", "completion": "This bash code Using Source IP | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12226"} {"prompt": "Explain this text code:\n\n# Run this in a shell on the node you want to query.\ncurl http://localhost:10249/proxyMode", "completion": "This text code Using Source IP | The output is:", "language": "text", "type": "code-example_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12227"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080", "completion": "This bash code Using Source IP | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using Source IP | The output is:", "source": "https://kubernetes.io/docs/tutorials/services/source-ip/", "id": "k8s_00480_code_explain_12228"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "language": "yaml", "type": "service-definition", "context": "Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/security/service-accounts/", "id": "k8s_00482_code_completion_12242"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "completion": "This yaml code Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "language": "yaml", "type": "service-definition_explanation", "context": "Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/security/service-accounts/", "id": 
"k8s_00482_code_explain_12243"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This bash code kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/", "id": "k8s_00483_code_explain_12248"} {"prompt": "Complete this text code:\n\n# Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods\n \n # Create a cluster role named \"pod-reader\" with ResourceName specified\n kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n \n # Create a cluster role named \"foo\" with API Group specified\n kubectl create clusterrole foo --verb=get,list,watch 
--resource=rs.apps", "completion": " \n # Create a cluster role named \"foo\" with SubResource specified\n kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status\n \n # Create a cluster role name \"foo\" with NonResourceURL specified\n kubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*\n \n # Create a cluster role name \"monitoring\" with AggregationRule specified\n kubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "language": "text", "type": "code-example", "context": "kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/", "id": "k8s_00483_code_completion_12249"} {"prompt": "Explain this text code:\n\n# Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods\n \n # Create a cluster role named \"pod-reader\" with ResourceName specified\n kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n \n # Create a cluster role named \"foo\" with API Group specified\n kubectl create clusterrole foo --verb=get,list,watch --resource=rs.apps\n \n # Create a cluster role named \"foo\" with SubResource specified\n kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status\n \n # Create a cluster role name \"foo\" with NonResourceURL specified\n kubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*\n \n # Create a cluster role name \"monitoring\" with AggregationRule specified\n kubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This text code kubectl create clusterrole | An aggregation label selector for combining 
ClusterRoles.", "language": "text", "type": "code-example_explanation", "context": "kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/", "id": "k8s_00483_code_explain_12250"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This bash code kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/", "id": "k8s_00483_code_explain_12251"} {"prompt": "Complete this text code:\n\n# Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods\n \n # Create a cluster role named \"pod-reader\" with ResourceName specified\n kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n \n # Create a cluster role named \"foo\" with API Group specified\n kubectl create clusterrole foo --verb=get,list,watch --resource=rs.apps", "completion": " \n # Create a cluster role named \"foo\" with SubResource specified\n kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status\n \n # Create a cluster role name \"foo\" with NonResourceURL specified\n kubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*\n \n # Create a cluster role name \"monitoring\" with AggregationRule specified\n kubectl create clusterrole monitoring 
--aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "language": "text", "type": "code-example", "context": "kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/", "id": "k8s_00483_code_completion_12252"} {"prompt": "Explain this text code:\n\n# Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods\n \n # Create a cluster role named \"pod-reader\" with ResourceName specified\n kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n \n # Create a cluster role named \"foo\" with API Group specified\n kubectl create clusterrole foo --verb=get,list,watch --resource=rs.apps\n \n # Create a cluster role named \"foo\" with SubResource specified\n kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status\n \n # Create a cluster role name \"foo\" with NonResourceURL specified\n kubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*\n \n # Create a cluster role name \"monitoring\" with AggregationRule specified\n kubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This text code kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "language": "text", "type": "code-example_explanation", "context": "kubectl create clusterrole | An aggregation label selector for combining ClusterRoles.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_clusterrole/", "id": "k8s_00483_code_explain_12253"} {"prompt": "Explain this text code:\n\ncurl.exe -LO \"https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe\"", "completion": "This 
text code Install and Set Up kubectl on Windows | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12254"} {"prompt": "Explain this text code:\n\ncurl.exe -LO \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubectl.exe.sha256\"", "completion": "This text code Install and Set Up kubectl on Windows | Validate the kubectl binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12255"} {"prompt": "Explain this text code:\n\nCertUtil -hashfile kubectl.exe SHA256\ntype kubectl.exe.sha256", "completion": "This text code Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12256"} {"prompt": "Explain this text code:\n\n$(Get-FileHash -Algorithm SHA256 .\\kubectl.exe).Hash -eq $(Get-Content .\\kubectl.exe.sha256)", "completion": "This text code Install and Set Up kubectl on Windows | Append or prepend the kubectl binary folder to your PATH environment variable.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Append or prepend the kubectl binary folder to your PATH environment variable.", "source": 
"https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12257"} {"prompt": "Explain this text code:\n\n# If you're using cmd.exe, run: cd %USERPROFILE%\ncd ~", "completion": "This text code Install and Set Up kubectl on Windows | Create the .kube directory:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Create the .kube directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12258"} {"prompt": "Explain this text code:\n\nThe connection to the server <server-name:port> was refused - did you specify the right host or port?", "completion": "This text code Install and Set Up kubectl on Windows | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12259"} {"prompt": "Explain this bash code:\n\nkubectl completion powershell | Out-String | Invoke-Expression", "completion": "This bash code Install and Set Up kubectl on Windows | This command will regenerate the auto-completion script on every PowerShell start up. You can also add the generated script directly to your $PROFILE file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Windows | This command will regenerate the auto-completion script on every PowerShell start up. 
You can also add the generated script directly to your $PROFILE file.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12260"} {"prompt": "Explain this text code:\n\ncurl.exe -LO \"https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl-convert.exe\"", "completion": "This text code Install and Set Up kubectl on Windows | Validate the binary (optional).", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the binary (optional).", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12261"} {"prompt": "Explain this text code:\n\ncurl.exe -LO \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubectl-convert.exe.sha256\"", "completion": "This text code Install and Set Up kubectl on Windows | Validate the kubectl-convert binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12262"} {"prompt": "Explain this text code:\n\nCertUtil -hashfile kubectl-convert.exe SHA256\ntype kubectl-convert.exe.sha256", "completion": "This text code Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12263"} {"prompt": "Explain this text code:\n\n$($(CertUtil -hashfile .\\kubectl-convert.exe SHA256)[1] -replace \" \", \"\") -eq $(type 
.\\kubectl-convert.exe.sha256)", "completion": "This text code Install and Set Up kubectl on Windows | Append or prepend the kubectl-convert binary folder to your PATH environment variable.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Append or prepend the kubectl-convert binary folder to your PATH environment variable.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12264"} {"prompt": "Explain this text code:\n\ndel kubectl-convert.exe\ndel kubectl-convert.exe.sha256", "completion": "This text code Install and Set Up kubectl on Windows |", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows |", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12265"} {"prompt": "Explain this powershell code:\n\ncurl.exe -LO \"https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe\"", "completion": "This powershell code Install and Set Up kubectl on Windows | Validate the binary (optional)", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12266"} {"prompt": "Explain this powershell code:\n\ncurl.exe -LO \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubectl.exe.sha256\"", "completion": "This powershell code Install and Set Up kubectl on Windows | Validate the kubectl binary against the checksum file:", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12267"} {"prompt": "Explain this cmd code:\n\nCertUtil -hashfile 
kubectl.exe SHA256\ntype kubectl.exe.sha256", "completion": "This cmd code Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "language": "cmd", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12268"} {"prompt": "Explain this powershell code:\n\n$(Get-FileHash -Algorithm SHA256 .\\kubectl.exe).Hash -eq $(Get-Content .\\kubectl.exe.sha256)", "completion": "This powershell code Install and Set Up kubectl on Windows | Append or prepend the kubectl binary folder to your PATH environment variable.", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Append or prepend the kubectl binary folder to your PATH environment variable.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12269"} {"prompt": "Explain this powershell code:\n\n# If you're using cmd.exe, run: cd %USERPROFILE%\ncd ~", "completion": "This powershell code Install and Set Up kubectl on Windows | Create the .kube directory:", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Create the .kube directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12270"} {"prompt": "Explain this text code:\n\nThe connection to the server <server-name:port> was refused - did you specify the right host or port?", "completion": "This text code Install and Set Up kubectl on Windows | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the 
commands stated above.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12271"} {"prompt": "Explain this powershell code:\n\nkubectl completion powershell | Out-String | Invoke-Expression", "completion": "This powershell code Install and Set Up kubectl on Windows | This command will regenerate the auto-completion script on every PowerShell start up. You can also add the generated script directly to your $PROFILE file.", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | This command will regenerate the auto-completion script on every PowerShell start up. You can also add the generated script directly to your $PROFILE file.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12272"} {"prompt": "Explain this powershell code:\n\ncurl.exe -LO \"https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl-convert.exe\"", "completion": "This powershell code Install and Set Up kubectl on Windows | Validate the binary (optional).", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the binary (optional).", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12273"} {"prompt": "Explain this powershell code:\n\ncurl.exe -LO \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubectl-convert.exe.sha256\"", "completion": "This powershell code Install and Set Up kubectl on Windows | Validate the kubectl-convert binary against the checksum file:", "language": "powershell", "type": 
"code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12274"} {"prompt": "Explain this cmd code:\n\nCertUtil -hashfile kubectl-convert.exe SHA256\ntype kubectl-convert.exe.sha256", "completion": "This cmd code Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "language": "cmd", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12275"} {"prompt": "Explain this powershell code:\n\n$($(CertUtil -hashfile .\\kubectl-convert.exe SHA256)[1] -replace \" \", \"\") -eq $(type .\\kubectl-convert.exe.sha256)", "completion": "This powershell code Install and Set Up kubectl on Windows | Append or prepend the kubectl-convert binary folder to your PATH environment variable.", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Append or prepend the kubectl-convert binary folder to your PATH environment variable.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12276"} {"prompt": "Explain this powershell code:\n\ndel kubectl-convert.exe\ndel kubectl-convert.exe.sha256", "completion": "This powershell code Install and Set Up kubectl on Windows |", "language": "powershell", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows |", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12277"} {"prompt": "Explain this text code:\n\ncurl.exe -LO 
\"https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe\"", "completion": "This text code Install and Set Up kubectl on Windows | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12278"} {"prompt": "Explain this text code:\n\ncurl.exe -LO \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubectl.exe.sha256\"", "completion": "This text code Install and Set Up kubectl on Windows | Validate the kubectl binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12279"} {"prompt": "Explain this text code:\n\nCertUtil -hashfile kubectl.exe SHA256\ntype kubectl.exe.sha256", "completion": "This text code Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12280"} {"prompt": "Explain this text code:\n\n$(Get-FileHash -Algorithm SHA256 .\\kubectl.exe).Hash -eq $(Get-Content .\\kubectl.exe.sha256)", "completion": "This text code Install and Set Up kubectl on Windows | Append or prepend the kubectl binary folder to your PATH environment variable.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Append or prepend the kubectl binary folder to 
your PATH environment variable.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12281"} {"prompt": "Explain this text code:\n\n# If you're using cmd.exe, run: cd %USERPROFILE%\ncd ~", "completion": "This text code Install and Set Up kubectl on Windows | Create the .kube directory:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Create the .kube directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12282"} {"prompt": "Explain this bash code:\n\nkubectl completion powershell | Out-String | Invoke-Expression", "completion": "This bash code Install and Set Up kubectl on Windows | This command will regenerate the auto-completion script on every PowerShell start up. You can also add the generated script directly to your $PROFILE file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on Windows | This command will regenerate the auto-completion script on every PowerShell start up. 
You can also add the generated script directly to your $PROFILE file.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12283"} {"prompt": "Explain this text code:\n\ncurl.exe -LO \"https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl-convert.exe\"", "completion": "This text code Install and Set Up kubectl on Windows | Validate the binary (optional).", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the binary (optional).", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12284"} {"prompt": "Explain this text code:\n\ncurl.exe -LO \"https://dl.k8s.io/v1.34.0/bin/windows/amd64/kubectl-convert.exe.sha256\"", "completion": "This text code Install and Set Up kubectl on Windows | Validate the kubectl-convert binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12285"} {"prompt": "Explain this text code:\n\nCertUtil -hashfile kubectl-convert.exe SHA256\ntype kubectl-convert.exe.sha256", "completion": "This text code Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Using PowerShell to automate the verification using the -eq operator to get a True or False result:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12286"} {"prompt": "Explain this text code:\n\n$($(CertUtil -hashfile .\\kubectl-convert.exe SHA256)[1] -replace \" \", \"\") -eq $(type 
.\\kubectl-convert.exe.sha256)", "completion": "This text code Install and Set Up kubectl on Windows | Append or prepend the kubectl-convert binary folder to your PATH environment variable.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows | Append or prepend the kubectl-convert binary folder to your PATH environment variable.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12287"} {"prompt": "Explain this text code:\n\ndel kubectl-convert.exe\ndel kubectl-convert.exe.sha256", "completion": "This text code Install and Set Up kubectl on Windows |", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on Windows |", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/", "id": "k8s_00484_code_explain_12288"} {"prompt": "Explain this text code:\n\n# List the clusters that kubectl knows about\n kubectl config get-clusters", "completion": "This text code kubectl config get-clusters | help for get-clusters", "language": "text", "type": "code-example_explanation", "context": "kubectl config get-clusters | help for get-clusters", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-clusters/", "id": "k8s_00485_code_explain_12289"} {"prompt": "Explain this text code:\n\n# List the clusters that kubectl knows about\n kubectl config get-clusters", "completion": "This text code kubectl config get-clusters | help for get-clusters", "language": "text", "type": "code-example_explanation", "context": "kubectl config get-clusters | help for get-clusters", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-clusters/", "id": "k8s_00485_code_explain_12290"} {"prompt": "Explain this bash code:\n\nkubectl explain TYPE [--recursive=FALSE|TRUE] [--api-version=api-version-group] 
[-o|--output=plaintext|plaintext-openapiv2]", "completion": "This bash code kubectl explain | Get different explanations for particular API version (API group/version)", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl explain | Get different explanations for particular API version (API group/version)", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_explain/", "id": "k8s_00487_code_explain_12291"} {"prompt": "Complete this text code:\n\n# Get the documentation of the resource and its fields\n kubectl explain pods\n \n # Get all the fields in the resource\n kubectl explain pods --recursive\n \n # Get the explanation for deployment in supported api versions", "completion": " kubectl explain deployments --api-version=apps/v1\n \n # Get the documentation of a specific field of a resource\n kubectl explain pods.spec.containers\n \n # Get the documentation of resources in different format\n kubectl explain deployment --output=plaintext-openapiv2", "language": "text", "type": "code-example", "context": "kubectl explain | Get different explanations for particular API version (API group/version)", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_explain/", "id": "k8s_00487_code_completion_12292"} {"prompt": "Explain this text code:\n\n# Get the documentation of the resource and its fields\n kubectl explain pods\n \n # Get all the fields in the resource\n kubectl explain pods --recursive\n \n # Get the explanation for deployment in supported api versions\n kubectl explain deployments --api-version=apps/v1\n \n # Get the documentation of a specific field of a resource\n kubectl explain pods.spec.containers\n \n # Get the documentation of resources in different format\n kubectl explain deployment --output=plaintext-openapiv2", "completion": "This text code kubectl explain | Get different explanations for particular API version (API group/version)", "language": "text", "type": 
"code-example_explanation", "context": "kubectl explain | Get different explanations for particular API version (API group/version)", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_explain/", "id": "k8s_00487_code_explain_12293"} {"prompt": "Explain this bash code:\n\nkubectl explain TYPE [--recursive=FALSE|TRUE] [--api-version=api-version-group] [-o|--output=plaintext|plaintext-openapiv2]", "completion": "This bash code kubectl explain | Get different explanations for particular API version (API group/version)", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl explain | Get different explanations for particular API version (API group/version)", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_explain/", "id": "k8s_00487_code_explain_12294"} {"prompt": "Complete this text code:\n\n# Get the documentation of the resource and its fields\n kubectl explain pods\n \n # Get all the fields in the resource\n kubectl explain pods --recursive\n \n # Get the explanation for deployment in supported api versions", "completion": " kubectl explain deployments --api-version=apps/v1\n \n # Get the documentation of a specific field of a resource\n kubectl explain pods.spec.containers\n \n # Get the documentation of resources in different format\n kubectl explain deployment --output=plaintext-openapiv2", "language": "text", "type": "code-example", "context": "kubectl explain | Get different explanations for particular API version (API group/version)", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_explain/", "id": "k8s_00487_code_completion_12295"} {"prompt": "Explain this text code:\n\n# Get the documentation of the resource and its fields\n kubectl explain pods\n \n # Get all the fields in the resource\n kubectl explain pods --recursive\n \n # Get the explanation for deployment in supported api versions\n kubectl explain deployments --api-version=apps/v1\n \n # Get the 
documentation of a specific field of a resource\n kubectl explain pods.spec.containers\n \n # Get the documentation of resources in different format\n kubectl explain deployment --output=plaintext-openapiv2", "completion": "This text code kubectl explain | Get different explanations for particular API version (API group/version)", "language": "text", "type": "code-example_explanation", "context": "kubectl explain | Get different explanations for particular API version (API group/version)", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_explain/", "id": "k8s_00487_code_explain_12296"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12297"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the 
Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12298"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12299"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nfrontend 3 3 3 6s", "completion": "This text code ReplicaSet | You can also check on the state of the ReplicaSet:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | You can also check on the state of the ReplicaSet:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12300"} {"prompt": "Complete this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:", "completion": " Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller 
Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "language": "text", "type": "code-example", "context": "ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12301"} {"prompt": "Explain this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:\n Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "completion": "This text code ReplicaSet | And lastly you can check for the Pods brought up:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12302"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m", "completion": "frontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "language": "text", "type": "code-example", "context": "ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. 
To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12303"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m\nfrontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "completion": "This text code ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12304"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12305"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12306"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12307"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12308"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash 
code ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12309"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m", "completion": "frontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "language": "text", "type": "code-example", "context": "ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12310"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m\nfrontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "completion": "This text code ReplicaSet | If you create the Pods first:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12311"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code ReplicaSet | And then create the ReplicaSet however:", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12312"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12313"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s", "completion": "pod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "language": "text", "type": "code-example", "context": "ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12314"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s\npod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "completion": "This text code ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12315"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", 
"language": "bash", "type": "kubectl-command", "context": "ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12316"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12317"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12318"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12319"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12320"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n 
maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12321"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This bash code ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12322"} {"prompt": "Explain this bash code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This bash code ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. 
While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12323"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12324"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12325"} {"prompt": "Explain this shell 
code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This shell code ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "shell", "type": "code-example_explanation", "context": "ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12326"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nfrontend 3 3 3 6s", "completion": "This text code ReplicaSet | You can also check on the state of the ReplicaSet:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | You can also check on the state of the ReplicaSet:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12327"} {"prompt": "Complete this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:", "completion": " Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "language": "text", "type": "code-example", "context": "ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12328"} {"prompt": "Explain this text code:\n\nName: 
frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:\n Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "completion": "This text code ReplicaSet | And lastly you can check for the Pods brought up:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12329"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m", "completion": "frontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "language": "text", "type": "code-example", "context": "ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12330"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m\nfrontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "completion": "This text code ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. 
To do this, get the yaml of one of the Pods running:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12331"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12332"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12333"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12334"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will 
immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12335"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This shell code ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "shell", "type": "code-example_explanation", "context": "ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12336"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m", "completion": "frontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "language": "text", "type": "code-example", "context": "ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12337"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m\nfrontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "completion": "This text code ReplicaSet | If you create the Pods first:", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12338"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This shell code ReplicaSet | And then create the ReplicaSet however:", "language": "shell", 
"type": "code-example_explanation", "context": "ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12339"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This shell code ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "shell", "type": "code-example_explanation", "context": "ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12340"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s", "completion": "pod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "language": "text", "type": "code-example", "context": "ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12341"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s\npod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "completion": "This text code ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "language": "text", "type": "code-example_explanation", "context": "ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12342"} {"prompt": "Complete this shell 
code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "shell", "type": "code-example", "context": "ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12343"} {"prompt": "Explain this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "shell", "type": "code-example_explanation", "context": "ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. 
When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12344"} {"prompt": "Complete this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "shell", "type": "code-example", "context": "ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12345"} {"prompt": "Explain this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "shell", "type": "code-example_explanation", "context": "ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12346"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12347"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12348"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This shell code ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's 
easier!)", "language": "shell", "type": "code-example_explanation", "context": "ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12349"} {"prompt": "Explain this shell code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This shell code ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "shell", "type": "code-example_explanation", "context": "ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12350"} {"prompt": "Complete this yaml code:\n\ncontrollers/frontend.yaml\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12351"} {"prompt": "Explain this yaml code:\n\ncontrollers/frontend.yaml\napiVersion: apps/v1\nkind: 
ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12352"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12353"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: 
frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12354"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12355"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12356"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12357"} {"prompt": "Complete this yaml code:\n\npods/pod-rs.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12358"} {"prompt": "Explain this yaml code:\n\npods/pod-rs.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12359"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n 
labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12360"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12361"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | 
The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12362"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code ReplicaSet | And then create the ReplicaSet however:", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12363"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12364"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. 
When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12365"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12366"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12367"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12368"} {"prompt": "Complete this yaml code:\n\ncontrollers/hpa-rs.yaml\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12369"} {"prompt": "Explain this yaml code:\n\ncontrollers/hpa-rs.yaml\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: 
ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12370"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_completion_12371"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12372"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This bash code ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12373"} {"prompt": "Explain this bash code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This bash code ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. 
While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/", "id": "k8s_00488_code_explain_12374"} {"prompt": "Complete this text code:\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties\nwget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties", "completion": "\n# Create the ConfigMap\nkubectl create configmap game-config --from-file=configure-pod-container/configmap/", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. You can display details of the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12375"} {"prompt": "Explain this text code:\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties\nwget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties\n\n# Create the ConfigMap\nkubectl create configmap game-config --from-file=configure-pod-container/configmap/", "completion": "This text code Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. 
You can display details of the", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. You can display details of the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12376"} {"prompt": "Complete this text code:\n\nName: game-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3", "completion": "enemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | The game.properties and ui.properties files in the configure-pod-container/configmap/ directory are represented in the data section of the ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12377"} {"prompt": "Explain this text code:\n\nName: game-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", "completion": "This text code Configure a Pod to Use a ConfigMap | The game.properties and ui.properties files in the configure-pod-container/configmap/ directory are represented in the data section of the ConfigMap.", "language": "text", "type": 
"code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | The game.properties and ui.properties files in the configure-pod-container/configmap/ directory are represented in the data section of the ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12378"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:52:05Z\n name: game-config\n namespace: default\n resourceVersion: \"516\"\n uid: b4952dc3-d670-11e5-8cd0-68f728db1985\ndata:\n game.properties: |\n enemies=aliens", "completion": " lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30 \n ui.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true\n how.nice.to.look=fairlyNice", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12379"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:52:05Z\n name: game-config\n namespace: default\n resourceVersion: \"516\"\n uid: b4952dc3-d670-11e5-8cd0-68f728db1985\ndata:\n game.properties: |\n enemies=aliens\n lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30 \n ui.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true\n how.nice.to.look=fairlyNice", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12380"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12381"} {"prompt": "Complete this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:", "completion": "----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | You can pass in the --from-file argument multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12382"} {"prompt": "Explain this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30", "completion": "This text code Configure a Pod to Use a ConfigMap | You can pass in the --from-file argument multiple 
times to create a ConfigMap from multiple data sources.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | You can pass in the --from-file argument multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12383"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties --from-file=configure-pod-container/configmap/ui.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | You can display details of the game-config-2 ConfigMap using the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | You can display details of the game-config-2 ConfigMap using the following command:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12384"} {"prompt": "Complete this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3", "completion": "enemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Use the option --from-env-file to create a ConfigMap from an env-file, for example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12385"} {"prompt": "Explain this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: 
<none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", "completion": "This text code Configure a Pod to Use a ConfigMap | Use the option --from-env-file to create a ConfigMap from an env-file, for example:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Use the option --from-env-file to create a ConfigMap from an env-file, for example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12386"} {"prompt": "Complete this text code:\n\n# Env-files contain a list of environment variables.\n# These syntax rules apply:\n# Each line in an env file has to be in VAR=VAL format.\n# Lines beginning with # (i.e. comments) are ignored.\n# Blank lines are ignored.\n# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties", "completion": "wget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties\n\n# The env-file `game-env-file.properties` looks like below\ncat configure-pod-container/configmap/game-env-file.properties\nenemies=aliens\nlives=3\nallowed=\"true\"\n\n# This comment and the empty line above it are ignored", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. 
View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12387"} {"prompt": "Explain this text code:\n\n# Env-files contain a list of environment variables.\n# These syntax rules apply:\n# Each line in an env file has to be in VAR=VAL format.\n# Lines beginning with # (i.e. comments) are ignored.\n# Blank lines are ignored.\n# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties\nwget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties\n\n# The env-file `game-env-file.properties` looks like below\ncat configure-pod-container/configmap/game-env-file.properties\nenemies=aliens\nlives=3\nallowed=\"true\"\n\n# This comment and the empty line above it are ignored", "completion": "This text code Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12388"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-env-file \\\n --from-env-file=configure-pod-container/configmap/game-env-file.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. 
View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12389"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:36:28Z\n name: game-config-env-file\n namespace: default", "completion": " resourceVersion: \"809965\"\n uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n enemies: aliens\n lives: \"3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12390"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:36:28Z\n name: game-config-env-file\n namespace: default\n resourceVersion: \"809965\"\n uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n enemies: aliens\n lives: \"3\"", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12391"} {"prompt": "Explain this bash code:\n\nkubectl create configmap config-multi-env-files \\\n --from-env-file=configure-pod-container/configmap/game-env-file.properties \\\n 
--from-env-file=configure-pod-container/configmap/ui-env-file.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12392"} {"prompt": "Explain this bash code:\n\nkubectl get configmap config-multi-env-files -o yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | where the output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | where the output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12393"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:38:34Z\n name: config-multi-env-files\n namespace: default\n resourceVersion: \"810136\"", "completion": " uid: 252c4572-eb35-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n color: purple\n enemies: aliens\n how: fairlyNice\n lives: \"3\"\n textmode: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12394"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:38:34Z\n name: config-multi-env-files\n namespace: default\n resourceVersion: \"810136\"\n uid: 252c4572-eb35-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n color: purple\n 
enemies: aliens\n how: fairlyNice\n lives: \"3\"\n textmode: \"true\"", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12395"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-3 --from-file=<my-key-name>=<path-to-file>", "completion": "This bash code Configure a Pod to Use a ConfigMap | where <my-key-name> is the key you want to use in the ConfigMap and <path-to-file> is the location of the data source file you want the key to represent.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | where <my-key-name> is the key you want to use in the ConfigMap and <path-to-file> is the location of the data source file you want the key to represent.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12396"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-3 --from-file=game-special-key=configure-pod-container/configmap/game.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12397"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: 
ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:54:22Z\n name: game-config-3\n namespace: default\n resourceVersion: \"530\"\n uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "completion": "data:\n game-special-key: |\n enemies=aliens\n lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12398"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:54:22Z\n name: game-config-3\n namespace: default\n resourceVersion: \"530\"\n uid: 05f8da22-d671-11e5-8cd0-68f728db1985\ndata:\n game-special-key: |\n enemies=aliens\n lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12399"} {"prompt": "Explain this bash code:\n\nkubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "completion": "This bash code Configure a Pod to Use a ConfigMap | You can pass in multiple key-value 
pairs. Each pair provided on the command line is represented as a separate entry in the data section of the ConfigMap.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | You can pass in multiple key-value pairs. Each pair provided on the command line is represented as a separate entry in the data section of the ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12400"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T19:14:38Z\n name: special-config", "completion": " namespace: default\n resourceVersion: \"651\"\n uid: dadce046-d673-11e5-8cd0-68f728db1985\ndata:\n special.how: very\n special.type: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. You should specify the generators in a kustomization.yaml file within a directory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12401"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T19:14:38Z\n name: special-config\n namespace: default\n resourceVersion: \"651\"\n uid: dadce046-d673-11e5-8cd0-68f728db1985\ndata:\n special.how: very\n special.type: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. 
You should specify the generators in a kustomization.yaml file within a directory.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. You should specify the generators in a kustomization.yaml file within a directory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12402"} {"prompt": "Complete this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-4\n options:", "completion": " labels:\n game-config: config-4\n files:\n - configure-pod-container/configmap/game.properties\nEOF", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12403"} {"prompt": "Explain this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-4\n options:\n labels:\n game-config: config-4\n files:\n - configure-pod-container/configmap/game.properties\nEOF", "completion": "This text code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12404"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\ngame-config-4-m9dm2f92bt 1 37s", "completion": "This text code Configure a Pod 
to Use a ConfigMap | and also:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | and also:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12405"} {"prompt": "Explain this bash code:\n\nkubectl describe configmaps/game-config-4-m9dm2f92bt", "completion": "This bash code Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a new ConfigMap is generated each time the content is modified.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a new ConfigMap is generated each time the content is modified.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12406"} {"prompt": "Complete this text code:\n\nName: game-config-4-m9dm2f92bt\nNamespace: default\nLabels: game-config=config-4\nAnnotations: kubectl.kubernetes.io/last-applied-configuration:\n {\"apiVersion\":\"v1\",\"data\":{\"game.properties\":\"enemies=aliens\\nlives=3\\nenemies.cheat=true\\nenemies.cheat.level=noGoodRotten\\nsecret.code.p...\n\nData\n====\ngame.properties:", "completion": "----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nEvents: <none>", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. 
This ensures that a new ConfigMap is generated each time the content is modified.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12407"} {"prompt": "Explain this text code:\n\nName: game-config-4-m9dm2f92bt\nNamespace: default\nLabels: game-config=config-4\nAnnotations: kubectl.kubernetes.io/last-applied-configuration:\n {\"apiVersion\":\"v1\",\"data\":{\"game.properties\":\"enemies=aliens\\nlives=3\\nenemies.cheat=true\\nenemies.cheat.level=noGoodRotten\\nsecret.code.p...\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nEvents: <none>", "completion": "This text code Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a new ConfigMap is generated each time the content is modified.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. 
This ensures that a new ConfigMap is generated each time the content is modified.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12408"} {"prompt": "Complete this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-5\n options:", "completion": " labels:\n game-config: config-5\n files:\n - game-special-key=configure-pod-container/configmap/game.properties\nEOF", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12409"} {"prompt": "Explain this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-5\n options:\n labels:\n game-config: config-5\n files:\n - game-special-key=configure-pod-container/configmap/game.properties\nEOF", "completion": "This text code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12410"} {"prompt": "Complete this text code:\n\n---\n# kustomization.yaml contents for creating a ConfigMap from literals\nconfigMapGenerator:", "completion": "- name: special-config-2\n literals:\n - special.how=very\n - special.type=charm", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12411"} {"prompt": "Explain this text code:\n\n---\n# kustomization.yaml contents for creating a ConfigMap from literals\nconfigMapGenerator:\n- name: special-config-2\n literals:\n - special.how=very\n - special.type=charm", "completion": "This text code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12412"} {"prompt": "Explain this bash code:\n\nkubectl delete configmap special-config\nkubectl delete configmap env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now that you have learned to define ConfigMaps, you can move on to the next section, and learn how to use these objects with Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now that you have learned to define ConfigMaps, you can move on to the next section, and learn how to use these objects with Pods.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12413"} {"prompt": "Explain this bash code:\n\nkubectl create configmap special-config --from-literal=special.how=very", "completion": "This bash code Configure a Pod to Use a ConfigMap | Assign the special.how value defined in the ConfigMap to the SPECIAL_LEVEL_KEY environment variable in the Pod specification.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Assign the special.how value defined in 
the ConfigMap to the SPECIAL_LEVEL_KEY environment variable in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12414"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]", "completion": " env:\n # Define the environment variable\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n name: special-config\n # Specify the key associated with the value\n key: special.how\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12415"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n # Define the environment variable\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n name: special-config\n # Specify the key associated with the value\n key: special.how\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12416"} {"prompt": "Explain this bash code:\n\nkubectl create -f 
https://kubernetes.io/examples/pods/pod-single-configmap-env-variable.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variable SPECIAL_LEVEL_KEY=very.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variable SPECIAL_LEVEL_KEY=very.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12417"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very", "completion": "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: env-config\n namespace: default\ndata:\n log_level: INFO", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12418"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: env-config\n namespace: default\ndata:\n log_level: INFO", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12419"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/configmap/configmaps.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Define the environment variables in the Pod specification.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Define the environment variables in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12420"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12421"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12422"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-multiple-configmap-env-variable.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now, the 
Pod's output includes environment variables SPECIAL_LEVEL_KEY=very and LOG_LEVEL=INFO.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL_KEY=very and LOG_LEVEL=INFO.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12423"} {"prompt": "Explain this bash code:\n\nkubectl delete pod dapi-test-pod --now\nkubectl delete configmap special-config\nkubectl delete configmap env-config", "completion": "This bash code Configure a Pod to Use a ConfigMap | Create a ConfigMap containing multiple key-value pairs.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Create a ConfigMap containing multiple key-value pairs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12424"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config", "completion": " namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12425"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": 
"k8s_00489_code_explain_12426"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Use envFrom to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Use envFrom to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12427"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:", "completion": " - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12428"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": 
"k8s_00489_code_explain_12429"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-envFrom.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL=very and SPECIAL_TYPE=charm.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL=very and SPECIAL_TYPE=charm.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12430"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12431"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | 
Create that Pod, by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12432"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valueFrom.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | That pod produces the following output from the test-container container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | That pod produces the following output from the test-container container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12433"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config", "completion": " namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12434"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12435"} {"prompt": "Explain this bash code:\n\nkubectl create -f 
https://kubernetes.io/examples/configmap/configmap-multikeys.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Add the ConfigMap name under the volumes section of the Pod specification. This adds the ConfigMap data to the directory specified as volumeMounts.mountPath (in this case, /etc/config). The command se", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Add the ConfigMap name under the volumes section of the Pod specification. This adds the ConfigMap data to the directory specified as volumeMounts.mountPath (in this case, /etc/config). The command se", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12436"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap containing the files you want\n # to add to the container\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12437"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap containing the files you want\n # to add to the container\n 
name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12438"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-volume.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | When the pod runs, the command ls /etc/config/ produces the output below:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | When the pod runs, the command ls /etc/config/ produces the output below:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12439"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]\n volumeMounts:", "completion": " - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n path: keys\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12440"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n 
volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n path: keys\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12441"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-volume-specific-key.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | When the pod runs, the command cat /etc/config/keys produces the output below:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | When the pod runs, the command cat /etc/config/keys produces the output below:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12442"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2016-02-18T19:14:38Z\n name: example-config\n namespace: default\ndata:", "completion": " # example of a simple property defined using --from-literal\n example.property.1: hello\n example.property.2: world\n # example of a complex property defined using --from-file\n example.property.file: |-\n property.1=value-1\n property.2=value-2\n property.3=value-3", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. 
Both text and binary data sources can be comb", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12443"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2016-02-18T19:14:38Z\n name: example-config\n namespace: default\ndata:\n # example of a simple property defined using --from-literal\n example.property.1: hello\n example.property.2: world\n # example of a complex property defined using --from-file\n example.property.file: |-\n property.1=value-1\n property.2=value-2\n property.3=value-3", "completion": "This yaml code Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. Both text and binary data sources can be comb", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. Both text and binary data sources can be comb", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12444"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox", "completion": " command: [\"/bin/sh\", \"-c\", \"env\"]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: a-config\n key: akey\n optional: true # mark the variable as optional\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. 
If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the o", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12445"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"env\"]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: a-config\n key: akey\n optional: true # mark the variable as optional\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the o", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. 
If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the o", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12446"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"ls /etc/config\"]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: no-config\n optional: true # mark the source ConfigMap as optional\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). If you reference", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12447"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"ls /etc/config\"]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: no-config\n optional: true # mark the source ConfigMap as optional\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). 
If you reference", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). If you reference", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12448"} {"prompt": "Explain this text code:\n\nLASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE\n0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames {kubelet, 127.0.0.1} Keys [1badkey, 2alsobad] from the EnvFrom configMap default/myconfig were skipped since they are considered invalid environment variable names.", "completion": "This text code Configure a Pod to Use a ConfigMap | ConfigMaps reside in a specific Namespace. Pods can only refer to ConfigMaps that are in the same namespace as the Pod.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | ConfigMaps reside in a specific Namespace. 
Pods can only refer to ConfigMaps that are in the same namespace as the Pod.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12449"} {"prompt": "Complete this bash code:\n\nkubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \\\n configmaps/game-config-env-file\nkubectl delete pod dapi-test-pod --now", "completion": "\n# You might already have removed the next set\nkubectl delete configmaps/special-config configmaps/env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "language": "bash", "type": "kubectl-command", "context": "Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12450"} {"prompt": "Explain this bash code:\n\nkubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \\\n configmaps/game-config-env-file\nkubectl delete pod dapi-test-pod --now\n\n# You might already have removed the next set\nkubectl delete configmaps/special-config configmaps/env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "completion": "This bash code Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12451"} {"prompt": "Complete this shell code:\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game.properties -O 
configure-pod-container/configmap/game.properties\nwget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties", "completion": "\n# Create the ConfigMap\nkubectl create configmap game-config --from-file=configure-pod-container/configmap/", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. You can display details of the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12452"} {"prompt": "Explain this shell code:\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties\nwget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties\n\n# Create the ConfigMap\nkubectl create configmap game-config --from-file=configure-pod-container/configmap/", "completion": "This shell code Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. You can display details of the", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. 
You can display details of the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12453"} {"prompt": "Complete this text code:\n\nName: game-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3", "completion": "enemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | The game.properties and ui.properties files in the configure-pod-container/configmap/ directory are represented in the data section of the ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12454"} {"prompt": "Explain this text code:\n\nName: game-config\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", "completion": "This text code Configure a Pod to Use a ConfigMap | The game.properties and ui.properties files in the configure-pod-container/configmap/ directory are represented in the data section of the ConfigMap.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | The game.properties and ui.properties files in the configure-pod-container/configmap/ directory are represented in the data section of the ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": 
"k8s_00489_code_explain_12455"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:52:05Z\n name: game-config\n namespace: default\n resourceVersion: \"516\"\n uid: b4952dc3-d670-11e5-8cd0-68f728db1985\ndata:\n game.properties: |\n enemies=aliens", "completion": " lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30 \n ui.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true\n how.nice.to.look=fairlyNice", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12456"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:52:05Z\n name: game-config\n namespace: default\n resourceVersion: \"516\"\n uid: b4952dc3-d670-11e5-8cd0-68f728db1985\ndata:\n game.properties: |\n enemies=aliens\n lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30 \n ui.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true\n how.nice.to.look=fairlyNice", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": 
"k8s_00489_code_explain_12457"} {"prompt": "Explain this shell code:\n\nkubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties", "completion": "This shell code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12458"} {"prompt": "Complete this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:", "completion": "----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | You can pass in the --from-file argument multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12459"} {"prompt": "Explain this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30", "completion": "This text code Configure a Pod to Use a ConfigMap | You can pass in the --from-file argument multiple times to create a ConfigMap from multiple data sources.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | You can pass in the --from-file argument multiple times to create a ConfigMap from multiple data sources.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12460"} {"prompt": "Explain this shell code:\n\nkubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties --from-file=configure-pod-container/configmap/ui.properties", "completion": "This shell code Configure a Pod to Use a ConfigMap | You can display details of the game-config-2 ConfigMap using the following command:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | You can display details of the game-config-2 ConfigMap using the following command:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12461"} {"prompt": "Complete this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3", "completion": "enemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Use the option --from-env-file to create a ConfigMap from an env-file, for example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12462"} {"prompt": "Explain this text code:\n\nName: game-config-2\nNamespace: default\nLabels: <none>\nAnnotations: <none>\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nui.properties:\n----\ncolor.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice", 
"completion": "This text code Configure a Pod to Use a ConfigMap | Use the option --from-env-file to create a ConfigMap from an env-file, for example:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Use the option --from-env-file to create a ConfigMap from an env-file, for example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12463"} {"prompt": "Complete this shell code:\n\n# Env-files contain a list of environment variables.\n# These syntax rules apply:\n# Each line in an env file has to be in VAR=VAL format.\n# Lines beginning with # (i.e. comments) are ignored.\n# Blank lines are ignored.\n# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties", "completion": "wget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties\n\n# The env-file `game-env-file.properties` looks like below\ncat configure-pod-container/configmap/game-env-file.properties\nenemies=aliens\nlives=3\nallowed=\"true\"\n\n# This comment and the empty line above it are ignored", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12464"} {"prompt": "Explain this shell code:\n\n# Env-files contain a list of environment variables.\n# These syntax rules apply:\n# Each line in an env file has to be in VAR=VAL format.\n# Lines beginning with # (i.e. 
comments) are ignored.\n# Blank lines are ignored.\n# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties\nwget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties\n\n# The env-file `game-env-file.properties` looks like below\ncat configure-pod-container/configmap/game-env-file.properties\nenemies=aliens\nlives=3\nallowed=\"true\"\n\n# This comment and the empty line above it are ignored", "completion": "This shell code Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12465"} {"prompt": "Explain this shell code:\n\nkubectl create configmap game-config-env-file \\\n --from-env-file=configure-pod-container/configmap/game-env-file.properties", "completion": "This shell code Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. 
View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12466"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:36:28Z\n name: game-config-env-file\n namespace: default", "completion": " resourceVersion: \"809965\"\n uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n enemies: aliens\n lives: \"3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12467"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:36:28Z\n name: game-config-env-file\n namespace: default\n resourceVersion: \"809965\"\n uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n enemies: aliens\n lives: \"3\"", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12468"} {"prompt": "Explain this shell code:\n\nkubectl create configmap config-multi-env-files \\\n --from-env-file=configure-pod-container/configmap/game-env-file.properties \\\n 
--from-env-file=configure-pod-container/configmap/ui-env-file.properties", "completion": "This shell code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12469"} {"prompt": "Explain this shell code:\n\nkubectl get configmap config-multi-env-files -o yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | where the output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | where the output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12470"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:38:34Z\n name: config-multi-env-files\n namespace: default\n resourceVersion: \"810136\"", "completion": " uid: 252c4572-eb35-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n color: purple\n enemies: aliens\n how: fairlyNice\n lives: \"3\"\n textmode: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12471"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:38:34Z\n name: config-multi-env-files\n namespace: default\n resourceVersion: \"810136\"\n uid: 252c4572-eb35-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n color: purple\n 
enemies: aliens\n how: fairlyNice\n lives: \"3\"\n textmode: \"true\"", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12472"} {"prompt": "Explain this shell code:\n\nkubectl create configmap game-config-3 --from-file=<my-key-name>=<path-to-file>", "completion": "This shell code Configure a Pod to Use a ConfigMap | where <my-key-name> is the key you want to use in the ConfigMap and <path-to-file> is the location of the data source file you want the key to represent.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | where <my-key-name> is the key you want to use in the ConfigMap and <path-to-file> is the location of the data source file you want the key to represent.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12473"} {"prompt": "Explain this shell code:\n\nkubectl create configmap game-config-3 --from-file=game-special-key=configure-pod-container/configmap/game.properties", "completion": "This shell code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12474"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: 
ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:54:22Z\n name: game-config-3\n namespace: default\n resourceVersion: \"530\"\n uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "completion": "data:\n game-special-key: |\n  enemies=aliens\n  lives=3\n  enemies.cheat=true\n  enemies.cheat.level=noGoodRotten\n  secret.code.passphrase=UUDDLRLRBABAS\n  secret.code.allowed=true\n  secret.code.lives=30", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12475"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:54:22Z\n name: game-config-3\n namespace: default\n resourceVersion: \"530\"\n uid: 05f8da22-d671-11e5-8cd0-68f728db1985\ndata:\n game-special-key: |\n  enemies=aliens\n  lives=3\n  enemies.cheat=true\n  enemies.cheat.level=noGoodRotten\n  secret.code.passphrase=UUDDLRLRBABAS\n  secret.code.allowed=true\n  secret.code.lives=30", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12476"} {"prompt": "Explain this shell code:\n\nkubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "completion": "This shell code Configure a Pod to Use a ConfigMap | You can pass in multiple key-value 
pairs. Each pair provided on the command line is represented as a separate entry in the data section of the ConfigMap.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | You can pass in multiple key-value pairs. Each pair provided on the command line is represented as a separate entry in the data section of the ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12477"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T19:14:38Z\n name: special-config", "completion": " namespace: default\n resourceVersion: \"651\"\n uid: dadce046-d673-11e5-8cd0-68f728db1985\ndata:\n special.how: very\n special.type: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. You should specify the generators in a kustomization.yaml file within a directory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12478"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T19:14:38Z\n name: special-config\n namespace: default\n resourceVersion: \"651\"\n uid: dadce046-d673-11e5-8cd0-68f728db1985\ndata:\n special.how: very\n special.type: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. 
You should specify the generators in a kustomization.yaml file within a directory.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. You should specify the generators in a kustomization.yaml file within a directory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12479"} {"prompt": "Complete this shell code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-4\n  options:", "completion": "    labels:\n      game-config: config-4\n  files:\n  - configure-pod-container/configmap/game.properties\nEOF", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12480"} {"prompt": "Explain this shell code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-4\n  options:\n    labels:\n      game-config: config-4\n  files:\n  - configure-pod-container/configmap/game.properties\nEOF", "completion": "This shell code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12481"} {"prompt": "Explain this text code:\n\nNAME DATA AGE\ngame-config-4-m9dm2f92bt 1 37s", "completion": "This text code Configure a 
Pod to Use a ConfigMap | and also:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | and also:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12482"} {"prompt": "Explain this shell code:\n\nkubectl describe configmaps/game-config-4-m9dm2f92bt", "completion": "This shell code Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a new ConfigMap is generated each time the content is modified.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a new ConfigMap is generated each time the content is modified.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12483"} {"prompt": "Complete this text code:\n\nName: game-config-4-m9dm2f92bt\nNamespace: default\nLabels: game-config=config-4\nAnnotations: kubectl.kubernetes.io/last-applied-configuration:\n {\"apiVersion\":\"v1\",\"data\":{\"game.properties\":\"enemies=aliens\\nlives=3\\nenemies.cheat=true\\nenemies.cheat.level=noGoodRotten\\nsecret.code.p...\n\nData\n====\ngame.properties:", "completion": "----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nEvents: <none>", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. 
This ensures that a new ConfigMap is generated each time the content is modified.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12484"} {"prompt": "Explain this text code:\n\nName: game-config-4-m9dm2f92bt\nNamespace: default\nLabels: game-config=config-4\nAnnotations: kubectl.kubernetes.io/last-applied-configuration:\n {\"apiVersion\":\"v1\",\"data\":{\"game.properties\":\"enemies=aliens\\nlives=3\\nenemies.cheat=true\\nenemies.cheat.level=noGoodRotten\\nsecret.code.p...\n\nData\n====\ngame.properties:\n----\nenemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30\nEvents: <none>", "completion": "This text code Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a new ConfigMap is generated each time the content is modified.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. 
This ensures that a new ConfigMap is generated each time the content is modified.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12485"} {"prompt": "Complete this shell code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-5\n  options:", "completion": "    labels:\n      game-config: config-5\n  files:\n  - game-special-key=configure-pod-container/configmap/game.properties\nEOF", "language": "shell", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12486"} {"prompt": "Explain this shell code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-5\n  options:\n    labels:\n      game-config: config-5\n  files:\n  - game-special-key=configure-pod-container/configmap/game.properties\nEOF", "completion": "This shell code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12487"} {"prompt": "Complete this yaml code:\n\n---\n# kustomization.yaml contents for creating a ConfigMap from literals\nconfigMapGenerator:", "completion": "- name: special-config-2\n  literals:\n  - special.how=very\n  - special.type=charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", 
"source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12488"} {"prompt": "Explain this yaml code:\n\n---\n# kustomization.yaml contents for creating a ConfigMap from literals\nconfigMapGenerator:\n- name: special-config-2\n literals:\n - special.how=very\n - special.type=charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12489"} {"prompt": "Explain this bash code:\n\nkubectl delete configmap special-config\nkubectl delete configmap env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now that you have learned to define ConfigMaps, you can move on to the next section, and learn how to use these objects with Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now that you have learned to define ConfigMaps, you can move on to the next section, and learn how to use these objects with Pods.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12490"} {"prompt": "Explain this shell code:\n\nkubectl create configmap special-config --from-literal=special.how=very", "completion": "This shell code Configure a Pod to Use a ConfigMap | Assign the special.how value defined in the ConfigMap to the SPECIAL_LEVEL_KEY environment variable in the Pod specification.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Assign the special.how 
value defined in the ConfigMap to the SPECIAL_LEVEL_KEY environment variable in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12491"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n   image: registry.k8s.io/busybox:1.27.2\n   command: [ \"/bin/sh\", \"-c\", \"env\" ]", "completion": "   env:\n   # Define the environment variable\n   - name: SPECIAL_LEVEL_KEY\n     valueFrom:\n       configMapKeyRef:\n         # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n         name: special-config\n         # Specify the key associated with the value\n         key: special.how\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12492"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n   image: registry.k8s.io/busybox:1.27.2\n   command: [ \"/bin/sh\", \"-c\", \"env\" ]\n   env:\n   # Define the environment variable\n   - name: SPECIAL_LEVEL_KEY\n     valueFrom:\n       configMapKeyRef:\n         # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n         name: special-config\n         # Specify the key associated with the value\n         key: special.how\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12493"} {"prompt": "Explain this shell code:\n\nkubectl create -f 
https://kubernetes.io/examples/pods/pod-single-configmap-env-variable.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variable SPECIAL_LEVEL_KEY=very.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variable SPECIAL_LEVEL_KEY=very.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12494"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very", "completion": "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: env-config\n namespace: default\ndata:\n log_level: INFO", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12495"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: env-config\n namespace: default\ndata:\n log_level: INFO", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12496"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/configmap/configmaps.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | Define the environment variables in the Pod specification.", "language": "shell", "type": 
"code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Define the environment variables in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12497"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12498"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12499"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-multiple-configmap-env-variable.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | Now, the Pod's 
output includes environment variables SPECIAL_LEVEL_KEY=very and LOG_LEVEL=INFO.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL_KEY=very and LOG_LEVEL=INFO.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12500"} {"prompt": "Explain this shell code:\n\nkubectl delete pod dapi-test-pod --now\nkubectl delete configmap special-config\nkubectl delete configmap env-config", "completion": "This shell code Configure a Pod to Use a ConfigMap | Create a ConfigMap containing multiple key-value pairs.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Create a ConfigMap containing multiple key-value pairs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12501"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config", "completion": " namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12502"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12503"} 
{"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | Use envFrom to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Use envFrom to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12504"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:", "completion": " - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12505"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12506"} {"prompt": 
"Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-envFrom.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL=very and SPECIAL_TYPE=charm.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL=very and SPECIAL_TYPE=charm.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12507"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12508"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "language": 
"yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12509"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valueFrom.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | That pod produces the following output from the test-container container:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | That pod produces the following output from the test-container container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12510"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config", "completion": " namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12511"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12512"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.yaml", "completion": "This shell 
code Configure a Pod to Use a ConfigMap | Add the ConfigMap name under the volumes section of the Pod specification. This adds the ConfigMap data to the directory specified as volumeMounts.mountPath (in this case, /etc/config). The command se", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Add the ConfigMap name under the volumes section of the Pod specification. This adds the ConfigMap data to the directory specified as volumeMounts.mountPath (in this case, /etc/config). The command se", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12513"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap containing the files you want\n # to add to the container\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12514"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap containing the files you want\n # to add to the container\n name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use 
a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12515"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-volume.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | When the pod runs, the command ls /etc/config/ produces the output below:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | When the pod runs, the command ls /etc/config/ produces the output below:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12516"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]\n volumeMounts:", "completion": " - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n path: keys\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12517"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n 
path: keys\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12518"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-volume-specific-key.yaml", "completion": "This shell code Configure a Pod to Use a ConfigMap | When the pod runs, the command cat /etc/config/keys produces the output below:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | When the pod runs, the command cat /etc/config/keys produces the output below:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12519"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2016-02-18T19:14:38Z\n name: example-config\n namespace: default\ndata:", "completion": " # example of a simple property defined using --from-literal\n example.property.1: hello\n example.property.2: world\n # example of a complex property defined using --from-file\n example.property.file: |-\n property.1=value-1\n property.2=value-2\n property.3=value-3", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. 
Both text and binary data sources can be comb", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12520"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2016-02-18T19:14:38Z\n name: example-config\n namespace: default\ndata:\n # example of a simple property defined using --from-literal\n example.property.1: hello\n example.property.2: world\n # example of a complex property defined using --from-file\n example.property.file: |-\n property.1=value-1\n property.2=value-2\n property.3=value-3", "completion": "This yaml code Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. Both text and binary data sources can be comb", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. 
Both text and binary data sources can be comb", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12521"} {"prompt": "Explain this bash code:\n\nkubectl get configmap -o jsonpath='{.binaryData}' <name>", "completion": "This bash code Configure a Pod to Use a ConfigMap | Pods can load data from a ConfigMap that uses either data or binaryData.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Pods can load data from a ConfigMap that uses either data or binaryData.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12522"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox", "completion": " command: [\"/bin/sh\", \"-c\", \"env\"]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: a-config\n key: akey\n optional: true # mark the variable as optional\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. 
If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the o", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12523"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"env\"]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: a-config\n key: akey\n optional: true # mark the variable as optional\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the o", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. 
If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the o", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12524"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"ls /etc/config\"]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: no-config\n optional: true # mark the source ConfigMap as optional\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). If you reference", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12525"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"ls /etc/config\"]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: no-config\n optional: true # mark the source ConfigMap as optional\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). 
If you reference", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). If you reference", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12526"} {"prompt": "Explain this text code:\n\nLASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE\n0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames {kubelet, 127.0.0.1} Keys [1badkey, 2alsobad] from the EnvFrom configMap default/myconfig were skipped since they are considered invalid environment variable names.", "completion": "This text code Configure a Pod to Use a ConfigMap | ConfigMaps reside in a specific Namespace. Pods can only refer to ConfigMaps that are in the same namespace as the Pod.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | ConfigMaps reside in a specific Namespace. 
Pods can only refer to ConfigMaps that are in the same namespace as the Pod.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12527"} {"prompt": "Complete this bash code:\n\nkubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \\\n configmaps/game-config-env-file\nkubectl delete pod dapi-test-pod --now", "completion": "\n# You might already have removed the next set\nkubectl delete configmaps/special-config configmaps/env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "language": "bash", "type": "kubectl-command", "context": "Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12528"} {"prompt": "Explain this bash code:\n\nkubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \\\n configmaps/game-config-env-file\nkubectl delete pod dapi-test-pod --now\n\n# You might already have removed the next set\nkubectl delete configmaps/special-config configmaps/env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "completion": "This bash code Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12529"} {"prompt": "Complete this text code:\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game.properties -O 
configure-pod-container/configmap/game.properties\nwget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties", "completion": "\n# Create the ConfigMap\nkubectl create configmap game-config --from-file=configure-pod-container/configmap/", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. You can display details of the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12530"} {"prompt": "Explain this text code:\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties\nwget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties\n\n# Create the ConfigMap\nkubectl create configmap game-config --from-file=configure-pod-container/configmap/", "completion": "This text code Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. You can display details of the", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | The above command packages each file, in this case, game.properties and ui.properties in the configure-pod-container/configmap/ directory into the game-config ConfigMap. 
You can display details of the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12531"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:52:05Z\n name: game-config\n namespace: default\n resourceVersion: \"516\"\n uid: b4952dc3-d670-11e5-8cd0-68f728db1985\ndata:\n game.properties: |\n enemies=aliens", "completion": " lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30 \n ui.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true\n how.nice.to.look=fairlyNice", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12532"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:52:05Z\n name: game-config\n namespace: default\n resourceVersion: \"516\"\n uid: b4952dc3-d670-11e5-8cd0-68f728db1985\ndata:\n game.properties: |\n enemies=aliens\n lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30 \n ui.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true\n how.nice.to.look=fairlyNice", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, or from multiple files.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap to create a ConfigMap from an individual file, 
or from multiple files.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12533"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12534"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties --from-file=configure-pod-container/configmap/ui.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | You can display details of the game-config-2 ConfigMap using the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | You can display details of the game-config-2 ConfigMap using the following command:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12535"} {"prompt": "Complete this text code:\n\n# Env-files contain a list of environment variables.\n# These syntax rules apply:\n# Each line in an env file has to be in VAR=VAL format.\n# Lines beginning with # (i.e. comments) are ignored.\n# Blank lines are ignored.\n# There is no special handling of quotation marks (i.e. 
they will be part of the ConfigMap value)).\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties", "completion": "wget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties\n\n# The env-file `game-env-file.properties` looks like below\ncat configure-pod-container/configmap/game-env-file.properties\nenemies=aliens\nlives=3\nallowed=\"true\"\n\n# This comment and the empty line above it are ignored", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12536"} {"prompt": "Explain this text code:\n\n# Env-files contain a list of environment variables.\n# These syntax rules apply:\n# Each line in an env file has to be in VAR=VAL format.\n# Lines beginning with # (i.e. comments) are ignored.\n# Blank lines are ignored.\n# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value)).\n\n# Download the sample files into `configure-pod-container/configmap/` directory\nwget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties\nwget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties\n\n# The env-file `game-env-file.properties` looks like below\ncat configure-pod-container/configmap/game-env-file.properties\nenemies=aliens\nlives=3\nallowed=\"true\"\n\n# This comment and the empty line above it are ignored", "completion": "This text code Configure a Pod to Use a ConfigMap | would produce a ConfigMap. 
View the ConfigMap:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12537"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-env-file \\\n --from-env-file=configure-pod-container/configmap/game-env-file.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce a ConfigMap. View the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12538"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:36:28Z\n name: game-config-env-file\n namespace: default", "completion": " resourceVersion: \"809965\"\n uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n enemies: aliens\n lives: \"3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12539"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:36:28Z\n name: game-config-env-file\n namespace: default\n resourceVersion: \"809965\"\n uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n enemies: aliens\n lives: \"3\"", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Starting 
with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Starting with Kubernetes v1.23, kubectl supports the --from-env-file argument to be specified multiple times to create a ConfigMap from multiple data sources.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12540"} {"prompt": "Explain this bash code:\n\nkubectl create configmap config-multi-env-files \\\n --from-env-file=configure-pod-container/configmap/game-env-file.properties \\\n --from-env-file=configure-pod-container/configmap/ui-env-file.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12541"} {"prompt": "Explain this bash code:\n\nkubectl get configmap config-multi-env-files -o yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | where the output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | where the output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12542"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:38:34Z\n name: config-multi-env-files\n namespace: default\n resourceVersion: \"810136\"", "completion": " uid: 252c4572-eb35-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n color: purple\n enemies: 
aliens\n how: fairlyNice\n lives: \"3\"\n textmode: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12543"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2019-12-27T18:38:34Z\n name: config-multi-env-files\n namespace: default\n resourceVersion: \"810136\"\n uid: 252c4572-eb35-11e7-887b-42010a8002b8\ndata:\n allowed: '\"true\"'\n color: purple\n enemies: aliens\n how: fairlyNice\n lives: \"3\"\n textmode: \"true\"", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can define a key other than the file name to use in the data section of your ConfigMap when using the --from-file argument:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12544"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-3 --from-file=<my-key-name>=<path-to-file>", "completion": "This bash code Configure a Pod to Use a ConfigMap | where <my-key-name> is the key you want to use in the ConfigMap and <path-to-file> is the location of the data source file you want the key to represent.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | where <my-key-name> is the key you want to use in the ConfigMap and <path-to-file> is the location of the data source file you want the key to represent.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12545"} {"prompt": "Explain this bash code:\n\nkubectl create configmap game-config-3 --from-file=game-special-key=configure-pod-container/configmap/game.properties", "completion": "This bash code Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | would produce the following ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12546"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:54:22Z\n name: game-config-3\n namespace: default\n resourceVersion: \"530\"\n uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "completion": "data:\n game-special-key: |\n enemies=aliens\n lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12547"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T18:54:22Z\n name: game-config-3\n namespace: default\n resourceVersion: \"530\"\n uid: 05f8da22-d671-11e5-8cd0-68f728db1985\ndata:\n game-special-key: |\n enemies=aliens\n lives=3\n enemies.cheat=true\n enemies.cheat.level=noGoodRotten\n secret.code.passphrase=UUDDLRLRBABAS\n secret.code.allowed=true\n secret.code.lives=30", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You 
can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can use kubectl create configmap with the --from-literal argument to define a literal value from the command line:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12548"} {"prompt": "Explain this bash code:\n\nkubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "completion": "This bash code Configure a Pod to Use a ConfigMap | You can pass in multiple key-value pairs. Each pair provided on the command line is represented as a separate entry in the data section of the ConfigMap.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | You can pass in multiple key-value pairs. Each pair provided on the command line is represented as a separate entry in the data section of the ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12549"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T19:14:38Z\n name: special-config", "completion": " namespace: default\n resourceVersion: \"651\"\n uid: dadce046-d673-11e5-8cd0-68f728db1985\ndata:\n special.how: very\n special.type: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. 
You should specify the generators in a kustomization.yaml file within a directory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12550"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2022-02-18T19:14:38Z\n name: special-config\n namespace: default\n resourceVersion: \"651\"\n uid: dadce046-d673-11e5-8cd0-68f728db1985\ndata:\n special.how: very\n special.type: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. You should specify the generators in a kustomization.yaml file within a directory.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | You can also create a ConfigMap from generators and then apply it to create the object in the cluster's API server. 
You should specify the generators in a kustomization.yaml file within a directory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12551"} {"prompt": "Complete this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-4\n options:", "completion": " labels:\n game-config: config-4\n files:\n - configure-pod-container/configmap/game.properties\nEOF", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12552"} {"prompt": "Explain this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-4\n options:\n labels:\n game-config: config-4\n files:\n - configure-pod-container/configmap/game.properties\nEOF", "completion": "This text code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12553"} {"prompt": "Explain this bash code:\n\nkubectl describe configmaps/game-config-4-m9dm2f92bt", "completion": "This bash code Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. 
This ensures that a new ConfigMap is generated each time the content is modified.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a new ConfigMap is generated each time the content is modified.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12554"} {"prompt": "Complete this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-5\n options:", "completion": " labels:\n game-config: config-5\n files:\n - game-special-key=configure-pod-container/configmap/game.properties\nEOF", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12555"} {"prompt": "Explain this text code:\n\n# Create a kustomization.yaml file with ConfigMapGenerator\ncat <<EOF >./kustomization.yaml\nconfigMapGenerator:\n- name: game-config-5\n options:\n labels:\n game-config: config-5\n files:\n - game-special-key=configure-pod-container/configmap/game.properties\nEOF", "completion": "This text code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12556"} {"prompt": "Complete this text code:\n\n---\n# kustomization.yaml contents for creating a ConfigMap from 
literals\nconfigMapGenerator:", "completion": "- name: special-config-2\n literals:\n - special.how=very\n - special.type=charm", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12557"} {"prompt": "Explain this text code:\n\n---\n# kustomization.yaml contents for creating a ConfigMap from literals\nconfigMapGenerator:\n- name: special-config-2\n literals:\n - special.how=very\n - special.type=charm", "completion": "This text code Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a ConfigMap | Apply the kustomization directory to create the ConfigMap object:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12558"} {"prompt": "Explain this bash code:\n\nkubectl delete configmap special-config\nkubectl delete configmap env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now that you have learned to define ConfigMaps, you can move on to the next section, and learn how to use these objects with Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now that you have learned to define ConfigMaps, you can move on to the next section, and learn how to use these objects with Pods.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12559"} {"prompt": "Explain this bash code:\n\nkubectl create configmap special-config --from-literal=special.how=very", "completion": "This bash code Configure a 
Pod to Use a ConfigMap | Assign the special.how value defined in the ConfigMap to the SPECIAL_LEVEL_KEY environment variable in the Pod specification.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Assign the special.how value defined in the ConfigMap to the SPECIAL_LEVEL_KEY environment variable in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12560"} {"prompt": "Complete this yaml code:\n\npods/pod-single-configmap-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]", "completion": " env:\n # Define the environment variable\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n name: special-config\n # Specify the key associated with the value\n key: special.how\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12561"} {"prompt": "Explain this yaml code:\n\npods/pod-single-configmap-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n # Define the environment variable\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n name: special-config\n # Specify the key associated with the value\n key: special.how\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create 
the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12562"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]", "completion": " env:\n # Define the environment variable\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n name: special-config\n # Specify the key associated with the value\n key: special.how\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12563"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n # Define the environment variable\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY\n name: special-config\n # Specify the key associated with the value\n key: special.how\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12564"} {"prompt": "Explain this bash code:\n\nkubectl create -f 
https://kubernetes.io/examples/pods/pod-single-configmap-env-variable.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variable SPECIAL_LEVEL_KEY=very.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variable SPECIAL_LEVEL_KEY=very.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12565"} {"prompt": "Complete this yaml code:\n\nconfigmap/configmaps.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very", "completion": "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: env-config\n namespace: default\ndata:\n log_level: INFO", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12566"} {"prompt": "Explain this yaml code:\n\nconfigmap/configmaps.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: env-config\n namespace: default\ndata:\n log_level: INFO", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12567"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very", "completion": "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: 
env-config\n namespace: default\ndata:\n log_level: INFO", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12568"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n special.how: very\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: env-config\n namespace: default\ndata:\n log_level: INFO", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12569"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/configmap/configmaps.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Define the environment variables in the Pod specification.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Define the environment variables in the Pod specification.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12570"} {"prompt": "Complete this yaml code:\n\npods/pod-multiple-configmap-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: 
Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12571"} {"prompt": "Explain this yaml code:\n\npods/pod-multiple-configmap-env-variable.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12572"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12573"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n 
command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: special.how\n - name: LOG_LEVEL\n valueFrom:\n configMapKeyRef:\n name: env-config\n key: log_level\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12574"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-multiple-configmap-env-variable.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL_KEY=very and LOG_LEVEL=INFO.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL_KEY=very and LOG_LEVEL=INFO.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12575"} {"prompt": "Explain this bash code:\n\nkubectl delete pod dapi-test-pod --now\nkubectl delete configmap special-config\nkubectl delete configmap env-config", "completion": "This bash code Configure a Pod to Use a ConfigMap | Create a ConfigMap containing multiple key-value pairs.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Create a ConfigMap containing multiple key-value pairs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12576"} {"prompt": "Complete this yaml code:\n\nconfigmap/configmap-multikeys.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: special-config\n namespace: 
default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12577"} {"prompt": "Explain this yaml code:\n\nconfigmap/configmap-multikeys.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12578"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config", "completion": " namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12579"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12580"} {"prompt": "Explain this bash code:\n\nkubectl create -f 
https://kubernetes.io/examples/configmap/configmap-multikeys.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Use envFrom to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Use envFrom to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12581"} {"prompt": "Complete this yaml code:\n\npods/pod-configmap-envFrom.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:", "completion": " - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12582"} {"prompt": "Explain this yaml code:\n\npods/pod-configmap-envFrom.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12583"} 
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:", "completion": " - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12584"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n envFrom:\n - configMapRef:\n name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12585"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-envFrom.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL=very and SPECIAL_TYPE=charm.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Now, the Pod's output includes environment variables SPECIAL_LEVEL=very and SPECIAL_TYPE=charm.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12586"} {"prompt": "Complete this yaml code:\n\npods/pod-configmap-env-var-valueFrom.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n 
containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12587"} {"prompt": "Explain this yaml code:\n\npods/pod-configmap-env-var-valueFrom.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12588"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:", "completion": " - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n 
valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12589"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/echo\", \"$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)\" ]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_LEVEL\n - name: SPECIAL_TYPE_KEY\n valueFrom:\n configMapKeyRef:\n name: special-config\n key: SPECIAL_TYPE\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create that Pod, by running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12590"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valueFrom.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | That pod produces the following output from the test-container container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | That pod produces the following output from the test-container container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12591"} {"prompt": "Complete this yaml code:\n\nconfigmap/configmap-multikeys.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: special-config\n namespace: 
default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12592"} {"prompt": "Explain this yaml code:\n\nconfigmap/configmap-multikeys.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12593"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config", "completion": " namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12594"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: special-config\n namespace: default\ndata:\n SPECIAL_LEVEL: very\n SPECIAL_TYPE: charm", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12595"} {"prompt": "Explain this bash code:\n\nkubectl create -f 
https://kubernetes.io/examples/configmap/configmap-multikeys.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | Add the ConfigMap name under the volumes section of the Pod specification. This adds the ConfigMap data to the directory specified as volumeMounts.mountPath (in this case, /etc/config). The command section lists the directory files with ls /etc/config/.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Add the ConfigMap name under the volumes section of the Pod specification. This adds the ConfigMap data to the directory specified as volumeMounts.mountPath (in this case, /etc/config). The command section lists the directory files with ls /etc/config/.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12596"} {"prompt": "Complete this yaml code:\n\npods/pod-configmap-volume.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap containing the files you want\n # to add to the container\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12597"} {"prompt": "Explain this yaml code:\n\npods/pod-configmap-volume.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap 
containing the files you want\n # to add to the container\n name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12598"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap containing the files you want\n # to add to the container\n name: special-config\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12599"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\", \"-c\", \"ls /etc/config/\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n # Provide the name of the ConfigMap containing the files you want\n # to add to the container\n name: special-config\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12600"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-volume.yaml", "completion": "This bash code Configure a Pod to Use a ConfigMap | When the pod runs, the command ls /etc/config/ produces the output below:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | When the pod runs, the command ls /etc/config/ produces the output below:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12601"} {"prompt": "Complete this yaml code:\n\npods/pod-configmap-volume-specific-key.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n path: keys\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12602"} {"prompt": "Explain this yaml code:\n\npods/pod-configmap-volume-specific-key.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n path: keys\n restartPolicy: Never", "completion": "This yaml code Configure a 
Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12603"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]\n volumeMounts:", "completion": " - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n path: keys\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12604"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: registry.k8s.io/busybox:1.27.2\n command: [ \"/bin/sh\",\"-c\",\"cat /etc/config/keys\" ]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: special-config\n items:\n - key: SPECIAL_LEVEL\n path: keys\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12605"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://kubernetes.io/examples/pods/pod-configmap-volume-specific-key.yaml", "completion": "This bash code Configure a Pod to Use a 
ConfigMap | When the pod runs, the command cat /etc/config/keys produces the output below:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | When the pod runs, the command cat /etc/config/keys produces the output below:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12606"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2016-02-18T19:14:38Z\n name: example-config\n namespace: default\ndata:", "completion": " # example of a simple property defined using --from-literal\n example.property.1: hello\n example.property.2: world\n # example of a complex property defined using --from-file\n example.property.file: |-\n property.1=value-1\n property.2=value-2\n property.3=value-3", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. Both text and binary data sources can be comb", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12607"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n creationTimestamp: 2016-02-18T19:14:38Z\n name: example-config\n namespace: default\ndata:\n # example of a simple property defined using --from-literal\n example.property.1: hello\n example.property.2: world\n # example of a complex property defined using --from-file\n example.property.file: |-\n property.1=value-1\n property.2=value-2\n property.3=value-3", "completion": "This yaml code Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. 
Both text and binary data sources can be combined in one ConfigMap.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod to Use a ConfigMap | When kubectl creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the binaryData field of the ConfigMap, and not in data. Both text and binary data sources can be combined in one ConfigMap.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12608"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox", "completion": " command: [\"/bin/sh\", \"-c\", \"env\"]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: a-config\n key: akey\n optional: true # mark the variable as optional\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the output is also empty.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12609"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"env\"]\n env:\n - name: SPECIAL_LEVEL_KEY\n valueFrom:\n configMapKeyRef:\n name: a-config\n key: akey\n optional: true # mark the variable as optional\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. 
If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the output is also empty.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | If you run this pod, and there is no ConfigMap named a-config, the output is empty. If you run this pod, and there is a ConfigMap named a-config but that ConfigMap doesn't have a key named akey, the output is also empty.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12610"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"ls /etc/config\"]", "completion": " volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: no-config\n optional: true # mark the source ConfigMap as optional\n restartPolicy: Never", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). 
If you reference", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12611"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: dapi-test-pod\nspec:\n containers:\n - name: test-container\n image: gcr.io/google_containers/busybox\n command: [\"/bin/sh\", \"-c\", \"ls /etc/config\"]\n volumeMounts:\n - name: config-volume\n mountPath: /etc/config\n volumes:\n - name: config-volume\n configMap:\n name: no-config\n optional: true # mark the source ConfigMap as optional\n restartPolicy: Never", "completion": "This yaml code Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). If you reference", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a ConfigMap | You must create the ConfigMap object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as optional in the Pod spec (see Optional ConfigMaps). 
If you reference", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12612"} {"prompt": "Complete this bash code:\n\nkubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \\\n configmaps/game-config-env-file\nkubectl delete pod dapi-test-pod --now", "completion": "\n# You might already have removed the next set\nkubectl delete configmaps/special-config configmaps/env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "language": "bash", "type": "kubectl-command", "context": "Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_completion_12613"} {"prompt": "Explain this bash code:\n\nkubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \\\n configmaps/game-config-env-file\nkubectl delete pod dapi-test-pod --now\n\n# You might already have removed the next set\nkubectl delete configmaps/special-config configmaps/env-config\nkubectl delete configmap -l 'game-config in (config-4,config-5)'", "completion": "This bash code Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a ConfigMap | Remove the kustomization.yaml file that you used to generate the ConfigMap:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/", "id": "k8s_00489_code_explain_12614"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " name: build-robot\nautomountServiceAccountToken: false\n...", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | You can also opt 
out of automounting API credentials for a particular Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12615"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\nautomountServiceAccountToken: false\n...", "completion": "This yaml code Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12616"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod", "completion": "spec:\n serviceAccountName: build-robot\n automountServiceAccountToken: false\n ...", "language": "yaml", "type": "pod-definition", "context": "Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12617"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n serviceAccountName: build-robot\n automountServiceAccountToken: false\n ...", "completion": "This yaml code Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", 
"source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12618"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: ServiceAccount", "completion": "metadata:\n name: build-robot\nEOF", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12619"} {"prompt": "Explain this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\nEOF", "completion": "This yaml code Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12620"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2019-06-16T00:12:34Z", "completion": " name: build-robot\n namespace: default\n resourceVersion: \"272500\"\n uid: 721ab723-13bc-11e5-aec2-42010af0021e", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12621"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2019-06-16T00:12:34Z\n name: build-robot\n namespace: default\n resourceVersion: \"272500\"\n uid: 
721ab723-13bc-11e5-aec2-42010af0021e", "completion": "This yaml code Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12622"} {"prompt": "Explain this bash code:\n\nkubectl create token build-robot --bound-object-kind Node --bound-object-name node-001 --bound-object-uid 123...456", "completion": "This bash code Configure Service Accounts for Pods | The token will be valid until it expires or either the associated Node or service account are deleted.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | The token will be valid until it expires or either the associated Node or service account are deleted.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12623"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: build-robot-secret\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Service Accounts for Pods | If you view the Secret using:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12624"} {"prompt": "Explain this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token\nEOF", "completion": "This yaml code 
Configure Service Accounts for Pods | If you view the Secret using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Service Accounts for Pods | If you view the Secret using:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12625"} {"prompt": "Complete this text code:\n\nName: build-robot-secret\nNamespace: default\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name: build-robot\n kubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da\n", "completion": "Type: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1338 bytes\nnamespace: 7 bytes\ntoken: ...", "language": "text", "type": "code-example", "context": "Configure Service Accounts for Pods | The content of token is omitted here.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12626"} {"prompt": "Explain this text code:\n\nName: build-robot-secret\nNamespace: default\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name: build-robot\n kubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da\n\nType: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1338 bytes\nnamespace: 7 bytes\ntoken: ...", "completion": "This text code Configure Service Accounts for Pods | The content of token is omitted here.", "language": "text", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | The content of token is omitted here.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12627"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry myregistrykey --docker-server=<registry name> \\\n --docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \\\n --docker-email=DUMMY_DOCKER_EMAIL", "completion": "This bash code 
Configure Service Accounts for Pods | Verify it has been created.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | Verify it has been created.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12628"} {"prompt": "Explain this text code:\n\nNAME            TYPE                              DATA   AGE\nmyregistrykey   kubernetes.io/.dockerconfigjson   1      1d", "completion": "This text code Configure Service Accounts for Pods | Next, modify the default service account for the namespace to use this Secret as an imagePullSecret.", "language": "text", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | Next, modify the default service account for the namespace to use this Secret as an imagePullSecret.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12629"} {"prompt": "Explain this bash code:\n\nkubectl patch serviceaccount default -p '{\"imagePullSecrets\": [{\"name\": \"myregistrykey\"}]}'", "completion": "This bash code Configure Service Accounts for Pods | You can achieve the same outcome by editing the object manually:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | You can achieve the same outcome by editing the object manually:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12630"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z", "completion": " name: default\n namespace: default\n resourceVersion: \"243024\"\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for 
imagePullSecrets: and save it. Leave the uid value set the same as you found it.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12631"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z\n name: default\n namespace: default\n resourceVersion: \"243024\"\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6", "completion": "This yaml code Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. Leave the uid value set the same as you found it.", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. Leave the uid value set the same as you found it.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12632"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z", "completion": " name: default\n namespace: default\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6\nimagePullSecrets:\n - name: myregistrykey", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12633"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z\n name: default\n namespace: default\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6\nimagePullSecrets:\n - 
name: myregistrykey", "completion": "This yaml code Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12634"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=<registry name>/nginx --restart=Never\nkubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{\"\\n\"}'", "completion": "This bash code Configure Service Accounts for Pods | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | The output is:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12635"} {"prompt": "Complete this json code:\n\n{\n \"aud\": [ # matches the requested audiences, or the API server's default audiences when none are explicitly requested\n \"https://kubernetes.default.svc\"\n ],\n \"exp\": 1731613413,\n \"iat\": 1700077413,\n \"iss\": \"https://kubernetes.default.svc\", # matches the first value passed to the --service-account-issuer flag\n \"jti\": \"ea28ed49-2e11-4280-9ec5-bc3d1d84661a\", \n \"kubernetes.io\": {\n \"namespace\": \"kube-system\",\n \"node\": {\n \"name\": \"127.0.0.1\",\n \"uid\": \"58456cb0-dd00-45ed-b797-5578fdceaced\"", "completion": " },\n \"pod\": {\n \"name\": \"coredns-69cbfb9798-jv9gn\",\n \"uid\": \"778a530c-b3f4-47c0-9cd5-ab018fb64f33\"\n },\n \"serviceaccount\": {\n \"name\": \"coredns\",\n \"uid\": 
\"a087d5a0-e1dd-43ec-93ac-f13d89cd13af\"\n },\n \"warnafter\": 1700081020\n },\n \"nbf\": 1700077413,\n \"sub\": \"system:serviceaccount:kube-system:coredns\"\n}", "language": "json", "type": "api-response", "context": "Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12636"} {"prompt": "Explain this json code:\n\n{\n \"aud\": [ # matches the requested audiences, or the API server's default audiences when none are explicitly requested\n \"https://kubernetes.default.svc\"\n ],\n \"exp\": 1731613413,\n \"iat\": 1700077413,\n \"iss\": \"https://kubernetes.default.svc\", # matches the first value passed to the --service-account-issuer flag\n \"jti\": \"ea28ed49-2e11-4280-9ec5-bc3d1d84661a\", \n \"kubernetes.io\": {\n \"namespace\": \"kube-system\",\n \"node\": {\n \"name\": \"127.0.0.1\",\n \"uid\": \"58456cb0-dd00-45ed-b797-5578fdceaced\"\n },\n \"pod\": {\n \"name\": \"coredns-69cbfb9798-jv9gn\",\n \"uid\": \"778a530c-b3f4-47c0-9cd5-ab018fb64f33\"\n },\n \"serviceaccount\": {\n \"name\": \"coredns\",\n \"uid\": \"a087d5a0-e1dd-43ec-93ac-f13d89cd13af\"\n },\n \"warnafter\": 1700081020\n },\n \"nbf\": 1700077413,\n \"sub\": \"system:serviceaccount:kube-system:coredns\"\n}", "completion": "This json code Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "language": "json", "type": "api-response_explanation", "context": "Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12637"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /var/run/secrets/tokens", "completion": " name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "language": "yaml", "type": "pod-definition", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12638"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "completion": "This yaml code Configure Service Accounts for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12639"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml", "completion": "This bash code Configure Service Accounts for Pods | The kubelet will: request and store the token on behalf of the Pod; make the token available to the Pod at a configurable file path; and refresh the token as it approaches expiration. 
The kubelet proactively requests rotation for the token if it is older than 80% of its total TTL, or if the token is older than 24 hours.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | The kubelet will: request and store the token on behalf of the Pod; make the token available to the Pod at a configurable file path; and refresh the token as it approaches expiration. The kubelet proactively requests rotation for the token if it is older than 80% of its total TTL, or if the token is older than 24 hours.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12640"} {"prompt": "Explain this text code:\n\n/var/run/secrets/kubernetes.io/serviceaccount/token", "completion": "This text code Configure Service Accounts for Pods | For example:", "language": "text", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12641"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " name: build-robot\nautomountServiceAccountToken: false\n...", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12642"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\nautomountServiceAccountToken: false\n...", "completion": "This yaml code Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": 
"k8s_00490_code_explain_12643"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod", "completion": "spec:\n serviceAccountName: build-robot\n automountServiceAccountToken: false\n ...", "language": "yaml", "type": "pod-definition", "context": "Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12644"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n serviceAccountName: build-robot\n automountServiceAccountToken: false\n ...", "completion": "This yaml code Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12645"} {"prompt": "Complete this shell code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: ServiceAccount", "completion": "metadata:\n name: build-robot\nEOF", "language": "shell", "type": "code-example", "context": "Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12646"} {"prompt": "Explain this shell code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\nEOF", "completion": "This shell code Configure Service 
Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "language": "shell", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12647"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2019-06-16T00:12:34Z", "completion": " name: build-robot\n namespace: default\n resourceVersion: \"272500\"\n uid: 721ab723-13bc-11e5-aec2-42010af0021e", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12648"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2019-06-16T00:12:34Z\n name: build-robot\n namespace: default\n resourceVersion: \"272500\"\n uid: 721ab723-13bc-11e5-aec2-42010af0021e", "completion": "This yaml code Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12649"} {"prompt": "Explain this shell code:\n\nkubectl create token build-robot --bound-object-kind Node --bound-object-name node-001 --bound-object-uid 123...456", "completion": "This shell code Configure Service Accounts for Pods | The token will be valid until it expires or either the associated Node 
or service account are deleted.", "language": "shell", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | The token will be valid until it expires or either the associated Node or service account are deleted.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12650"} {"prompt": "Complete this shell code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: build-robot-secret\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token\nEOF", "language": "shell", "type": "code-example", "context": "Configure Service Accounts for Pods | If you view the Secret using:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12651"} {"prompt": "Explain this shell code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token\nEOF", "completion": "This shell code Configure Service Accounts for Pods | If you view the Secret using:", "language": "shell", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | If you view the Secret using:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12652"} {"prompt": "Complete this text code:\n\nName: build-robot-secret\nNamespace: default\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name: build-robot\n kubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da\n", "completion": "Type: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1338 bytes\nnamespace: 7 bytes\ntoken: ...", "language": "text", "type": "code-example", "context": "Configure Service Accounts for Pods | The 
content of token is omitted here.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12653"} {"prompt": "Explain this text code:\n\nName: build-robot-secret\nNamespace: default\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name: build-robot\n kubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da\n\nType: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1338 bytes\nnamespace: 7 bytes\ntoken: ...", "completion": "This text code Configure Service Accounts for Pods | The content of token is omitted here.", "language": "text", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | The content of token is omitted here.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12654"} {"prompt": "Explain this shell code:\n\nkubectl create secret docker-registry myregistrykey --docker-server=<registry name> \\\n --docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \\\n --docker-email=DUMMY_DOCKER_EMAIL", "completion": "This shell code Configure Service Accounts for Pods | Verify it has been created.", "language": "shell", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | Verify it has been created.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12655"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nmyregistrykey   kubernetes.io/.dockerconfigjson   1       1d", "completion": "This text code Configure Service Accounts for Pods | Next, modify the default service account for the namespace to use this Secret as an imagePullSecret.", "language": "text", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | Next, modify the default service account for the namespace to use this 
Secret as an imagePullSecret.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12656"} {"prompt": "Explain this shell code:\n\nkubectl patch serviceaccount default -p '{\"imagePullSecrets\": [{\"name\": \"myregistrykey\"}]}'", "completion": "This shell code Configure Service Accounts for Pods | You can achieve the same outcome by editing the object manually:", "language": "shell", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | You can achieve the same outcome by editing the object manually:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12657"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z", "completion": " name: default\n namespace: default\n resourceVersion: \"243024\"\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. Leave the uid value set the same as you found it.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12658"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z\n name: default\n namespace: default\n resourceVersion: \"243024\"\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6", "completion": "This yaml code Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. 
Leave the uid value set the same as you found it.", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. Leave the uid value set the same as you found it.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12659"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z", "completion": " name: default\n namespace: default\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6\nimagePullSecrets:\n - name: myregistrykey", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12660"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z\n name: default\n namespace: default\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6\nimagePullSecrets:\n - name: myregistrykey", "completion": "This yaml code Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": 
"k8s_00490_code_explain_12661"} {"prompt": "Explain this shell code:\n\nkubectl run nginx --image=<registry name>/nginx --restart=Never\nkubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{\"\\n\"}'", "completion": "This shell code Configure Service Accounts for Pods | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | The output is:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12662"} {"prompt": "Complete this yaml code:\n\n{\n \"aud\": [ # matches the requested audiences, or the API server's default audiences when none are explicitly requested\n \"https://kubernetes.default.svc\"\n ],\n \"exp\": 1731613413,\n \"iat\": 1700077413,\n \"iss\": \"https://kubernetes.default.svc\", # matches the first value passed to the --service-account-issuer flag\n \"jti\": \"ea28ed49-2e11-4280-9ec5-bc3d1d84661a\", \n \"kubernetes.io\": {\n \"namespace\": \"kube-system\",\n \"node\": {\n \"name\": \"127.0.0.1\",\n \"uid\": \"58456cb0-dd00-45ed-b797-5578fdceaced\"", "completion": " },\n \"pod\": {\n \"name\": \"coredns-69cbfb9798-jv9gn\",\n \"uid\": \"778a530c-b3f4-47c0-9cd5-ab018fb64f33\"\n },\n \"serviceaccount\": {\n \"name\": \"coredns\",\n \"uid\": \"a087d5a0-e1dd-43ec-93ac-f13d89cd13af\"\n },\n \"warnafter\": 1700081020\n },\n \"nbf\": 1700077413,\n \"sub\": \"system:serviceaccount:kube-system:coredns\"\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12663"} {"prompt": "Explain this yaml code:\n\n{\n \"aud\": [ # matches the requested audiences, or the API server's default 
audiences when none are explicitly requested\n \"https://kubernetes.default.svc\"\n ],\n \"exp\": 1731613413,\n \"iat\": 1700077413,\n \"iss\": \"https://kubernetes.default.svc\", # matches the first value passed to the --service-account-issuer flag\n \"jti\": \"ea28ed49-2e11-4280-9ec5-bc3d1d84661a\", \n \"kubernetes.io\": {\n \"namespace\": \"kube-system\",\n \"node\": {\n \"name\": \"127.0.0.1\",\n \"uid\": \"58456cb0-dd00-45ed-b797-5578fdceaced\"\n },\n \"pod\": {\n \"name\": \"coredns-69cbfb9798-jv9gn\",\n \"uid\": \"778a530c-b3f4-47c0-9cd5-ab018fb64f33\"\n },\n \"serviceaccount\": {\n \"name\": \"coredns\",\n \"uid\": \"a087d5a0-e1dd-43ec-93ac-f13d89cd13af\"\n },\n \"warnafter\": 1700081020\n },\n \"nbf\": 1700077413,\n \"sub\": \"system:serviceaccount:kube-system:coredns\"\n}", "completion": "This yaml code Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12664"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /var/run/secrets/tokens", "completion": " name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "language": "yaml", "type": "pod-definition", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12665"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "completion": "This yaml code Configure Service Accounts for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12666"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml", "completion": "This shell code Configure Service Accounts for Pods | The kubelet will: request and store the token on behalf of the Pod; make the token available to the Pod at a configurable file path; and refresh the token as it approaches expiration. The kubelet proa", "language": "shell", "type": "code-example_explanation", "context": "Configure Service Accounts for Pods | The kubelet will: request and store the token on behalf of the Pod; make the token available to the Pod at a configurable file path; and refresh the token as it approaches expiration. 
The kubelet proa", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12667"} {"prompt": "Explain this json code:\n\n{service-account-issuer}/.well-known/openid-configuration", "completion": "This json code Configure Service Accounts for Pods | If the URL does not comply, ServiceAccount issuer discovery endpoints are not registered or accessible.", "language": "json", "type": "api-response_explanation", "context": "Configure Service Accounts for Pods | If the URL does not comply, ServiceAccount issuer discovery endpoints are not registered or accessible.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12668"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " name: build-robot\nautomountServiceAccountToken: false\n...", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12669"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\nautomountServiceAccountToken: false\n...", "completion": "This yaml code Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | You can also opt out of automounting API credentials for a particular Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12670"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod", "completion": "spec:\n 
serviceAccountName: build-robot\n automountServiceAccountToken: false\n ...", "language": "yaml", "type": "pod-definition", "context": "Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12671"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n serviceAccountName: build-robot\n automountServiceAccountToken: false\n ...", "completion": "This yaml code Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Service Accounts for Pods | If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12672"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: ServiceAccount", "completion": "metadata:\n name: build-robot\nEOF", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12673"} {"prompt": "Explain this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\nEOF", "completion": "This yaml code Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "language": "yaml", "type": 
"service-definition_explanation", "context": "Configure Service Accounts for Pods | The name of a ServiceAccount object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12674"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2019-06-16T00:12:34Z", "completion": " name: build-robot\n namespace: default\n resourceVersion: \"272500\"\n uid: 721ab723-13bc-11e5-aec2-42010af0021e", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12675"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2019-06-16T00:12:34Z\n name: build-robot\n namespace: default\n resourceVersion: \"272500\"\n uid: 721ab723-13bc-11e5-aec2-42010af0021e", "completion": "This yaml code Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | You can use authorization plugins to set permissions on service accounts.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12676"} {"prompt": "Explain this bash code:\n\nkubectl create token build-robot --bound-object-kind Node --bound-object-name node-001 --bound-object-uid 123...456", "completion": "This bash code Configure Service Accounts for Pods | The token will be valid until it expires or either the associated Node or service account are deleted.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure 
Service Accounts for Pods | The token will be valid until it expires or either the associated Node or service account are deleted.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12677"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: build-robot-secret\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Service Accounts for Pods | If you view the Secret using:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12678"} {"prompt": "Explain this yaml code:\n\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token\nEOF", "completion": "This yaml code Configure Service Accounts for Pods | If you view the Secret using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Service Accounts for Pods | If you view the Secret using:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12679"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry myregistrykey --docker-server=<registry name> \\\n --docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \\\n --docker-email=DUMMY_DOCKER_EMAIL", "completion": "This bash code Configure Service Accounts for Pods | Verify it has been created.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | Verify it has been created.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12680"} {"prompt": "Explain this bash code:\n\nkubectl patch serviceaccount default -p '{\"imagePullSecrets\": [{\"name\": \"myregistrykey\"}]}'", "completion": "This bash code Configure Service Accounts for Pods | You can achieve the same outcome by editing the object manually:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | You can achieve the same outcome by editing the object manually:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12681"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z", "completion": " name: default\n namespace: default\n resourceVersion: \"243024\"\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. Leave the uid value set the same as you found it.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12682"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z\n name: default\n namespace: default\n resourceVersion: \"243024\"\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6", "completion": "This yaml code Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. 
Leave the uid value set the same as you found it.", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | Using your editor, delete the line with key resourceVersion, add lines for imagePullSecrets: and save it. Leave the uid value set the same as you found it.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12683"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z", "completion": " name: default\n namespace: default\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6\nimagePullSecrets:\n - name: myregistrykey", "language": "yaml", "type": "service-definition", "context": "Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12684"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n creationTimestamp: 2021-07-07T22:02:39Z\n name: default\n namespace: default\n uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6\nimagePullSecrets:\n - name: myregistrykey", "completion": "This yaml code Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "language": "yaml", "type": "service-definition_explanation", "context": "Configure Service Accounts for Pods | Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its spec.imagePullSecrets field set automatically:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": 
"k8s_00490_code_explain_12685"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=<registry name>/nginx --restart=Never\nkubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{\"\\n\"}'", "completion": "This bash code Configure Service Accounts for Pods | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | The output is:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12686"} {"prompt": "Complete this json code:\n\n{\n \"aud\": [ # matches the requested audiences, or the API server's default audiences when none are explicitly requested\n \"https://kubernetes.default.svc\"\n ],\n \"exp\": 1731613413,\n \"iat\": 1700077413,\n \"iss\": \"https://kubernetes.default.svc\", # matches the first value passed to the --service-account-issuer flag\n \"jti\": \"ea28ed49-2e11-4280-9ec5-bc3d1d84661a\", \n \"kubernetes.io\": {\n \"namespace\": \"kube-system\",\n \"node\": {\n \"name\": \"127.0.0.1\",\n \"uid\": \"58456cb0-dd00-45ed-b797-5578fdceaced\"", "completion": " },\n \"pod\": {\n \"name\": \"coredns-69cbfb9798-jv9gn\",\n \"uid\": \"778a530c-b3f4-47c0-9cd5-ab018fb64f33\"\n },\n \"serviceaccount\": {\n \"name\": \"coredns\",\n \"uid\": \"a087d5a0-e1dd-43ec-93ac-f13d89cd13af\"\n },\n \"warnafter\": 1700081020\n },\n \"nbf\": 1700077413,\n \"sub\": \"system:serviceaccount:kube-system:coredns\"\n}", "language": "json", "type": "api-response", "context": "Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12687"} {"prompt": "Explain this json code:\n\n{\n \"aud\": [ # matches the requested audiences, or the API server's default audiences 
when none are explicitly requested\n \"https://kubernetes.default.svc\"\n ],\n \"exp\": 1731613413,\n \"iat\": 1700077413,\n \"iss\": \"https://kubernetes.default.svc\", # matches the first value passed to the --service-account-issuer flag\n \"jti\": \"ea28ed49-2e11-4280-9ec5-bc3d1d84661a\", \n \"kubernetes.io\": {\n \"namespace\": \"kube-system\",\n \"node\": {\n \"name\": \"127.0.0.1\",\n \"uid\": \"58456cb0-dd00-45ed-b797-5578fdceaced\"\n },\n \"pod\": {\n \"name\": \"coredns-69cbfb9798-jv9gn\",\n \"uid\": \"778a530c-b3f4-47c0-9cd5-ab018fb64f33\"\n },\n \"serviceaccount\": {\n \"name\": \"coredns\",\n \"uid\": \"a087d5a0-e1dd-43ec-93ac-f13d89cd13af\"\n },\n \"warnafter\": 1700081020\n },\n \"nbf\": 1700077413,\n \"sub\": \"system:serviceaccount:kube-system:coredns\"\n}", "completion": "This json code Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "language": "json", "type": "api-response_explanation", "context": "Configure Service Accounts for Pods | To provide a Pod with a token with an audience of vault and a validity duration of two hours, you could define a Pod manifest that is similar to:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12688"} {"prompt": "Complete this yaml code:\n\npods/pod-projected-svc-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:", "completion": " - mountPath: /var/run/secrets/tokens\n name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "language": "yaml", "type": "pod-definition", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12689"} {"prompt": "Explain this yaml code:\n\npods/pod-projected-svc-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "completion": "This yaml code Configure Service Accounts for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12690"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /var/run/secrets/tokens", "completion": " name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "language": "yaml", "type": "pod-definition", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_completion_12691"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /var/run/secrets/tokens\n name: vault-token\n serviceAccountName: build-robot\n volumes:\n - name: vault-token\n projected:\n sources:\n - serviceAccountToken:\n path: vault-token\n expirationSeconds: 7200\n audience: vault", "completion": "This 
yaml code Configure Service Accounts for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Service Accounts for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12692"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml", "completion": "This bash code Configure Service Accounts for Pods | The kubelet will: request and store the token on behalf of the Pod; make the token available to the Pod at a configurable file path; and refresh the token as it approaches expiration. The kubelet proa", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Service Accounts for Pods | The kubelet will: request and store the token on behalf of the Pod; make the token available to the Pod at a configurable file path; and refresh the token as it approaches expiration. 
The kubelet proa", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/", "id": "k8s_00490_code_explain_12693"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:\n - certificates.k8s.io", "completion": " resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12694"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", "completion": "This yaml code Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12695"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all 
signers in the 'example.com' domain\n verbs:\n - approve", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12696"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - approve", "completion": "This yaml code Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12697"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "language": "yaml", 
"type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12698"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "completion": "This yaml code Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12699"} {"prompt": "Complete this text code:\n\n-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b/oAwDQYJKoZIhvcNAQEL\nBQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV\nBAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4\nMB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz\ndGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G\nCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9+ZmU3\n+e1zfOywLdoQxrPi+o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm\nkmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh\nQ7Et13wrvTJqBMJo1GTwQuF+HYOku0NF/DLqbZIcpI08yQKyrBgYz2uO51/oNp8a", "completion": "sTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7\n2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj\nYTBfMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB\nAf8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC\nggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr\nL0/TCrqmuaaliUa42jQTt2OVsVP/L8ofFunj/KjpQU0bvKJPLMRKtmxbhXuQCQi1\nqCRkp8o93mHvEz3mTUN+D1cfQ2fpsBENLnpS0F4G/JyY2Vrh19/X8+mImMEK5eOy\no0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY+/XeDhEcg+lJvz9Eyo2\naGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd\nM1fLPhLyR54fGaY+7/X8P9AZzPefAkwizeXwe9ii6/a08vWoiE4=\n-----END CERTIFICATE-----", "language": "text", "type": "code-example", "context": "Certificates and Certificate Signing Requests | Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12700"} {"prompt": "Explain this text code:\n\n-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b/oAwDQYJKoZIhvcNAQEL\nBQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV\nBAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4\nMB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz\ndGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G\nCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9+ZmU3\n+e1zfOywLdoQxrPi+o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm\nkmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh\nQ7Et13wrvTJqBMJo1GTwQuF+HYOku0NF/DLqbZIcpI08yQKyrBgYz2uO51/oNp8a\nsTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7\n2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj\nYTBfMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB\nAf8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC\nggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr\nL0/TCrqmuaaliUa42jQTt2OVsVP/L8ofFunj/KjpQU0bvKJPLMRKtmxbhXuQCQi1\nqCRkp8o93mHvEz3mTUN+D1cfQ2fpsBENLnpS0F4G/JyY2Vrh19/X8+mImMEK5eOy\no0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY+/XeDhEcg+lJvz9Eyo2\naGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd\nM1fLPhLyR54fGaY+7/X8P9AZzPefAkwizeXwe9ii6/a08vWoiE4=\n-----END CERTIFICATE-----", "completion": "This text code Certificates and Certificate Signing Requests | Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468.", "language": "text", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12701"} {"prompt": "Complete this yaml code:\n\napiVersion: 
certificates.k8s.io/v1\nkind: CertificateSigningRequest", "completion": "...\nstatus:\n certificate: \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS...\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12702"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n certificate: \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS...\"", "completion": "This yaml code Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12703"} {"prompt": "Explain this bash code:\n\nkubectl certificate approve <certificate-signing-request-name>", "completion": "This bash code Certificates and Certificate Signing Requests | Likewise, to deny a CSR:", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificates and Certificate Signing Requests | Likewise, to deny a CSR:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12704"} {"prompt": "Explain this bash code:\n\nkubectl certificate deny <certificate-signing-request-name>", "completion": 
"This bash code Certificates and Certificate Signing Requests | Users of the REST API can approve CSRs by submitting an UPDATE request to the approval subresource of the CSR to be approved. For example, you could write an operator that watches for a particular kin", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificates and Certificate Signing Requests | Users of the REST API can approve CSRs by submitting an UPDATE request to the approval subresource of the CSR to be approved. For example, you could write an operator that watches for a particular kin", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12705"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:", "completion": " - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Approved by my custom approver controller\n reason: ApprovedByMyPolicy # You can set this to any string\n type: Approved", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | For Denied CSRs:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12706"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:\n - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Approved by my custom approver controller\n reason: ApprovedByMyPolicy # You can set this to any string\n type: Approved", "completion": "This yaml code Certificates and Certificate Signing Requests | For Denied CSRs:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | For Denied CSRs:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12707"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:", "completion": " - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Denied by my custom approver controller\n reason: DeniedByMyPolicy # You can set this to any string\n type: Denied", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. If you want to add a note for human consumpt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12708"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:\n - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Denied by my custom approver controller\n reason: DeniedByMyPolicy # You can set this to any string\n type: Denied", "completion": "This yaml code Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. If you want to add a note for human consumpt", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. 
If you want to add a note for human consumpt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12709"} {"prompt": "Explain this bash code:\n\nkubectl get clustertrustbundles --as='system:serviceaccount:mynamespace:default'", "completion": "This bash code Certificates and Certificate Signing Requests | Signer-linked ClusterTrustBundles are associated with a signer name, like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificates and Certificate Signing Requests | Signer-linked ClusterTrustBundles are associated with a signer name, like this:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12710"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:", "completion": " name: example.com:mysigner:foo\nspec:\n signerName: example.com/mysigner\n trustBundle: \"<... PEM data ...>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12711"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n name: example.com:mysigner:foo\nspec:\n signerName: example.com/mysigner\n trustBundle: \"<... 
PEM data ...>\"", "completion": "This yaml code Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12712"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:", "completion": " name: foo\nspec:\n # no signerName specified, so the field is blank\n trustBundle: \"<... PEM data ...>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12713"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n name: foo\nspec:\n # no signerName specified, so the field is blank\n trustBundle: \"<... PEM data ...>\"", "completion": "This yaml code Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. 
Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12714"} {"prompt": "Explain this text code:\n\naccess/certificate-signing-request/clusterrole-create.yaml", "completion": "This text code Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "language": "text", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12715"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:\n - certificates.k8s.io", "completion": " resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12716"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", 
"completion": "This yaml code Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12717"} {"prompt": "Explain this text code:\n\naccess/certificate-signing-request/clusterrole-approve.yaml", "completion": "This text code Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "language": "text", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12718"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - approve", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12719"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: 
csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - approve", "completion": "This yaml code Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12720"} {"prompt": "Explain this text code:\n\naccess/certificate-signing-request/clusterrole-sign.yaml", "completion": "This text code Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "language": "text", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12721"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n 
resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12722"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "completion": "This yaml code Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12723"} {"prompt": "Explain this json code:\n\n[\"digital signature\", \"key encipherment\", \"client auth\"]", "completion": "This json code Certificates and Certificate Signing Requests | kubernetes.io/kube-apiserver-client-kubelet: signs client certificates that will be honored as client certificates by the API server. 
May be auto-approved by kube-controller-manager.", "language": "json", "type": "api-response_explanation", "context": "Certificates and Certificate Signing Requests | kubernetes.io/kube-apiserver-client-kubelet: signs client certificates that will be honored as client certificates by the API server. May be auto-approved by kube-controller-manager.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12724"} {"prompt": "Explain this json code:\n\n[\"key encipherment\", \"digital signature\", \"client auth\"]", "completion": "This json code Certificates and Certificate Signing Requests | kubernetes.io/kubelet-serving: signs serving certificates that are honored as a valid kubelet serving certificate by the API server, but has no other guarantees. Never auto-approved by kube-controller", "language": "json", "type": "api-response_explanation", "context": "Certificates and Certificate Signing Requests | kubernetes.io/kubelet-serving: signs serving certificates that are honored as a valid kubelet serving certificate by the API server, but has no other guarantees. Never auto-approved by kube-controller", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12725"} {"prompt": "Explain this json code:\n\n[\"key encipherment\", \"digital signature\", \"server auth\"]", "completion": "This json code Certificates and Certificate Signing Requests | kubernetes.io/legacy-unknown: has no guarantees for trust at all. Some third-party distributions of Kubernetes may honor client certificates signed by it. The stable CertificateSigningRequest API (ver", "language": "json", "type": "api-response_explanation", "context": "Certificates and Certificate Signing Requests | kubernetes.io/legacy-unknown: has no guarantees for trust at all. Some third-party distributions of Kubernetes may honor client certificates signed by it. 
The stable CertificateSigningRequest API (ver", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12726"} {"prompt": "Complete this text code:\n\n-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b/oAwDQYJKoZIhvcNAQEL\nBQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV\nBAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4\nMB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz\ndGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G\nCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9+ZmU3\n+e1zfOywLdoQxrPi+o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm\nkmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh\nQ7Et13wrvTJqBMJo1GTwQuF+HYOku0NF/DLqbZIcpI08yQKyrBgYz2uO51/oNp8a", "completion": "sTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7\n2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj\nYTBfMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB\nAf8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC\nggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr\nL0/TCrqmuaaliUa42jQTt2OVsVP/L8ofFunj/KjpQU0bvKJPLMRKtmxbhXuQCQi1\nqCRkp8o93mHvEz3mTUN+D1cfQ2fpsBENLnpS0F4G/JyY2Vrh19/X8+mImMEK5eOy\no0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY+/XeDhEcg+lJvz9Eyo2\naGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd\nM1fLPhLyR54fGaY+7/X8P9AZzPefAkwizeXwe9ii6/a08vWoiE4=\n-----END CERTIFICATE-----", "language": "text", "type": "code-example", "context": "Certificates and Certificate Signing Requests | Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12727"} {"prompt": "Explain this text code:\n\n-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b/oAwDQYJKoZIhvcNAQEL\nBQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV\nBAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4\nMB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz\ndGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G\nCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9+ZmU3\n+e1zfOywLdoQxrPi+o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm\nkmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh\nQ7Et13wrvTJqBMJo1GTwQuF+HYOku0NF/DLqbZIcpI08yQKyrBgYz2uO51/oNp8a\nsTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7\n2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj\nYTBfMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB\nAf8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC\nggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr\nL0/TCrqmuaaliUa42jQTt2OVsVP/L8ofFunj/KjpQU0bvKJPLMRKtmxbhXuQCQi1\nqCRkp8o93mHvEz3mTUN+D1cfQ2fpsBENLnpS0F4G/JyY2Vrh19/X8+mImMEK5eOy\no0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY+/XeDhEcg+lJvz9Eyo2\naGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd\nM1fLPhLyR54fGaY+7/X8P9AZzPefAkwizeXwe9ii6/a08vWoiE4=\n-----END CERTIFICATE-----", "completion": "This text code Certificates and Certificate Signing Requests | Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468.", "language": "text", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12728"} {"prompt": "Complete this yaml code:\n\napiVersion: 
certificates.k8s.io/v1\nkind: CertificateSigningRequest", "completion": "...\nstatus:\n certificate: \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS...\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12729"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n certificate: \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS...\"", "completion": "This yaml code Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12730"} {"prompt": "Explain this shell code:\n\nkubectl certificate approve <certificate-signing-request-name>", "completion": "This shell code Certificates and Certificate Signing Requests | Likewise, to deny a CSR:", "language": "shell", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | Likewise, to deny a CSR:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12731"} {"prompt": "Explain this shell code:\n\nkubectl certificate deny <certificate-signing-request-name>", 
"completion": "This shell code Certificates and Certificate Signing Requests | Users of the REST API can approve CSRs by submitting an UPDATE request to the approval subresource of the CSR to be approved. For example, you could write an operator that watches for a particular kin", "language": "shell", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | Users of the REST API can approve CSRs by submitting an UPDATE request to the approval subresource of the CSR to be approved. For example, you could write an operator that watches for a particular kin", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12732"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:", "completion": " - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Approved by my custom approver controller\n reason: ApprovedByMyPolicy # You can set this to any string\n type: Approved", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | For Denied CSRs:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12733"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:\n - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Approved by my custom approver controller\n reason: ApprovedByMyPolicy # You can set this to any string\n type: Approved", "completion": "This yaml code Certificates and Certificate Signing Requests | For Denied CSRs:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | For Denied CSRs:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12734"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:", "completion": " - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Denied by my custom approver controller\n reason: DeniedByMyPolicy # You can set this to any string\n type: Denied", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. If you want to add a note for human consumpt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12735"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:\n - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Denied by my custom approver controller\n reason: DeniedByMyPolicy # You can set this to any string\n type: Denied", "completion": "This yaml code Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. If you want to add a note for human consumpt", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. 
If you want to add a note for human consumpt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12736"} {"prompt": "Explain this text code:\n\n--runtime-config=certificates.k8s.io/v1alpha1/podcertificaterequests=true", "completion": "This text code Certificates and Certificate Signing Requests | PodCertificateRequests are API objects tailored to provisioning certificates to workloads running as Pods within a cluster. The user typically does not interact with PodCertificateRequests directly, b", "language": "text", "type": "code-example_explanation", "context": "Certificates and Certificate Signing Requests | PodCertificateRequests are API objects tailored to provisioning certificates to workloads running as Pods within a cluster. The user typically does not interact with PodCertificateRequests directly, b", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12737"} {"prompt": "Explain this bash code:\n\nkubectl get clustertrustbundles --as='system:serviceaccount:mynamespace:default'", "completion": "This bash code Certificates and Certificate Signing Requests | Signer-linked ClusterTrustBundles are associated with a signer name, like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificates and Certificate Signing Requests | Signer-linked ClusterTrustBundles are associated with a signer name, like this:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12738"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:", "completion": " name: example.com:mysigner:foo\nspec:\n signerName: example.com/mysigner\n trustBundle: \"<... 
PEM data ...>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12739"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n name: example.com:mysigner:foo\nspec:\n signerName: example.com/mysigner\n trustBundle: \"<... PEM data ...>\"", "completion": "This yaml code Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12740"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:", "completion": " name: foo\nspec:\n # no signerName specified, so the field is blank\n trustBundle: \"<... PEM data ...>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. 
Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12741"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n name: foo\nspec:\n # no signerName specified, so the field is blank\n trustBundle: \"<... PEM data ...>\"", "completion": "This yaml code Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12742"} {"prompt": "Complete this yaml code:\n\naccess/certificate-signing-request/clusterrole-create.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:", "completion": " - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12743"} {"prompt": "Explain this yaml 
code:\n\naccess/certificate-signing-request/clusterrole-create.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", "completion": "This yaml code Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12744"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:\n - certificates.k8s.io", "completion": " resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12745"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-creator\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - create\n - get\n - list\n - watch", "completion": "This yaml code Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow approving a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": 
"k8s_00494_code_explain_12746"} {"prompt": "Complete this yaml code:\n\naccess/certificate-signing-request/clusterrole-approve.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - approve", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12747"} {"prompt": "Explain this yaml code:\n\naccess/certificate-signing-request/clusterrole-approve.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - approve", "completion": "This yaml code Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12748"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - approve", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12749"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-approver\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/approval\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - approve", "completion": "This yaml code Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | To allow signing a CertificateSigningRequest:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12750"} {"prompt": "Complete this yaml code:\n\naccess/certificate-signing-request/clusterrole-sign.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12751"} {"prompt": "Explain this yaml code:\n\naccess/certificate-signing-request/clusterrole-sign.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "completion": "This yaml code Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12752"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch", "completion": "- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12753"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: csr-signer\nrules:\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests\n verbs:\n - get\n - list\n - watch\n- apiGroups:\n - certificates.k8s.io\n resources:\n - certificatesigningrequests/status\n verbs:\n - update\n- apiGroups:\n - certificates.k8s.io\n resources:\n - signers\n resourceNames:\n - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain\n verbs:\n - sign", "completion": "This yaml code Certificates and Certificate Signing Requests | Signers abstractly represent the entity 
or entities that might sign, or have signed, a security certificate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | Signers abstractly represent the entity or entities that might sign, or have signed, a security certificate.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12754"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest", "completion": "...\nstatus:\n certificate: \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS...\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12755"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n certificate: \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS...\"", "completion": "This yaml code Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | Before a signer issues a certificate based on a CertificateSigningRequest, the signer typically checks that the issuance for that CSR has been approved.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12756"} {"prompt": "Explain this bash code:\n\nkubectl certificate approve <certificate-signing-request-name>", 
"completion": "This bash code Certificates and Certificate Signing Requests | Likewise, to deny a CSR:", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificates and Certificate Signing Requests | Likewise, to deny a CSR:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12757"} {"prompt": "Explain this bash code:\n\nkubectl certificate deny <certificate-signing-request-name>", "completion": "This bash code Certificates and Certificate Signing Requests | Users of the REST API can approve CSRs by submitting an UPDATE request to the approval subresource of the CSR to be approved. For example, you could write an operator that watches for a particular kin", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificates and Certificate Signing Requests | Users of the REST API can approve CSRs by submitting an UPDATE request to the approval subresource of the CSR to be approved. 
For example, you could write an operator that watches for a particular kin", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12758"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:", "completion": " - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Approved by my custom approver controller\n reason: ApprovedByMyPolicy # You can set this to any string\n type: Approved", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | For Denied CSRs:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12759"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:\n - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Approved by my custom approver controller\n reason: ApprovedByMyPolicy # You can set this to any string\n type: Approved", "completion": "This yaml code Certificates and Certificate Signing Requests | For Denied CSRs:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | For Denied CSRs:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12760"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:", "completion": " - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Denied by my custom approver controller\n reason: DeniedByMyPolicy # You can set this to any string\n type: Denied", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. If you want to add a note for human consumpt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12761"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\n...\nstatus:\n conditions:\n - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n lastTransitionTime: \"2020-02-08T11:37:35Z\"\n message: Denied by my custom approver controller\n reason: DeniedByMyPolicy # You can set this to any string\n type: Denied", "completion": "This yaml code Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. If you want to add a note for human consumpt", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | It's usual to set status.conditions.reason to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. 
If you want to add a note for human consumpt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12762"} {"prompt": "Explain this bash code:\n\nkubectl get clustertrustbundles --as='system:serviceaccount:mynamespace:default'", "completion": "This bash code Certificates and Certificate Signing Requests | Signer-linked ClusterTrustBundles are associated with a signer name, like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificates and Certificate Signing Requests | Signer-linked ClusterTrustBundles are associated with a signer name, like this:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12763"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:", "completion": " name: example.com:mysigner:foo\nspec:\n signerName: example.com/mysigner\n trustBundle: \"<... PEM data ...>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12764"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n name: example.com:mysigner:foo\nspec:\n signerName: example.com/mysigner\n trustBundle: \"<... 
PEM data ...>\"", "completion": "This yaml code Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | These ClusterTrustBundles are intended to be maintained by a signer-specific controller in the cluster, so they have several security features:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12765"} {"prompt": "Complete this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:", "completion": " name: foo\nspec:\n # no signerName specified, so the field is blank\n trustBundle: \"<... PEM data ...>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_completion_12766"} {"prompt": "Explain this yaml code:\n\napiVersion: certificates.k8s.io/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n name: foo\nspec:\n # no signerName specified, so the field is blank\n trustBundle: \"<... PEM data ...>\"", "completion": "This yaml code Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. 
Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificates and Certificate Signing Requests | They are primarily intended for cluster configuration use cases. Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the customary grouping behavior of signer-linked Clust", "source": "https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/", "id": "k8s_00494_code_explain_12767"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12768"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. 
Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12769"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12770"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12771"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. 
Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12772"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12773"} {"prompt": "Complete this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)", "completion": "partition\n!partition", "language": "text", "type": "code-example", "context": "Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12774"} {"prompt": "Explain this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)\npartition\n!partition", "completion": "This text code Labels and Selectors | Similarly the comma separator acts as an AND operator. 
So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12775"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This bash code Labels and Selectors | or using set-based requirements:", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12776"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This bash code Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12777"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This bash code Labels and Selectors | or restricting negative matching via notin operator:", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12778"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This bash code Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12779"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "text", "type": "code-example", "context": "Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12780"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This text code Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12781"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "text", "type": "code-example", "context": "Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12782"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This text code Labels and Selectors | and", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12783"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: 
replica", "language": "text", "type": "code-example", "context": "Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12784"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This text code Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12785"} {"prompt": "Explain this bash code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This bash code Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12786"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>", "completion": "guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "language": "text", "type": "code-example", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12787"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>\nguestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "completion": "This text code Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. 
This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12788"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nguestbook-redis-replica-2q2yf 1/1 Running 0 3m\nguestbook-redis-replica-qgazl 1/1 Running 0 3m", "completion": "This text code Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12789"} {"prompt": "Explain this text code:\n\npod/my-nginx-2035384211-j5fhi labeled\npod/my-nginx-2035384211-u2c7e labeled\npod/my-nginx-2035384211-u3t6x labeled", "completion": "This text code Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". To see the pods you labeled, run:", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". 
To see the pods you labeled, run:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12790"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe", "completion": "my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "language": "text", "type": "code-example", "context": "Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12791"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe\nmy-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "completion": "This text code Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12792"} {"prompt": "Complete this json code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "json", "type": "api-response", "context": "Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. 
Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12793"} {"prompt": "Explain this json code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This json code Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "json", "type": "api-response_explanation", "context": "Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12794"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12795"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12796"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12797"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). 
For example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12798"} {"prompt": "Complete this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)", "completion": "partition\n!partition", "language": "text", "type": "code-example", "context": "Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12799"} {"prompt": "Explain this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)\npartition\n!partition", "completion": "This text code Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12800"} {"prompt": "Explain this text code:\n\npartition in (customerA, customerB),environment!=qa", "completion": "This text code Labels and Selectors | For list and watch operations, you can specify label selectors to filter the sets of objects returned; you specify the filter using a query parameter. 
(To learn in detail about watches in Kubernetes,", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | For list and watch operations, you can specify label selectors to filter the sets of objects returned; you specify the filter using a query parameter. (To learn in detail about watches in Kubernetes,", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12801"} {"prompt": "Explain this text code:\n\n?labelSelector=environment%3Dproduction,tier%3Dfrontend", "completion": "This text code Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12802"} {"prompt": "Explain this text code:\n\n?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29", "completion": "This text code Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. 
For example, targeting apiserver with kubectl and using equality-based one may write:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12803"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This shell code Labels and Selectors | or using set-based requirements:", "language": "shell", "type": "code-example_explanation", "context": "Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12804"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This shell code Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "shell", "type": "code-example_explanation", "context": "Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12805"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This shell code Labels and Selectors | or restricting negative matching via notin operator:", "language": "shell", "type": "code-example_explanation", "context": "Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12806"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This shell code Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "shell", "type": "code-example_explanation", "context": "Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12807"} {"prompt": "Complete this yaml code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "yaml", "type": "kubernetes-manifest", "context": "Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12808"} {"prompt": "Explain this yaml code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This yaml code Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12809"} {"prompt": "Complete this yaml code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "yaml", "type": "kubernetes-manifest", "context": "Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12810"} {"prompt": "Explain this yaml code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This yaml code Labels and Selectors | and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12811"} {"prompt": "Complete this yaml code:\n\nlabels:\n app: guestbook", "completion": " 
tier: backend\n role: replica", "language": "yaml", "type": "kubernetes-manifest", "context": "Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12812"} {"prompt": "Explain this yaml code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This yaml code Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12813"} {"prompt": "Explain this shell code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This shell code Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "shell", "type": "code-example_explanation", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12814"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>", "completion": "guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "language": "none", "type": "code-example", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12815"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>\nguestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "completion": "This none code Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. 
This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "none", "type": "code-example_explanation", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12816"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nguestbook-redis-replica-2q2yf 1/1 Running 0 3m\nguestbook-redis-replica-qgazl 1/1 Running 0 3m", "completion": "This none code Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "none", "type": "code-example_explanation", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12817"} {"prompt": "Explain this none code:\n\npod/my-nginx-2035384211-j5fhi labeled\npod/my-nginx-2035384211-u2c7e labeled\npod/my-nginx-2035384211-u3t6x labeled", "completion": "This none code Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". To see the pods you labeled, run:", "language": "none", "type": "code-example_explanation", "context": "Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". 
To see the pods you labeled, run:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12818"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe", "completion": "my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "language": "none", "type": "code-example", "context": "Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12819"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe\nmy-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "completion": "This none code Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "language": "none", "type": "code-example_explanation", "context": "Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12820"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. 
Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12821"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12822"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12823"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12824"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12825"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). 
For example:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12826"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This bash code Labels and Selectors | or using set-based requirements:", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12827"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This bash code Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12828"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This bash code Labels and Selectors | or restricting negative matching via notin operator:", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12829"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This bash code Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12830"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "text", "type": "code-example", "context": "Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12831"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This text code Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12832"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "text", "type": "code-example", "context": "Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12833"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This text code Labels and Selectors | and", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12834"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: 
replica", "language": "text", "type": "code-example", "context": "Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_completion_12835"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This text code Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "text", "type": "code-example_explanation", "context": "Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12836"} {"prompt": "Explain this bash code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This bash code Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "bash", "type": "kubectl-command_explanation", "context": "Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/", "id": "k8s_00495_code_explain_12837"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo", "completion": "spec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12838"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo\nspec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "completion": "This yaml code Configure Quotas for API Objects | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12839"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace=quota-object-example", "completion": "This bash code Configure Quotas for API Objects | View detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12840"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota object-quota-demo --namespace=quota-object-example --output=yaml", "completion": "This bash code Configure Quotas for API Objects | 
The output shows that in the quota-object-example namespace, there can be at most one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services of type NodePort.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | The output shows that in the quota-object-example namespace, there can be at most one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services of type NodePort.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12841"} {"prompt": "Complete this text code:\n\nstatus:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"", "completion": " services.nodeports: \"0\"\n used:\n persistentvolumeclaims: \"0\"\n services.loadbalancers: \"0\"\n services.nodeports: \"0\"", "language": "text", "type": "code-example", "context": "Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12842"} {"prompt": "Explain this text code:\n\nstatus:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"\n used:\n persistentvolumeclaims: \"0\"\n services.loadbalancers: \"0\"\n services.nodeports: \"0\"", "completion": "This text code Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "language": "text", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12843"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:", "completion": " storageClassName: manual\n 
accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12844"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12845"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace=quota-object-example", "completion": "This bash code Configure Quotas for API Objects | Verify that the PersistentVolumeClaim was created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | Verify that the PersistentVolumeClaim was created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12846"} {"prompt": "Explain this bash code:\n\nkubectl get persistentvolumeclaims --namespace=quota-object-example", "completion": "This bash code Configure Quotas for API Objects | The output shows that the PersistentVolumeClaim exists and has status Pending:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | The output shows that the PersistentVolumeClaim exists and has status Pending:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12847"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12848"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "completion": "This yaml code Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12849"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc-2.yaml --namespace=quota-object-example", "completion": "This bash code Configure Quotas for API Objects | The output shows that the second PersistentVolumeClaim was not created, because it would have exceeded the quota for the namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | The output shows that the second PersistentVolumeClaim was not created, because it would have exceeded the quota for the namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12850"} {"prompt": "Explain 
this text code:\n\npersistentvolumeclaims \"pvc-quota-demo-2\" is forbidden:\nexceeded quota: object-quota-demo, requested: persistentvolumeclaims=1,\nused: persistentvolumeclaims=1, limited: persistentvolumeclaims=1", "completion": "This text code Configure Quotas for API Objects | These are the strings used to identify API resources that can be constrained by quotas:", "language": "text", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | These are the strings used to identify API resources that can be constrained by quotas:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12851"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo", "completion": "spec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12852"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo\nspec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "completion": "This yaml code Configure Quotas for API Objects | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12853"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace=quota-object-example", "completion": "This shell code Configure Quotas for API Objects | View detailed information about 
the ResourceQuota:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12854"} {"prompt": "Explain this shell code:\n\nkubectl get resourcequota object-quota-demo --namespace=quota-object-example --output=yaml", "completion": "This shell code Configure Quotas for API Objects | The output shows that in the quota-object-example namespace, there can be at most one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services of type NodePort.", "language": "shell", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | The output shows that in the quota-object-example namespace, there can be at most one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services of type NodePort.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12855"} {"prompt": "Complete this yaml code:\n\nstatus:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"", "completion": " services.nodeports: \"0\"\n used:\n persistentvolumeclaims: \"0\"\n services.loadbalancers: \"0\"\n services.nodeports: \"0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12856"} {"prompt": "Explain this yaml code:\n\nstatus:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"\n used:\n persistentvolumeclaims: \"0\"\n services.loadbalancers: \"0\"\n services.nodeports: \"0\"", "completion": "This yaml code Configure Quotas for API Objects | Here is the configuration 
file for a PersistentVolumeClaim object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12857"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12858"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12859"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace=quota-object-example", "completion": "This shell code Configure Quotas for API Objects | Verify that the PersistentVolumeClaim was created:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | Verify that the PersistentVolumeClaim was created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12860"} {"prompt": 
"Explain this shell code:\n\nkubectl get persistentvolumeclaims --namespace=quota-object-example", "completion": "This shell code Configure Quotas for API Objects | The output shows that the PersistentVolumeClaim exists and has status Pending:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | The output shows that the PersistentVolumeClaim exists and has status Pending:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12861"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12862"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "completion": "This yaml code Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12863"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc-2.yaml --namespace=quota-object-example", "completion": "This shell code Configure Quotas for API Objects | The output shows that the second PersistentVolumeClaim was not created, 
because it would have exceeded the quota for the namespace.", "language": "shell", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | The output shows that the second PersistentVolumeClaim was not created, because it would have exceeded the quota for the namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12864"} {"prompt": "Explain this text code:\n\npersistentvolumeclaims \"pvc-quota-demo-2\" is forbidden:\nexceeded quota: object-quota-demo, requested: persistentvolumeclaims=1,\nused: persistentvolumeclaims=1, limited: persistentvolumeclaims=1", "completion": "This text code Configure Quotas for API Objects | These are the strings used to identify API resources that can be constrained by quotas:", "language": "text", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | These are the strings used to identify API resources that can be constrained by quotas:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12865"} {"prompt": "Complete this yaml code:\n\nadmin/resource/quota-objects.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo", "completion": "spec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12866"} {"prompt": "Explain this yaml code:\n\nadmin/resource/quota-objects.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo\nspec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "completion": "This yaml code Configure Quotas for API Objects | Create the 
ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12867"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo", "completion": "spec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12868"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-quota-demo\nspec:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"", "completion": "This yaml code Configure Quotas for API Objects | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12869"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace=quota-object-example", "completion": "This bash code Configure Quotas for API Objects | View detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12870"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota object-quota-demo --namespace=quota-object-example 
--output=yaml", "completion": "This bash code Configure Quotas for API Objects | The output shows that in the quota-object-example namespace, there can be at most one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services of type NodePort.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | The output shows that in the quota-object-example namespace, there can be at most one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services of type NodePort.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12871"} {"prompt": "Complete this text code:\n\nstatus:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"", "completion": " services.nodeports: \"0\"\n used:\n persistentvolumeclaims: \"0\"\n services.loadbalancers: \"0\"\n services.nodeports: \"0\"", "language": "text", "type": "code-example", "context": "Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12872"} {"prompt": "Explain this text code:\n\nstatus:\n hard:\n persistentvolumeclaims: \"1\"\n services.loadbalancers: \"2\"\n services.nodeports: \"0\"\n used:\n persistentvolumeclaims: \"0\"\n services.loadbalancers: \"0\"\n services.nodeports: \"0\"", "completion": "This text code Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "language": "text", "type": "code-example_explanation", "context": "Configure Quotas for API Objects | Here is the configuration file for a PersistentVolumeClaim object:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12873"} {"prompt": "Complete this yaml 
code:\n\nadmin/resource/quota-objects-pvc.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12874"} {"prompt": "Explain this yaml code:\n\nadmin/resource/quota-objects-pvc.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12875"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12876"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 3Gi", "completion": "This yaml code Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Create the PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12877"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace=quota-object-example", "completion": "This bash code Configure Quotas for API Objects | Verify that the PersistentVolumeClaim was created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | Verify that the PersistentVolumeClaim was created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12878"} {"prompt": "Explain this bash code:\n\nkubectl get persistentvolumeclaims --namespace=quota-object-example", "completion": "This bash code Configure Quotas for API Objects | The output shows that the PersistentVolumeClaim exists and has status Pending:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | The output shows that the PersistentVolumeClaim exists and has status Pending:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12879"} {"prompt": "Complete this yaml code:\n\nadmin/resource/quota-objects-pvc-2.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12880"} {"prompt": "Explain this yaml 
code:\n\nadmin/resource/quota-objects-pvc-2.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "completion": "This yaml code Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12881"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:", "completion": " storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_completion_12882"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: pvc-quota-demo-2\nspec:\n storageClassName: manual\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 4Gi", "completion": "This yaml code Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quotas for API Objects | Attempt to create the second PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12883"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc-2.yaml --namespace=quota-object-example", "completion": "This bash code Configure Quotas 
for API Objects | The output shows that the second PersistentVolumeClaim was not created, because it would have exceeded the quota for the namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quotas for API Objects | The output shows that the second PersistentVolumeClaim was not created, because it would have exceeded the quota for the namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/quota-api-object/", "id": "k8s_00498_code_explain_12884"} {"prompt": "Explain this bash code:\n\nkubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh", "completion": "This bash code Debug Services | If you already have a running Pod that you prefer to use, you can run a command in it using:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | If you already have a running Pod that you prefer to use, you can run a command in it using:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12885"} {"prompt": "Explain this bash code:\n\nkubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>", "completion": "This bash code Debug Services | For the purposes of this walk-through, let's run some Pods. Since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | For the purposes of this walk-through, let's run some Pods. 
Since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12886"} {"prompt": "Explain this bash code:\n\nkubectl create deployment hostnames --image=registry.k8s.io/serve_hostname", "completion": "This bash code Debug Services | kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12887"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n matchLabels:", "completion": " app: hostnames\n replicas: 3\n template:\n metadata:\n labels:\n app: hostnames\n spec:\n containers:\n - name: hostnames\n image: registry.k8s.io/serve_hostname", "language": "yaml", "type": "deployment-definition", "context": "Debug Services | The label \"app\" is automatically set by kubectl create deployment to the name of the Deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12888"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n matchLabels:\n app: hostnames\n replicas: 3\n template:\n metadata:\n labels:\n app: hostnames\n spec:\n containers:\n - name: hostnames\n image: registry.k8s.io/serve_hostname", "completion": "This yaml code Debug Services | The label \"app\" is automatically set by kubectl create 
deployment to the name of the Deployment.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Debug Services | The label \"app\" is automatically set by kubectl create deployment to the name of the Deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12889"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 2m", "completion": "hostnames-632524106-ly40y 1/1 Running 0 2m\nhostnames-632524106-tlaok 1/1 Running 0 2m", "language": "text", "type": "code-example", "context": "Debug Services | You can also confirm that your Pods are serving. You can get the list of Pod IP addresses and test them directly.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12890"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 2m\nhostnames-632524106-ly40y 1/1 Running 0 2m\nhostnames-632524106-tlaok 1/1 Running 0 2m", "completion": "This text code Debug Services | You can also confirm that your Pods are serving. You can get the list of Pod IP addresses and test them directly.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | You can also confirm that your Pods are serving. 
You can get the list of Pod IP addresses and test them directly.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12891"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l app=hostnames \\\n -o go-template='{{range .items}}{{.status.podIP}}{{\"\\n\"}}{{end}}'", "completion": "This bash code Debug Services | The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening on", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening on", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12892"} {"prompt": "Explain this text code:\n\nfor ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do\n wget -qO- $ep\ndone", "completion": "This text code Debug Services | This should produce something like:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12893"} {"prompt": "Explain this text code:\n\nhostnames-632524106-bbpiw\nhostnames-632524106-ly40y\nhostnames-632524106-tlaok", "completion": "This text code Debug Services | If you are not getting the responses you expect at this point, your Pods might not be healthy or might not be listening on the port you think they are. 
You might find kubectl logs to be useful for seeing what is happening", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you are not getting the responses you expect at this point, your Pods might not be healthy or might not be listening on the port you think they are. You might find kubectl logs to be useful for seeing what is happening", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12894"} {"prompt": "Explain this text code:\n\nResolving hostnames (hostnames)... failed: Name or service not known.\nwget: unable to resolve host address 'hostnames'", "completion": "This text code Debug Services | The first thing to check is whether that Service actually exists:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | The first thing to check is whether that Service actually exists:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12895"} {"prompt": "Explain this text code:\n\nNo resources found.\nError from server (NotFound): services \"hostnames\" not found", "completion": "This text code Debug Services | Let's create the Service. As before, this is for the walk-through - you can use your own Service's details here.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | Let's create the Service. 
As before, this is for the walk-through - you can use your own Service's details here.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12896"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment hostnames --port=80 --target-port=9376", "completion": "This bash code Debug Services | And read it back:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | And read it back:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12897"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nhostnames ClusterIP 10.0.1.175 <none> 80/TCP 5s", "completion": "This text code Debug Services | Now you know that the Service exists.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | Now you know that the Service exists.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12898"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:", "completion": " selector:\n app: hostnames\n ports:\n - name: default\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. 
For many real-world Services, these values might be the same.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12899"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n app: hostnames\n ports:\n - name: default\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. For many real-world Services, these values might be the same.", "language": "yaml", "type": "service-definition_explanation", "context": "Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. For many real-world Services, these values might be the same.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12900"} {"prompt": "Complete this text code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n", "completion": "Name: hostnames\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "language": "text", "type": "code-example", "context": "Debug Services | If this fails, perhaps your Pod and Service are in different Namespaces, try a namespace-qualified name (again, from within a Pod):", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12901"} {"prompt": "Explain this text code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: hostnames\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "completion": "This text code Debug Services | If this fails, perhaps your Pod and Service are in different Namespaces, try a namespace-qualified name (again, from within a Pod):", "language": "text", "type": 
"code-example_explanation", "context": "Debug Services | If this fails, perhaps your Pod and Service are in different Namespaces, try a namespace-qualified name (again, from within a Pod):", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12902"} {"prompt": "Complete this text code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n", "completion": "Name: hostnames.default\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "language": "text", "type": "code-example", "context": "Debug Services | If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. If this still fails, try a fully-qualified name:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12903"} {"prompt": "Explain this text code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: hostnames.default\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "completion": "This text code Debug Services | If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. If this still fails, try a fully-qualified name:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. 
If this still fails, try a fully-qualified name:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12904"} {"prompt": "Complete this text code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n", "completion": "Name: hostnames.default.svc.cluster.local\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "language": "text", "type": "code-example", "context": "Debug Services | Note the suffix here: \"default.svc.cluster.local\". The \"default\" is the Namespace you're operating in. The \"svc\" denotes that this is a Service. The \"cluster.local\" is your cluster domain, which COULD be different in your own cluster.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12905"} {"prompt": "Explain this text code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: hostnames.default.svc.cluster.local\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "completion": "This text code Debug Services | Note the suffix here: \"default.svc.cluster.local\". The \"default\" is the Namespace you're operating in. The \"svc\" denotes that this is a Service. The \"cluster.local\" is your cluster domain, which COULD be different in your own cluster.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | Note the suffix here: \"default.svc.cluster.local\". The \"default\" is the Namespace you're operating in. The \"svc\" denotes that this is a Service. The \"cluster.local\" is your cluster domain, which COULD be different in your own cluster.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12906"} {"prompt": "Explain this text code:\n\nnslookup hostnames.default.svc.cluster.local 10.0.0.10", "completion": "This text code Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. 
From within a Pod:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12907"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53", "completion": "\nName: hostnames.default.svc.cluster.local\nAddress: 10.0.1.175", "language": "text", "type": "code-example", "context": "Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12908"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53\n\nName: hostnames.default.svc.cluster.local\nAddress: 10.0.1.175", "completion": "This text code Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12909"} {"prompt": "Explain this text code:\n\nnameserver 10.0.0.10\nsearch default.svc.cluster.local svc.cluster.local cluster.local example.com\noptions ndots:5", "completion": "This text code Debug Services | The nameserver line must indicate your cluster's DNS Service. 
This is passed into kubelet with the --cluster-dns flag.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | The nameserver line must indicate your cluster's DNS Service. This is passed into kubelet with the --cluster-dns flag.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12910"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local", "completion": "\nName: kubernetes.default\nAddress 1: 10.0.0.1 kubernetes.default.svc.cluster.local", "language": "text", "type": "code-example", "context": "Debug Services | If this fails, please see the kube-proxy section of this document, or even go back to the top of this document and start over, but instead of debugging your own Service, debug the DNS Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12911"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: kubernetes.default\nAddress 1: 10.0.0.1 kubernetes.default.svc.cluster.local", "completion": "This text code Debug Services | If this fails, please see the kube-proxy section of this document, or even go back to the top of this document and start over, but instead of debugging your own Service, debug the DNS Service.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If this fails, please see the kube-proxy section of this document, or even go back to the top of this document and start over, but instead of debugging your own Service, debug the DNS Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12912"} {"prompt": "Explain this text code:\n\nfor i in $(seq 1 3); do \n wget -qO- 10.0.1.175:80\ndone", "completion": "This text code Debug Services 
| This should produce something like:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12913"} {"prompt": "Explain this text code:\n\nhostnames-632524106-bbpiw\nhostnames-632524106-ly40y\nhostnames-632524106-tlaok", "completion": "This text code Debug Services | If your Service is working, you should get correct responses. If not, there are a number of things that could be going wrong. Read on.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If your Service is working, you should get correct responses. If not, there are a number of things that could be going wrong. Read on.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12914"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"hostnames\",\n \"namespace\": \"default\",\n \"uid\": \"428c8b6c-24bc-11e5-936d-42010af0a9bc\",\n \"resourceVersion\": \"347189\",\n \"creationTimestamp\": \"2015-07-07T15:24:29Z\",\n \"labels\": {\n \"app\": \"hostnames\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"default\",", "completion": " \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 9376,\n \"nodePort\": 0\n }\n ],\n \"selector\": {\n \"app\": \"hostnames\"\n },\n \"clusterIP\": \"10.0.1.175\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n}", "language": "json", "type": "api-response", "context": "Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. 
Now let's check that the Pods you ran are actually being selected by the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12915"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"hostnames\",\n \"namespace\": \"default\",\n \"uid\": \"428c8b6c-24bc-11e5-936d-42010af0a9bc\",\n \"resourceVersion\": \"347189\",\n \"creationTimestamp\": \"2015-07-07T15:24:29Z\",\n \"labels\": {\n \"app\": \"hostnames\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"default\",\n \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 9376,\n \"nodePort\": 0\n }\n ],\n \"selector\": {\n \"app\": \"hostnames\"\n },\n \"clusterIP\": \"10.0.1.175\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n}", "completion": "This json code Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. Now let's check that the Pods you ran are actually being selected by the Service.", "language": "json", "type": "api-response_explanation", "context": "Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. 
Now let's check that the Pods you ran are actually being selected by the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12916"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 1h", "completion": "hostnames-632524106-ly40y 1/1 Running 0 1h\nhostnames-632524106-tlaok 1/1 Running 0 1h", "language": "text", "type": "code-example", "context": "Debug Services | The -l app=hostnames argument is a label selector configured on the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12917"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 1h\nhostnames-632524106-ly40y 1/1 Running 0 1h\nhostnames-632524106-tlaok 1/1 Running 0 1h", "completion": "This text code Debug Services | The -l app=hostnames argument is a label selector configured on the Service.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | The -l app=hostnames argument is a label selector configured on the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12918"} {"prompt": "Complete this bash code:\n\nkubectl get endpointslices -l k8s.io/service-name=hostnames\n", "completion": "NAME ADDRESSTYPE PORTS ENDPOINTS\nhostnames-ytpni IPv4 9376 10.244.0.5,10.244.0.6,10.244.0.7", "language": "bash", "type": "kubectl-command", "context": "Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. 
If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually selects for app=hostnames on your Pods.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12919"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l k8s.io/service-name=hostnames\n\nNAME ADDRESSTYPE PORTS ENDPOINTS\nhostnames-ytpni IPv4 9376 10.244.0.5,10.244.0.6,10.244.0.7", "completion": "This bash code Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually selects for app=hostnames on your Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually selects for app=hostnames on your Pods.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12920"} {"prompt": "Explain this text code:\n\nfor ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do\n wget -qO- $ep\ndone", "completion": "This text code Debug Services | This should produce something like:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12921"} {"prompt": "Explain this text code:\n\nhostnames-632524106-bbpiw\nhostnames-632524106-ly40y\nhostnames-632524106-tlaok", "completion": "This text code Debug Services | You expect each Pod in the endpoints list to return its own hostname. 
If this is not what happens (or whatever the correct behavior is for your own Pods), you should investigate what's happening there", "language": "text", "type": "code-example_explanation", "context": "Debug Services | You expect each Pod in the endpoints list to return its own hostname. If this is not what happens (or whatever the correct behavior is for your own Pods), you should investigate what's happening there", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12922"} {"prompt": "Explain this text code:\n\nroot 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2", "completion": "This text code Debug Services | Next, confirm that it is not failing something obvious, like contacting the master. To do this, you'll have to look at the logs. Accessing the logs depends on your Node OS. On some OSes it is a file,", "language": "text", "type": "code-example_explanation", "context": "Debug Services | Next, confirm that it is not failing something obvious, like contacting the master. To do this, you'll have to look at the logs. Accessing the logs depends on your Node OS. 
On some OSes it is a file,", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12923"} {"prompt": "Complete this text code:\n\nI1027 22:14:53.995134 5063 server.go:200] Running in resource-only container \"/kube-proxy\"\nI1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.\nI1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns-tcp\" to [10.244.1.3:53]\nI1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns\" to [10.244.1.3:53]", "completion": "I1027 22:14:54.038209 5063 proxier.go:352] Setting endpoints for \"default/kubernetes:https\" to [10.240.0.2:443]\nI1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received from master\nI1027 22:14:54.040048 5063 proxier.go:294] Adding new service \"default/kubernetes:https\" at 10.0.0.1:443/TCP\nI1027 22:14:54.040154 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns\" at 10.0.0.10:53/UDP\nI1027 22:14:54.040223 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns-tcp\" at 10.0.0.10:53/TCP", "language": "text", "type": "code-example", "context": "Debug Services | If you see error messages about not being able to contact the master, you should double-check your Node configuration and installation steps.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12924"} {"prompt": "Explain this text code:\n\nI1027 22:14:53.995134 5063 server.go:200] Running in resource-only container \"/kube-proxy\"\nI1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.\nI1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns-tcp\" to [10.244.1.3:53]\nI1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns\" to [10.244.1.3:53]\nI1027 22:14:54.038209 5063 proxier.go:352] 
Setting endpoints for \"default/kubernetes:https\" to [10.240.0.2:443]\nI1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received from master\nI1027 22:14:54.040048 5063 proxier.go:294] Adding new service \"default/kubernetes:https\" at 10.0.0.1:443/TCP\nI1027 22:14:54.040154 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns\" at 10.0.0.10:53/UDP\nI1027 22:14:54.040223 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns-tcp\" at 10.0.0.10:53/TCP", "completion": "This text code Debug Services | If you see error messages about not being able to contact the master, you should double-check your Node configuration and installation steps.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you see error messages about not being able to contact the master, you should double-check your Node configuration and installation steps.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12925"} {"prompt": "Complete this text code:\n\n-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.3.6:9376\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.1.7:9376\n-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000", "completion": "-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.2.3:9376\n-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment 
\"default/hostnames: cluster IP\" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -j KUBE-SEP-57KPRZ3JQVENLNBR", "language": "text", "type": "code-example", "context": "Debug Services | For each port of each Service, there should be 1 rule in KUBE-SERVICES and one KUBE-SVC-<hash> chain. For each Pod endpoint, there should be a small number of rules in that KUBE-SVC-<hash> and one KUB", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12926"} {"prompt": "Explain this text code:\n\n-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.3.6:9376\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.1.7:9376\n-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.2.3:9376\n-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment \"default/hostnames: cluster IP\" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 
0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -j KUBE-SEP-57KPRZ3JQVENLNBR", "completion": "This text code Debug Services | For each port of each Service, there should be 1 rule in KUBE-SERVICES and one KUBE-SVC-<hash> chain. For each Pod endpoint, there should be a small number of rules in that KUBE-SVC-<hash> and one KUB", "language": "text", "type": "code-example_explanation", "context": "Debug Services | For each port of each Service, there should be 1 rule in KUBE-SERVICES and one KUBE-SVC-<hash> chain. For each Pod endpoint, there should be a small number of rules in that KUBE-SVC-<hash> and one KUB", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12927"} {"prompt": "Complete this text code:\n\nProt LocalAddress:Port Scheduler Flags\n -> RemoteAddress:Port Forward Weight ActiveConn InActConn\n...\nTCP 10.0.1.175:80 rr", "completion": " -> 10.244.0.5:9376 Masq 1 0 0\n -> 10.244.0.6:9376 Masq 1 0 0\n -> 10.244.0.7:9376 Masq 1 0 0\n...", "language": "text", "type": "code-example", "context": "Debug Services | For each port of each Service, plus any NodePorts, external IPs, and load-balancer IPs, kube-proxy will create a virtual server. For each Pod endpoint, it will create corresponding real servers. 
In th", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12928"} {"prompt": "Explain this text code:\n\nProt LocalAddress:Port Scheduler Flags\n -> RemoteAddress:Port Forward Weight ActiveConn InActConn\n...\nTCP 10.0.1.175:80 rr\n -> 10.244.0.5:9376 Masq 1 0 0\n -> 10.244.0.6:9376 Masq 1 0 0\n -> 10.244.0.7:9376 Masq 1 0 0\n...", "completion": "This text code Debug Services | For each port of each Service, plus any NodePorts, external IPs, and load-balancer IPs, kube-proxy will create a virtual server. For each Pod endpoint, it will create corresponding real servers. In th", "language": "text", "type": "code-example_explanation", "context": "Debug Services | For each port of each Service, plus any NodePorts, external IPs, and load-balancer IPs, kube-proxy will create a virtual server. For each Pod endpoint, it will create corresponding real servers. In th", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12929"} {"prompt": "Explain this text code:\n\nSetting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]", "completion": "This text code Debug Services | If you don't see those, try restarting kube-proxy with the -v flag set to 4, and then look at the logs again.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you don't see those, try restarting kube-proxy with the -v flag set to 4, and then look at the logs again.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12930"} {"prompt": "Explain this text code:\n\nroot 3392 1.1 0.8 186804 65208 ? 
Sl 00:51 11:11 /usr/local/bin/kubelet --enable-debugging-handlers=true --config=/etc/kubernetes/manifests --allow-privileged=True --v=4 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --hairpin-mode=promiscuous-bridge --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --babysit-daemons=true --max-pods=110 --serialize-image-pulls=false --outofdisk-transition-frequency=0", "completion": "This text code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misbehaving", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misbehaving", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12931"} {"prompt": "Explain this text code:\n\nI0629 00:51:43.648698 3252 kubelet.go:380] Hairpin mode set to \"promiscuous-bridge\"", "completion": "This text code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misbehaving", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving.
You have DNS working, and kube-proxy does not seem to be misbehaving", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12932"} {"prompt": "Explain this text code:\n\nfor intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done", "completion": "This text code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misbehaving", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misbehaving", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12933"} {"prompt": "Explain this text code:\n\nUP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1", "completion": "This text code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misbehaving", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving.
You have DNS working, and kube-proxy does not seem to be misbehaving", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12934"} {"prompt": "Explain this none code:\n\nkubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh", "completion": "This none code Debug Services | If you already have a running Pod that you prefer to use, you can run a command in it using:", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If you already have a running Pod that you prefer to use, you can run a command in it using:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12935"} {"prompt": "Explain this shell code:\n\nkubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>", "completion": "This shell code Debug Services | For the purposes of this walk-through, let's run some Pods. Since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point.", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | For the purposes of this walk-through, let's run some Pods.
Since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12936"} {"prompt": "Explain this shell code:\n\nkubectl create deployment hostnames --image=registry.k8s.io/serve_hostname", "completion": "This shell code Debug Services | kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12937"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n matchLabels:", "completion": " app: hostnames\n replicas: 3\n template:\n metadata:\n labels:\n app: hostnames\n spec:\n containers:\n - name: hostnames\n image: registry.k8s.io/serve_hostname", "language": "yaml", "type": "deployment-definition", "context": "Debug Services | The label \"app\" is automatically set by kubectl create deployment to the name of the Deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12938"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n matchLabels:\n app: hostnames\n replicas: 3\n template:\n metadata:\n labels:\n app: hostnames\n spec:\n containers:\n - name: hostnames\n image: registry.k8s.io/serve_hostname", "completion": "This yaml code Debug Services | The label \"app\" is automatically set by kubectl create 
deployment to the name of the Deployment.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Debug Services | The label \"app\" is automatically set by kubectl create deployment to the name of the Deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12939"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 2m", "completion": "hostnames-632524106-ly40y 1/1 Running 0 2m\nhostnames-632524106-tlaok 1/1 Running 0 2m", "language": "none", "type": "code-example", "context": "Debug Services | You can also confirm that your Pods are serving. You can get the list of Pod IP addresses and test them directly.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12940"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 2m\nhostnames-632524106-ly40y 1/1 Running 0 2m\nhostnames-632524106-tlaok 1/1 Running 0 2m", "completion": "This none code Debug Services | You can also confirm that your Pods are serving. You can get the list of Pod IP addresses and test them directly.", "language": "none", "type": "code-example_explanation", "context": "Debug Services | You can also confirm that your Pods are serving. 
You can get the list of Pod IP addresses and test them directly.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12941"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l app=hostnames \\\n -o go-template='{{range .items}}{{.status.podIP}}{{\"\\n\"}}{{end}}'", "completion": "This shell code Debug Services | The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening o", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening o", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12942"} {"prompt": "Explain this shell code:\n\nfor ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do\n wget -qO- $ep\ndone", "completion": "This shell code Debug Services | This should produce something like:", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12943"} {"prompt": "Explain this text code:\n\nhostnames-632524106-bbpiw\nhostnames-632524106-ly40y\nhostnames-632524106-tlaok", "completion": "This text code Debug Services | If you are not getting the responses you expect at this point, your Pods might not be healthy or might not be listening on the port you think they are. 
You might find kubectl logs to be useful for seeing what is happening", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you are not getting the responses you expect at this point, your Pods might not be healthy or might not be listening on the port you think they are. You might find kubectl logs to be useful for seeing what is happening", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12944"} {"prompt": "Explain this none code:\n\nResolving hostnames (hostnames)... failed: Name or service not known.\nwget: unable to resolve host address 'hostnames'", "completion": "This none code Debug Services | The first thing to check is whether that Service actually exists:", "language": "none", "type": "code-example_explanation", "context": "Debug Services | The first thing to check is whether that Service actually exists:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12945"} {"prompt": "Explain this none code:\n\nNo resources found.\nError from server (NotFound): services \"hostnames\" not found", "completion": "This none code Debug Services | Let's create the Service. As before, this is for the walk-through - you can use your own Service's details here.", "language": "none", "type": "code-example_explanation", "context": "Debug Services | Let's create the Service.
As before, this is for the walk-through - you can use your own Service's details here.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12946"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment hostnames --port=80 --target-port=9376", "completion": "This shell code Debug Services | And read it back:", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | And read it back:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12947"} {"prompt": "Explain this none code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nhostnames ClusterIP 10.0.1.175 <none> 80/TCP 5s", "completion": "This none code Debug Services | Now you know that the Service exists.", "language": "none", "type": "code-example_explanation", "context": "Debug Services | Now you know that the Service exists.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12948"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:", "completion": " selector:\n app: hostnames\n ports:\n - name: default\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. 
For many real-world Services, these values might be the same.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12949"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n app: hostnames\n ports:\n - name: default\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. For many real-world Services, these values might be the same.", "language": "yaml", "type": "service-definition_explanation", "context": "Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. For many real-world Services, these values might be the same.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12950"} {"prompt": "Complete this none code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n", "completion": "Name: hostnames\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "language": "none", "type": "code-example", "context": "Debug Services | If this fails, perhaps your Pod and Service are in different Namespaces, try a namespace-qualified name (again, from within a Pod):", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12951"} {"prompt": "Explain this none code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: hostnames\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "completion": "This none code Debug Services | If this fails, perhaps your Pod and Service are in different Namespaces, try a namespace-qualified name (again, from within a Pod):", "language": "none", "type": 
"code-example_explanation", "context": "Debug Services | If this fails, perhaps your Pod and Service are in different Namespaces, try a namespace-qualified name (again, from within a Pod):", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12952"} {"prompt": "Complete this none code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n", "completion": "Name: hostnames.default\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "language": "none", "type": "code-example", "context": "Debug Services | If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. If this still fails, try a fully-qualified name:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12953"} {"prompt": "Explain this none code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: hostnames.default\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "completion": "This none code Debug Services | If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. If this still fails, try a fully-qualified name:", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. 
If this still fails, try a fully-qualified name:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12954"} {"prompt": "Complete this none code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n", "completion": "Name: hostnames.default.svc.cluster.local\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "language": "none", "type": "code-example", "context": "Debug Services | Note the suffix here: \"default.svc.cluster.local\". The \"default\" is the Namespace you're operating in. The \"svc\" denotes that this is a Service. The \"cluster.local\" is your cluster domain, which COULD", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12955"} {"prompt": "Explain this none code:\n\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: hostnames.default.svc.cluster.local\nAddress 1: 10.0.1.175 hostnames.default.svc.cluster.local", "completion": "This none code Debug Services | Note the suffix here: \"default.svc.cluster.local\". The \"default\" is the Namespace you're operating in. The \"svc\" denotes that this is a Service. The \"cluster.local\" is your cluster domain, which COULD", "language": "none", "type": "code-example_explanation", "context": "Debug Services | Note the suffix here: \"default.svc.cluster.local\". The \"default\" is the Namespace you're operating in. The \"svc\" denotes that this is a Service. The \"cluster.local\" is your cluster domain, which COULD", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12956"} {"prompt": "Explain this shell code:\n\nnslookup hostnames.default.svc.cluster.local 10.0.0.10", "completion": "This shell code Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. 
From within a Pod:", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12957"} {"prompt": "Complete this none code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53", "completion": "\nName: hostnames.default.svc.cluster.local\nAddress: 10.0.1.175", "language": "none", "type": "code-example", "context": "Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12958"} {"prompt": "Explain this none code:\n\nServer: 10.0.0.10\nAddress: 10.0.0.10#53\n\nName: hostnames.default.svc.cluster.local\nAddress: 10.0.1.175", "completion": "This none code Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12959"} {"prompt": "Explain this text code:\n\nnameserver 10.0.0.10\nsearch default.svc.cluster.local svc.cluster.local cluster.local example.com\noptions ndots:5", "completion": "This text code Debug Services | The nameserver line must indicate your cluster's DNS Service. 
This is passed into kubelet with the --cluster-dns flag.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | The nameserver line must indicate your cluster's DNS Service. This is passed into kubelet with the --cluster-dns flag.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12960"} {"prompt": "Complete this none code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local", "completion": "\nName: kubernetes.default\nAddress 1: 10.0.0.1 kubernetes.default.svc.cluster.local", "language": "none", "type": "code-example", "context": "Debug Services | If this fails, please see the kube-proxy section of this document, or even go back to the top of this document and start over, but instead of debugging your own Service, debug the DNS Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12961"} {"prompt": "Explain this none code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: kubernetes.default\nAddress 1: 10.0.0.1 kubernetes.default.svc.cluster.local", "completion": "This none code Debug Services | If this fails, please see the kube-proxy section of this document, or even go back to the top of this document and start over, but instead of debugging your own Service, debug the DNS Service.", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If this fails, please see the kube-proxy section of this document, or even go back to the top of this document and start over, but instead of debugging your own Service, debug the DNS Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12962"} {"prompt": "Explain this shell code:\n\nfor i in $(seq 1 3); do \n wget -qO- 10.0.1.175:80\ndone", "completion": "This shell code Debug 
Services | This should produce something like:", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12963"} {"prompt": "Explain this text code:\n\nhostnames-632524106-bbpiw\nhostnames-632524106-ly40y\nhostnames-632524106-tlaok", "completion": "This text code Debug Services | If your Service is working, you should get correct responses. If not, there are a number of things that could be going wrong. Read on.", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If your Service is working, you should get correct responses. If not, there are a number of things that could be going wrong. Read on.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12964"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"hostnames\",\n \"namespace\": \"default\",\n \"uid\": \"428c8b6c-24bc-11e5-936d-42010af0a9bc\",\n \"resourceVersion\": \"347189\",\n \"creationTimestamp\": \"2015-07-07T15:24:29Z\",\n \"labels\": {\n \"app\": \"hostnames\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"default\",", "completion": " \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 9376,\n \"nodePort\": 0\n }\n ],\n \"selector\": {\n \"app\": \"hostnames\"\n },\n \"clusterIP\": \"10.0.1.175\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n}", "language": "json", "type": "api-response", "context": "Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. 
Now let's check that the Pods you ran are actually being selected by the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12965"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"hostnames\",\n \"namespace\": \"default\",\n \"uid\": \"428c8b6c-24bc-11e5-936d-42010af0a9bc\",\n \"resourceVersion\": \"347189\",\n \"creationTimestamp\": \"2015-07-07T15:24:29Z\",\n \"labels\": {\n \"app\": \"hostnames\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"default\",\n \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 9376,\n \"nodePort\": 0\n }\n ],\n \"selector\": {\n \"app\": \"hostnames\"\n },\n \"clusterIP\": \"10.0.1.175\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n}", "completion": "This json code Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. Now let's check that the Pods you ran are actually being selected by the Service.", "language": "json", "type": "api-response_explanation", "context": "Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. 
Now let's check that the Pods you ran are actually being selected by the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12966"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 1h", "completion": "hostnames-632524106-ly40y 1/1 Running 0 1h\nhostnames-632524106-tlaok 1/1 Running 0 1h", "language": "none", "type": "code-example", "context": "Debug Services | The -l app=hostnames argument is a label selector configured on the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12967"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nhostnames-632524106-bbpiw 1/1 Running 0 1h\nhostnames-632524106-ly40y 1/1 Running 0 1h\nhostnames-632524106-tlaok 1/1 Running 0 1h", "completion": "This none code Debug Services | The -l app=hostnames argument is a label selector configured on the Service.", "language": "none", "type": "code-example_explanation", "context": "Debug Services | The -l app=hostnames argument is a label selector configured on the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12968"} {"prompt": "Complete this shell code:\n\nkubectl get endpointslices -l k8s.io/service-name=hostnames\n", "completion": "NAME ADDRESSTYPE PORTS ENDPOINTS\nhostnames-ytpni IPv4 9376 10.244.0.5,10.244.0.6,10.244.0.7", "language": "shell", "type": "code-example", "context": "Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. 
If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually s", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12969"} {"prompt": "Explain this shell code:\n\nkubectl get endpointslices -l k8s.io/service-name=hostnames\n\nNAME ADDRESSTYPE PORTS ENDPOINTS\nhostnames-ytpni IPv4 9376 10.244.0.5,10.244.0.6,10.244.0.7", "completion": "This shell code Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually s", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually s", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12970"} {"prompt": "Explain this shell code:\n\nfor ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do\n wget -qO- $ep\ndone", "completion": "This shell code Debug Services | This should produce something like:", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12971"} {"prompt": "Explain this text code:\n\nhostnames-632524106-bbpiw\nhostnames-632524106-ly40y\nhostnames-632524106-tlaok", "completion": "This text code Debug Services | You expect each Pod in the endpoints list to return its own hostname. 
If this is not what happens (or whatever the correct behavior is for your own Pods), you should investigate what's happening there", "language": "text", "type": "code-example_explanation", "context": "Debug Services | You expect each Pod in the endpoints list to return its own hostname. If this is not what happens (or whatever the correct behavior is for your own Pods), you should investigate what's happening there", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12972"} {"prompt": "Explain this none code:\n\nroot 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2", "completion": "This none code Debug Services | Next, confirm that it is not failing something obvious, like contacting the master. To do this, you'll have to look at the logs. Accessing the logs depends on your Node OS. On some OSes it is a file,", "language": "none", "type": "code-example_explanation", "context": "Debug Services | Next, confirm that it is not failing something obvious, like contacting the master. To do this, you'll have to look at the logs. Accessing the logs depends on your Node OS. 
On some OSes it is a file,", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12973"} {"prompt": "Complete this none code:\n\nI1027 22:14:53.995134 5063 server.go:200] Running in resource-only container \"/kube-proxy\"\nI1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.\nI1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns-tcp\" to [10.244.1.3:53]\nI1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns\" to [10.244.1.3:53]", "completion": "I1027 22:14:54.038209 5063 proxier.go:352] Setting endpoints for \"default/kubernetes:https\" to [10.240.0.2:443]\nI1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received from master\nI1027 22:14:54.040048 5063 proxier.go:294] Adding new service \"default/kubernetes:https\" at 10.0.0.1:443/TCP\nI1027 22:14:54.040154 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns\" at 10.0.0.10:53/UDP\nI1027 22:14:54.040223 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns-tcp\" at 10.0.0.10:53/TCP", "language": "none", "type": "code-example", "context": "Debug Services | If you see error messages about not being able to contact the master, you should double-check your Node configuration and installation steps.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12974"} {"prompt": "Explain this none code:\n\nI1027 22:14:53.995134 5063 server.go:200] Running in resource-only container \"/kube-proxy\"\nI1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.\nI1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns-tcp\" to [10.244.1.3:53]\nI1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for \"kube-system/kube-dns:dns\" to [10.244.1.3:53]\nI1027 22:14:54.038209 5063 proxier.go:352] 
Setting endpoints for \"default/kubernetes:https\" to [10.240.0.2:443]\nI1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received from master\nI1027 22:14:54.040048 5063 proxier.go:294] Adding new service \"default/kubernetes:https\" at 10.0.0.1:443/TCP\nI1027 22:14:54.040154 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns\" at 10.0.0.10:53/UDP\nI1027 22:14:54.040223 5063 proxier.go:294] Adding new service \"kube-system/kube-dns:dns-tcp\" at 10.0.0.10:53/TCP", "completion": "This none code Debug Services | If you see error messages about not being able to contact the master, you should double-check your Node configuration and installation steps.", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If you see error messages about not being able to contact the master, you should double-check your Node configuration and installation steps.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12975"} {"prompt": "Complete this none code:\n\n-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.3.6:9376\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.1.7:9376\n-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000", "completion": "-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.2.3:9376\n-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment 
\"default/hostnames: cluster IP\" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -j KUBE-SEP-57KPRZ3JQVENLNBR", "language": "none", "type": "code-example", "context": "Debug Services | For each port of each Service, there should be 1 rule in KUBE-SERVICES and one KUBE-SVC-<hash> chain. For each Pod endpoint, there should be a small number of rules in that KUBE-SVC-<hash> and one KUB", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12976"} {"prompt": "Explain this none code:\n\n-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.3.6:9376\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.1.7:9376\n-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment \"default/hostnames:\" -j MARK --set-xmark 0x00004000/0x00004000\n-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment \"default/hostnames:\" -m tcp -j DNAT --to-destination 10.244.2.3:9376\n-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment \"default/hostnames: cluster IP\" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 
0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3\n-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment \"default/hostnames:\" -j KUBE-SEP-57KPRZ3JQVENLNBR", "completion": "This none code Debug Services | For each port of each Service, there should be 1 rule in KUBE-SERVICES and one KUBE-SVC-<hash> chain. For each Pod endpoint, there should be a small number of rules in that KUBE-SVC-<hash> and one KUB", "language": "none", "type": "code-example_explanation", "context": "Debug Services | For each port of each Service, there should be 1 rule in KUBE-SERVICES and one KUBE-SVC-<hash> chain. For each Pod endpoint, there should be a small number of rules in that KUBE-SVC-<hash> and one KUB", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12977"} {"prompt": "Complete this none code:\n\nProt LocalAddress:Port Scheduler Flags\n -> RemoteAddress:Port Forward Weight ActiveConn InActConn\n...\nTCP 10.0.1.175:80 rr", "completion": " -> 10.244.0.5:9376 Masq 1 0 0\n -> 10.244.0.6:9376 Masq 1 0 0\n -> 10.244.0.7:9376 Masq 1 0 0\n...", "language": "none", "type": "code-example", "context": "Debug Services | For each port of each Service, plus any NodePorts, external IPs, and load-balancer IPs, kube-proxy will create a virtual server. For each Pod endpoint, it will create corresponding real servers. 
In th", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12978"} {"prompt": "Explain this none code:\n\nProt LocalAddress:Port Scheduler Flags\n -> RemoteAddress:Port Forward Weight ActiveConn InActConn\n...\nTCP 10.0.1.175:80 rr\n -> 10.244.0.5:9376 Masq 1 0 0\n -> 10.244.0.6:9376 Masq 1 0 0\n -> 10.244.0.7:9376 Masq 1 0 0\n...", "completion": "This none code Debug Services | For each port of each Service, plus any NodePorts, external IPs, and load-balancer IPs, kube-proxy will create a virtual server. For each Pod endpoint, it will create corresponding real servers. In th", "language": "none", "type": "code-example_explanation", "context": "Debug Services | For each port of each Service, plus any NodePorts, external IPs, and load-balancer IPs, kube-proxy will create a virtual server. For each Pod endpoint, it will create corresponding real servers. In th", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12979"} {"prompt": "Explain this none code:\n\nSetting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]", "completion": "This none code Debug Services | If you don't see those, try restarting kube-proxy with the -v flag set to 4, and then look at the logs again.", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If you don't see those, try restarting kube-proxy with the -v flag set to 4, and then look at the logs again.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12980"} {"prompt": "Explain this none code:\n\nroot 3392 1.1 0.8 186804 65208 ? 
Sl 00:51 11:11 /usr/local/bin/kubelet --enable-debugging-handlers=true --config=/etc/kubernetes/manifests --allow-privileged=True --v=4 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --hairpin-mode=promiscuous-bridge --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --babysit-daemons=true --max-pods=110 --serialize-image-pulls=false --outofdisk-transition-frequency=0", "completion": "This none code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12981"} {"prompt": "Explain this none code:\n\nI0629 00:51:43.648698 3252 kubelet.go:380] Hairpin mode set to \"promiscuous-bridge\"", "completion": "This none code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. 
You have DNS working, and kube-proxy does not seem to be misb", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12982"} {"prompt": "Explain this shell code:\n\nfor intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done", "completion": "This shell code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "language": "shell", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12983"} {"prompt": "Explain this none code:\n\nUP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1", "completion": "This none code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "language": "none", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12984"} {"prompt": "Explain this bash code:\n\nkubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>", "completion": "This bash code Debug Services | For the purposes of this walk-through, let's run some Pods. 
Since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | For the purposes of this walk-through, let's run some Pods. Since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12985"} {"prompt": "Explain this bash code:\n\nkubectl create deployment hostnames --image=registry.k8s.io/serve_hostname", "completion": "This bash code Debug Services | kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12986"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n matchLabels:", "completion": " app: hostnames\n replicas: 3\n template:\n metadata:\n labels:\n app: hostnames\n spec:\n containers:\n - name: hostnames\n image: registry.k8s.io/serve_hostname", "language": "yaml", "type": "deployment-definition", "context": "Debug Services | The label \"app\" is automatically set by kubectl create deployment to the name of the Deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12987"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n 
selector:\n matchLabels:\n app: hostnames\n replicas: 3\n template:\n metadata:\n labels:\n app: hostnames\n spec:\n containers:\n - name: hostnames\n image: registry.k8s.io/serve_hostname", "completion": "This yaml code Debug Services | The label \"app\" is automatically set by kubectl create deployment to the name of the Deployment.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Debug Services | The label \"app\" is automatically set by kubectl create deployment to the name of the Deployment.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12988"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l app=hostnames \\\n -o go-template='{{range .items}}{{.status.podIP}}{{\"\\n\"}}{{end}}'", "completion": "This bash code Debug Services | The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening o", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening o", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12989"} {"prompt": "Explain this text code:\n\nfor ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do\n wget -qO- $ep\ndone", "completion": "This text code Debug Services | This should produce something like:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12990"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment 
hostnames --port=80 --target-port=9376", "completion": "This bash code Debug Services | And read it back:", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | And read it back:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12991"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:", "completion": " selector:\n app: hostnames\n ports:\n - name: default\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. For many real-world Services, these values might be the same.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12992"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: hostnames\n name: hostnames\nspec:\n selector:\n app: hostnames\n ports:\n - name: default\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. For many real-world Services, these values might be the same.", "language": "yaml", "type": "service-definition_explanation", "context": "Debug Services | In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. 
For many real-world Services, these values might be the same.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12993"} {"prompt": "Explain this text code:\n\nnslookup hostnames.default.svc.cluster.local 10.0.0.10", "completion": "This text code Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12994"} {"prompt": "Explain this text code:\n\nfor i in $(seq 1 3); do \n wget -qO- 10.0.1.175:80\ndone", "completion": "This text code Debug Services | This should produce something like:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12995"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"hostnames\",\n \"namespace\": \"default\",\n \"uid\": \"428c8b6c-24bc-11e5-936d-42010af0a9bc\",\n \"resourceVersion\": \"347189\",\n \"creationTimestamp\": \"2015-07-07T15:24:29Z\",\n \"labels\": {\n \"app\": \"hostnames\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"default\",", "completion": " \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 9376,\n \"nodePort\": 0\n }\n ],\n \"selector\": {\n \"app\": \"hostnames\"\n },\n \"clusterIP\": \"10.0.1.175\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n 
\"loadBalancer\": {}\n }\n}", "language": "json", "type": "api-response", "context": "Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. Now let's check that the Pods you ran are actually being selected by the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12996"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"hostnames\",\n \"namespace\": \"default\",\n \"uid\": \"428c8b6c-24bc-11e5-936d-42010af0a9bc\",\n \"resourceVersion\": \"347189\",\n \"creationTimestamp\": \"2015-07-07T15:24:29Z\",\n \"labels\": {\n \"app\": \"hostnames\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"default\",\n \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 9376,\n \"nodePort\": 0\n }\n ],\n \"selector\": {\n \"app\": \"hostnames\"\n },\n \"clusterIP\": \"10.0.1.175\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n}", "completion": "This json code Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. Now let's check that the Pods you ran are actually being selected by the Service.", "language": "json", "type": "api-response_explanation", "context": "Debug Services | If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. 
Now let's check that the Pods you ran are actually being selected by the Service.", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12997"} {"prompt": "Complete this bash code:\n\nkubectl get endpointslices -l k8s.io/service-name=hostnames\n", "completion": "NAME ADDRESSTYPE PORTS ENDPOINTS\nhostnames-ytpni IPv4 9376 10.244.0.5,10.244.0.6,10.244.0.7", "language": "bash", "type": "kubectl-command", "context": "Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually s", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_completion_12998"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -l k8s.io/service-name=hostnames\n\nNAME ADDRESSTYPE PORTS ENDPOINTS\nhostnames-ytpni IPv4 9376 10.244.0.5,10.244.0.6,10.244.0.7", "completion": "This bash code Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually s", "language": "bash", "type": "kubectl-command_explanation", "context": "Debug Services | This confirms that the EndpointSlice controller has found the correct Pods for your Service. 
If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually s", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_12999"} {"prompt": "Explain this text code:\n\nfor ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do\n wget -qO- $ep\ndone", "completion": "This text code Debug Services | This should produce something like:", "language": "text", "type": "code-example_explanation", "context": "Debug Services | This should produce something like:", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_13000"} {"prompt": "Explain this text code:\n\nfor intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done", "completion": "This text code Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misb", "language": "text", "type": "code-example_explanation", "context": "Debug Services | If you get this far, something very strange is happening. Your Service is running, has EndpointSlices, and your Pods are actually serving. 
You have DNS working, and kube-proxy does not seem to be misb", "source": "https://kubernetes.io/docs/tasks/debug/debug-application/debug-service/", "id": "k8s_00499_code_explain_13001"} {"prompt": "Complete this yaml code:\n\n# kubeadm-config.yaml\nkind: ClusterConfiguration\napiVersion: kubeadm.k8s.io/v1beta4\nkubernetesVersion: v1.21.0", "completion": "---\nkind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\ncgroupDriver: systemd", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/", "id": "k8s_00500_code_completion_13002"} {"prompt": "Explain this yaml code:\n\n# kubeadm-config.yaml\nkind: ClusterConfiguration\napiVersion: kubeadm.k8s.io/v1beta4\nkubernetesVersion: v1.21.0\n---\nkind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\ncgroupDriver: systemd", "completion": "This yaml code Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/", "id": "k8s_00500_code_explain_13003"} {"prompt": "Complete this yaml code:\n\n# kubeadm-config.yaml\nkind: ClusterConfiguration\napiVersion: kubeadm.k8s.io/v1beta4\nkubernetesVersion: v1.21.0", "completion": "---\nkind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\ncgroupDriver: systemd", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/", "id": 
"k8s_00500_code_completion_13004"} {"prompt": "Explain this yaml code:\n\n# kubeadm-config.yaml\nkind: ClusterConfiguration\napiVersion: kubeadm.k8s.io/v1beta4\nkubernetesVersion: v1.21.0\n---\nkind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\ncgroupDriver: systemd", "completion": "This yaml code Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/", "id": "k8s_00500_code_explain_13005"} {"prompt": "Complete this yaml code:\n\n# kubeadm-config.yaml\nkind: ClusterConfiguration\napiVersion: kubeadm.k8s.io/v1beta4\nkubernetesVersion: v1.21.0", "completion": "---\nkind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\ncgroupDriver: systemd", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/", "id": "k8s_00500_code_completion_13006"} {"prompt": "Explain this yaml code:\n\n# kubeadm-config.yaml\nkind: ClusterConfiguration\napiVersion: kubeadm.k8s.io/v1beta4\nkubernetesVersion: v1.21.0\n---\nkind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\ncgroupDriver: systemd", "completion": "This yaml code Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring a cgroup driver | Such a configuration file can then be passed to the kubeadm command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/", "id": 
"k8s_00500_code_explain_13007"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13008"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "completion": "This json code Network Plugins | Experimental Feature", "language": "json", "type": "api-response_explanation", "context": "Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13009"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n 
\"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13010"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "completion": "This json code Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "language": "json", "type": "api-response_explanation", "context": "Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13011"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "language": "yaml", "type": "pod-definition", "context": "Network Plugins |", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13012"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "completion": "This yaml code Network Plugins |", "language": "yaml", "type": "pod-definition_explanation", "context": "Network Plugins |", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13013"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13014"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": 
\"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "completion": "This json code Network Plugins | Experimental Feature", "language": "json", "type": "api-response_explanation", "context": "Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13015"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13016"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "completion": "This json code Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "language": "json", "type": "api-response_explanation", "context": "Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13017"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "language": "yaml", "type": "pod-definition", "context": "Network Plugins |", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13018"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "completion": "This yaml code Network Plugins |", "language": "yaml", "type": "pod-definition_explanation", "context": "Network Plugins |", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13019"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13020"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": 
\"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "completion": "This json code Network Plugins | Experimental Feature", "language": "json", "type": "api-response_explanation", "context": "Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13021"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13022"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "completion": "This json code Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "language": "json", "type": "api-response_explanation", "context": "Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13023"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "language": "yaml", "type": "pod-definition", "context": "Network Plugins |", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_completion_13024"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "completion": "This yaml code Network Plugins |", "language": "yaml", "type": "pod-definition_explanation", "context": "Network Plugins |", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/", "id": "k8s_00501_code_explain_13025"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:\n - name: use-envfile", "completion": " image: nginx\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_completion_13026"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:\n - name: use-envfile\n image: nginx\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "completion": "This yaml code Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13027"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/envars-file-container.yaml", "completion": "This bash code Define Environment Variable Values Using An Init Container | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Environment Variable Values Using An Init Container | Verify that the container in the Pod is running:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13028"} {"prompt": "Explain this text code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This text code Define Environment Variable Values Using An Init Container | Check container logs for environment variables:", "language": "text", "type": "code-example_explanation", "context": "Define Environment Variable Values Using An Init Container | Check container logs for environment variables:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13029"} {"prompt": "Explain this bash code:\n\nkubectl logs dapi-test-pod -c use-envfile | grep DB_ADDRESS", "completion": "This bash code Define Environment Variable Values Using An Init Container | The output shows the values of selected environment variables:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Environment Variable Values Using An Init Container | The output shows the values of selected environment variables:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13030"} {"prompt": "Complete this text code:\n\n# comment\nVAR=VAL → VAL", "completion": "\nVAR=VAL # not a comment → VAL # not a comment", "language": "text", "type": "code-example", "context": "Define Environment Variable Values Using An Init Container | Line Continuation: A backslash (\\) at the end of a variable declaration line indicates the value continues on the next line. 
The lines are joined with a single space.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_completion_13031"} {"prompt": "Explain this text code:\n\n# comment\nVAR=VAL → VAL\n\nVAR=VAL # not a comment → VAL # not a comment", "completion": "This text code Define Environment Variable Values Using An Init Container | Line Continuation: A backslash (\\) at the end of a variable declaration line indicates the value continues on the next line. The lines are joined with a single space.", "language": "text", "type": "code-example_explanation", "context": "Define Environment Variable Values Using An Init Container | Line Continuation: A backslash (\\) at the end of a variable declaration line indicates the value continues on the next line. The lines are joined with a single space.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13032"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:\n - name: use-envfile", "completion": " image: nginx\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_completion_13033"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:\n - name: use-envfile\n image: nginx\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "completion": "This yaml code Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13034"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/envars-file-container.yaml", "completion": "This shell code Define Environment Variable Values Using An Init Container | Verify that the container in the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Define Environment Variable Values Using An Init Container | Verify that the container in the Pod is running:", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13035"} {"prompt": "Explain this shell code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This shell code Define Environment Variable Values Using An Init Container | Check container logs for environment variables:", "language": "shell", "type": "code-example_explanation", "context": "Define Environment Variable Values Using An Init Container | Check container logs for environment variables:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13036"} {"prompt": "Explain this shell code:\n\nkubectl logs dapi-test-pod -c use-envfile | grep DB_ADDRESS", "completion": "This shell code Define Environment Variable Values Using An Init Container | The output shows the values of selected environment variables:", "language": "shell", "type": "code-example_explanation", "context": "Define Environment Variable Values Using An Init Container | The output shows the values of selected environment variables:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13037"} {"prompt": "Complete this text code:\n\n# comment\nVAR=VAL → VAL", "completion": "\nVAR=VAL # not a comment → VAL # not a comment", "language": "text", "type": "code-example", "context": "Define Environment Variable Values Using An Init Container | Line Continuation: A backslash (\\) at the end of a variable declaration line indicates the value continues on the next line. 
The lines are joined with a single space.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_completion_13038"} {"prompt": "Explain this text code:\n\n# comment\nVAR=VAL → VAL\n\nVAR=VAL # not a comment → VAL # not a comment", "completion": "This text code Define Environment Variable Values Using An Init Container | Line Continuation: A backslash (\\) at the end of a variable declaration line indicates the value continues on the next line. The lines are joined with a single space.", "language": "text", "type": "code-example_explanation", "context": "Define Environment Variable Values Using An Init Container | Line Continuation: A backslash (\\) at the end of a variable declaration line indicates the value continues on the next line. The lines are joined with a single space.", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13039"} {"prompt": "Complete this yaml code:\n\npods/inject/envars-file-container.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:", "completion": " - name: use-envfile\n image: nginx\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment 
v", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_completion_13040"} {"prompt": "Explain this yaml code:\n\npods/inject/envars-file-container.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:\n - name: use-envfile\n image: nginx\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "completion": "This yaml code Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13041"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:\n - name: use-envfile", "completion": " image: nginx\n command: [ \"/bin/sh\", \"-c\", 
\"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_completion_13042"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: envfile-test-pod\nspec:\n initContainers:\n - name: setup-envfile\n image: nginx\n command: ['sh', '-c', 'echo \"DB_ADDRESS=address\\nREST_ENDPOINT=endpoint\" > /data/config.env']\n volumeMounts:\n - name: config\n mountPath: /data\n containers:\n - name: use-envfile\n image: nginx\n command: [ \"/bin/sh\", \"-c\", \"env\" ]\n env:\n - name: DB_ADDRESS\n valueFrom:\n fileKeyRef:\n path: config.env\n volumeName: config\n key: DB_ADDRESS\n optional: false\n restartPolicy: Never\n volumes:\n - name: config\n emptyDir: {}", "completion": "This yaml code Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "language": "yaml", "type": "pod-definition_explanation", "context": "Define Environment Variable Values Using An Init Container | In this manifest, you can see the initContainer mounts an emptyDir volume and writes environment variables to a file within it, and the regular containers reference both the file and the environment v", "source": 
"https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13043"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/inject/envars-file-container.yaml", "completion": "This bash code Define Environment Variable Values Using An Init Container | Verify that the container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Environment Variable Values Using An Init Container | Verify that the container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13044"} {"prompt": "Explain this text code:\n\n# If the new Pod isn't yet healthy, rerun this command a few times.\nkubectl get pods", "completion": "This text code Define Environment Variable Values Using An Init Container | Check container logs for environment variables:", "language": "text", "type": "code-example_explanation", "context": "Define Environment Variable Values Using An Init Container | Check container logs for environment variables:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13045"} {"prompt": "Explain this bash code:\n\nkubectl logs envfile-test-pod -c use-envfile | grep DB_ADDRESS", "completion": "This bash code Define Environment Variable Values Using An Init Container | The output shows the values of selected environment variables:", "language": "bash", "type": "kubectl-command_explanation", "context": "Define Environment Variable Values Using An Init Container | The output shows the values of selected environment variables:", "source": "https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-via-file/", "id": "k8s_00502_code_explain_13046"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " 
- name: nginx\n image: nginx", "language": "text", "type": "code-example", "context": "Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because no resource requests or limits are specified.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13047"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This text code Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because no resource requests or limits are specified.", "language": "text", "type": "code-example_explanation", "context": "Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because no resource requests or limits are specified.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13048"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "text", "type": "code-example", "context": "Control Topology Management Policies on a node | This pod runs in the Burstable QoS class because requests are less than limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13049"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This text code Control Topology Management Policies on a node | This pod runs in the Burstable QoS class because requests are less than limits.", "language": "text", "type": "code-example_explanation", "context": "Control Topology Management Policies on a node | This pod runs in the Burstable QoS class because requests are less than limits.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13050"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "language": "text", "type": "code-example", "context": "Control Topology Management Policies on a node | This pod with integer CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13051"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "completion": "This text code Control Topology Management Policies on a node | This pod with integer CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "language": "text", "type": "code-example_explanation", "context": "Control Topology Management Policies on a node | This pod with integer CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13052"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "language": "text", "type": "code-example", "context": "Control Topology Management Policies on a node | This pod with sharing CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13053"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "completion": "This text code Control Topology Management Policies on a node | This pod with sharing CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "language": "text", "type": "code-example_explanation", "context": "Control Topology Management Policies on a node | This pod with sharing CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13054"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n example.com/deviceA: \"1\"\n example.com/deviceB: \"1\"\n requests:\n example.com/deviceA: \"1\"\n example.com/deviceB: \"1\"", "language": "text", "type": "code-example", "context": "Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because there are no CPU and memory requests.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13055"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n example.com/deviceA: \"1\"\n example.com/deviceB: \"1\"\n requests:\n example.com/deviceA: \"1\"\n example.com/deviceB: \"1\"", "completion": "This text code Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because there are no CPU and memory requests.", "language": "text", "type": "code-example_explanation", "context": "Control Topology Management Policies on 
a node | This pod runs in the BestEffort QoS class because there are no CPU and memory requests.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13056"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because no resource requests or limits are specified.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13057"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This yaml code Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because no resource requests or limits are specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because no resource requests or limits are specified.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13058"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Control Topology Management Policies on a node | This pod runs in the Burstable QoS class because requests are less than limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13059"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Control Topology 
Management Policies on a node | This pod runs in the Burstable QoS class because requests are less than limits.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Control Topology Management Policies on a node | This pod runs in the Burstable QoS class because requests are less than limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13060"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Control Topology Management Policies on a node | This pod with integer CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13061"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"\n example.com/device: \"1\"", "completion": "This yaml code Control Topology Management Policies on a node | This pod with integer CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Control Topology Management Policies on a node | This pod with integer CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13062"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:", "completion": " memory: \"200Mi\"\n cpu: 
\"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Control Topology Management Policies on a node | This pod with sharing CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13063"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"\n requests:\n memory: \"200Mi\"\n cpu: \"300m\"\n example.com/device: \"1\"", "completion": "This yaml code Control Topology Management Policies on a node | This pod with sharing CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Control Topology Management Policies on a node | This pod with sharing CPU request runs in the Guaranteed QoS class because requests are equal to limits.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13064"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n example.com/deviceA: \"1\"\n example.com/deviceB: \"1\"\n requests:\n example.com/deviceA: \"1\"\n example.com/deviceB: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because there are no CPU and memory requests.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_completion_13065"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n example.com/deviceA: \"1\"\n 
example.com/deviceB: \"1\"\n requests:\n example.com/deviceA: \"1\"\n example.com/deviceB: \"1\"", "completion": "This yaml code Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because there are no CPU and memory requests.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Control Topology Management Policies on a node | This pod runs in the BestEffort QoS class because there are no CPU and memory requests.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/", "id": "k8s_00503_code_explain_13066"} 
{"prompt": "Explain this promql code:\n\nsum(rate(workqueue_adds_total{name=\"resource_claim\"}[5m]))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. 
Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13077"} {"prompt": "Explain this promql code:\n\nsum(workqueue_depth{endpoint=\"kube-controller-manager\",\nname=\"resource_claim\"})", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13078"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(workqueue_work_duration_seconds_bucket{name=\"resource_claim\"}[5m]))\nby (le))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. 
Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13079"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(increase(scheduler_pod_scheduling_sli_duration_seconds_bucket[5m])) by\n(le))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13080"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(increase(scheduler_scheduling_algorithm_duration_seconds_bucket[5m])) by\n(le))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. 
You can observe this behavior from the kube", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13081"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_operations_duration_seconds_bucket{operation_name=\"PrepareResources\"}[5m]))\nby (le))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13082"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_operations_duration_seconds_bucket{operation_name=\"UnprepareResources\"}[5m]))\nby (le))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. 
You can observe this behavior", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13083"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_grpc_operations_duration_seconds_bucket{method_name=~\".*NodePrepareResources\"}[5m]))\nby (le))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin |", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13084"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_grpc_operations_duration_seconds_bucket{method_name=~\".*NodeUnprepareResources\"}[5m]))\nby (le))", "completion": "This promql code Good practices for Dynamic Resource Allocation as a Cluster Admin |", "language": "promql", "type": "code-example_explanation", "context": "Good practices for Dynamic Resource Allocation as a Cluster Admin |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/dra/", "id": "k8s_00504_code_explain_13085"} {"prompt": "Complete this text code:\n\n# Restart all deployments in the test-namespace namespace\n kubectl rollout restart deployment -n test-namespace\n \n # Restart a deployment\n kubectl rollout restart deployment/nginx", "completion": " \n # Restart a daemon set\n kubectl rollout restart daemonset/abc\n \n # Restart deployments with the app=nginx label\n kubectl rollout restart deployment --selector=app=nginx", "language": "text", "type": "code-example", "context": "kubectl rollout restart | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/", "id": "k8s_00505_code_completion_13086"} {"prompt": "Explain this text code:\n\n# Restart all deployments in the test-namespace namespace\n kubectl rollout restart deployment -n test-namespace\n \n # Restart a deployment\n kubectl rollout restart deployment/nginx\n \n # Restart a daemon set\n kubectl rollout restart daemonset/abc\n \n # Restart deployments with the app=nginx label\n kubectl rollout restart deployment --selector=app=nginx", "completion": "This text code kubectl rollout restart | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout restart | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/", "id": "k8s_00505_code_explain_13087"} 
{"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION\nkube-worker-1 NotReady <none> 1h v1.23.3", "completion": "kubernetes-node-bols Ready <none> 1h v1.23.3\nkubernetes-node-st6x Ready <none> 1h v1.23.3\nkubernetes-node-unaj Ready <none> 1h v1.23.3", "language": "text", "type": "code-example", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_completion_13090"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION\nkube-worker-1 NotReady <none> 1h v1.23.3\nkubernetes-node-bols Ready <none> 1h v1.23.3\nkubernetes-node-st6x Ready <none> 1h v1.23.3\nkubernetes-node-unaj Ready <none> 1h v1.23.3", "completion": "This text code Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_explain_13091"} {"prompt": "Complete this text code:\n\nName: kube-worker-1\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kube-worker-1\n kubernetes.io/os=linux\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 17 Feb 2022 16:46:30 -0500\nTaints: node.kubernetes.io/unreachable:NoExecute\n node.kubernetes.io/unreachable:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kube-worker-1\n AcquireTime: <unset>\n RenewTime: Thu, 17 Feb 2022 17:13:09 -0500\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Thu, 17 Feb 2022 17:09:13 -0500 Thu, 17 Feb 2022 17:09:13 -0500 WeaveIsUp Weave pod has set this\n MemoryPressure Unknown Thu, 17 
Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n DiskPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n PIDPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n Ready Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\nAddresses:\n InternalIP: 192.168.0.113\n Hostname: kube-worker-1\nCapacity:\n cpu: 2\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: 0\n memory: 2025188Ki\n pods: 110", "completion": "Allocatable:\n cpu: 2\n ephemeral-storage: 14167048988\n hugepages-2Mi: 0\n memory: 1922788Ki\n pods: 110\nSystem Info:\n Machine ID: 9384e2927f544209b5d7b67474bbf92b\n System UUID: aa829ca9-73d7-064d-9019-df07404ad448\n Boot ID: 5a295a03-aaca-4340-af20-1327fa5dab5c\n Kernel Version: 5.13.0-28-generic\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.9\n Kubelet Version: v1.23.3\n Kube-Proxy Version: v1.23.3\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n default nginx-deployment-67d4bdd6f5-cx2nz 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n default nginx-deployment-67d4bdd6f5-w6kd7 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n kube-system kube-proxy-dnxbz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system weave-net-gjxxp 100m (5%) 0 (0%) 200Mi (10%) 0 (0%) 28m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1100m (55%) 1 (50%)\n memory 456Mi (24%) 256Mi (13%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n...", "language": "text", 
"type": "code-example", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_completion_13092"} {"prompt": "Explain this text code:\n\nName: kube-worker-1\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kube-worker-1\n kubernetes.io/os=linux\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 17 Feb 2022 16:46:30 -0500\nTaints: node.kubernetes.io/unreachable:NoExecute\n node.kubernetes.io/unreachable:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kube-worker-1\n AcquireTime: <unset>\n RenewTime: Thu, 17 Feb 2022 17:13:09 -0500\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Thu, 17 Feb 2022 17:09:13 -0500 Thu, 17 Feb 2022 17:09:13 -0500 WeaveIsUp Weave pod has set this\n MemoryPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n DiskPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n PIDPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n Ready Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\nAddresses:\n InternalIP: 192.168.0.113\n Hostname: kube-worker-1\nCapacity:\n cpu: 2\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: 0\n memory: 2025188Ki\n pods: 110\nAllocatable:\n 
cpu: 2\n ephemeral-storage: 14167048988\n hugepages-2Mi: 0\n memory: 1922788Ki\n pods: 110\nSystem Info:\n Machine ID: 9384e2927f544209b5d7b67474bbf92b\n System UUID: aa829ca9-73d7-064d-9019-df07404ad448\n Boot ID: 5a295a03-aaca-4340-af20-1327fa5dab5c\n Kernel Version: 5.13.0-28-generic\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.9\n Kubelet Version: v1.23.3\n Kube-Proxy Version: v1.23.3\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n default nginx-deployment-67d4bdd6f5-cx2nz 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n default nginx-deployment-67d4bdd6f5-w6kd7 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n kube-system kube-proxy-dnxbz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system weave-net-gjxxp 100m (5%) 0 (0%) 200Mi (10%) 0 (0%) 28m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1100m (55%) 1 (50%)\n memory 456Mi (24%) 256Mi (13%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n...", "completion": "This text code Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "language": "text", "type": "code-example_explanation", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_explain_13093"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Node\nmetadata:\n annotations:\n node.alpha.kubernetes.io/ttl: \"0\"\n volumes.kubernetes.io/controller-managed-attach-detach: \"true\"\n creationTimestamp: \"2022-02-17T21:46:30Z\"\n labels:\n beta.kubernetes.io/arch: amd64\n beta.kubernetes.io/os: linux\n kubernetes.io/arch: amd64\n kubernetes.io/hostname: kube-worker-1\n kubernetes.io/os: linux\n name: kube-worker-1\n resourceVersion: \"4026\"\n uid: 98efe7cb-2978-4a0b-842a-1a7bf12c05f8\nspec: {}\nstatus:\n addresses:\n - address: 192.168.0.113\n type: InternalIP\n - address: kube-worker-1\n type: Hostname\n allocatable:\n cpu: \"2\"\n ephemeral-storage: \"14167048988\"\n hugepages-2Mi: \"0\"\n memory: 1922788Ki\n pods: \"110\"\n capacity:\n cpu: \"2\"\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: \"0\"\n memory: 2025188Ki\n pods: \"110\"\n conditions:\n - lastHeartbeatTime: \"2022-02-17T22:20:32Z\"\n lastTransitionTime: \"2022-02-17T22:20:32Z\"\n message: Weave pod has set this\n reason: WeaveIsUp", "completion": " status: \"False\"\n type: NetworkUnavailable\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient memory available\n reason: KubeletHasSufficientMemory\n status: \"False\"\n type: MemoryPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has no disk pressure\n reason: KubeletHasNoDiskPressure\n status: \"False\"\n type: DiskPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient PID available\n reason: KubeletHasSufficientPID\n status: \"False\"\n type: PIDPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: 
\"2022-02-17T22:15:15Z\"\n message: kubelet is posting ready status\n reason: KubeletReady\n status: \"True\"\n type: Ready\n daemonEndpoints:\n kubeletEndpoint:\n Port: 10250\n nodeInfo:\n architecture: amd64\n bootID: 22333234-7a6b-44d4-9ce1-67e31dc7e369\n containerRuntimeVersion: containerd://1.5.9\n kernelVersion: 5.13.0-28-generic\n kubeProxyVersion: v1.23.3\n kubeletVersion: v1.23.3\n machineID: 9384e2927f544209b5d7b67474bbf92b\n operatingSystem: linux\n osImage: Ubuntu 21.10\n systemUUID: aa829ca9-73d7-064d-9019-df07404ad448", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_completion_13094"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Node\nmetadata:\n annotations:\n node.alpha.kubernetes.io/ttl: \"0\"\n volumes.kubernetes.io/controller-managed-attach-detach: \"true\"\n creationTimestamp: \"2022-02-17T21:46:30Z\"\n labels:\n beta.kubernetes.io/arch: amd64\n beta.kubernetes.io/os: linux\n kubernetes.io/arch: amd64\n kubernetes.io/hostname: kube-worker-1\n kubernetes.io/os: linux\n name: kube-worker-1\n resourceVersion: \"4026\"\n uid: 98efe7cb-2978-4a0b-842a-1a7bf12c05f8\nspec: {}\nstatus:\n addresses:\n - address: 192.168.0.113\n type: InternalIP\n - address: kube-worker-1\n type: Hostname\n allocatable:\n cpu: \"2\"\n ephemeral-storage: \"14167048988\"\n hugepages-2Mi: \"0\"\n memory: 1922788Ki\n pods: \"110\"\n capacity:\n cpu: \"2\"\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: \"0\"\n memory: 2025188Ki\n pods: \"110\"\n conditions:\n - lastHeartbeatTime: \"2022-02-17T22:20:32Z\"\n lastTransitionTime: \"2022-02-17T22:20:32Z\"\n message: Weave pod has set this\n reason: WeaveIsUp\n status: \"False\"\n type: 
NetworkUnavailable\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient memory available\n reason: KubeletHasSufficientMemory\n status: \"False\"\n type: MemoryPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has no disk pressure\n reason: KubeletHasNoDiskPressure\n status: \"False\"\n type: DiskPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient PID available\n reason: KubeletHasSufficientPID\n status: \"False\"\n type: PIDPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:15:15Z\"\n message: kubelet is posting ready status\n reason: KubeletReady\n status: \"True\"\n type: Ready\n daemonEndpoints:\n kubeletEndpoint:\n Port: 10250\n nodeInfo:\n architecture: amd64\n bootID: 22333234-7a6b-44d4-9ce1-67e31dc7e369\n containerRuntimeVersion: containerd://1.5.9\n kernelVersion: 5.13.0-28-generic\n kubeProxyVersion: v1.23.3\n kubeletVersion: v1.23.3\n machineID: 9384e2927f544209b5d7b67474bbf92b\n operatingSystem: linux\n osImage: Ubuntu 21.10\n systemUUID: aa829ca9-73d7-064d-9019-df07404ad448", "completion": "This yaml code Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_explain_13095"} {"prompt": "Complete this none code:\n\nNAME STATUS ROLES AGE VERSION\nkube-worker-1 NotReady <none> 1h v1.23.3", "completion": "kubernetes-node-bols Ready <none> 1h v1.23.3\nkubernetes-node-st6x Ready <none> 1h v1.23.3\nkubernetes-node-unaj Ready <none> 1h v1.23.3", "language": "none", "type": "code-example", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_completion_13096"} {"prompt": "Explain this none code:\n\nNAME STATUS ROLES AGE VERSION\nkube-worker-1 NotReady <none> 1h v1.23.3\nkubernetes-node-bols Ready <none> 1h v1.23.3\nkubernetes-node-st6x Ready <none> 1h v1.23.3\nkubernetes-node-unaj Ready <none> 1h v1.23.3", "completion": "This none code Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "language": "none", "type": "code-example_explanation", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_explain_13097"} {"prompt": "Complete this none code:\n\nName: kube-worker-1\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kube-worker-1\n kubernetes.io/os=linux\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 17 Feb 2022 16:46:30 -0500\nTaints: node.kubernetes.io/unreachable:NoExecute\n node.kubernetes.io/unreachable:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kube-worker-1\n AcquireTime: <unset>\n RenewTime: Thu, 17 Feb 2022 17:13:09 -0500\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Thu, 17 Feb 2022 17:09:13 -0500 Thu, 17 Feb 2022 17:09:13 -0500 WeaveIsUp Weave pod has set this\n MemoryPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n DiskPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n PIDPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n Ready Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\nAddresses:\n InternalIP: 192.168.0.113\n Hostname: kube-worker-1\nCapacity:\n cpu: 2\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: 0\n memory: 2025188Ki\n pods: 110", "completion": "Allocatable:\n cpu: 2\n ephemeral-storage: 14167048988\n hugepages-2Mi: 0\n memory: 1922788Ki\n pods: 110\nSystem Info:\n Machine ID: 9384e2927f544209b5d7b67474bbf92b\n System UUID: 
aa829ca9-73d7-064d-9019-df07404ad448\n Boot ID: 5a295a03-aaca-4340-af20-1327fa5dab5c\n Kernel Version: 5.13.0-28-generic\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.9\n Kubelet Version: v1.23.3\n Kube-Proxy Version: v1.23.3\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n default nginx-deployment-67d4bdd6f5-cx2nz 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n default nginx-deployment-67d4bdd6f5-w6kd7 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n kube-system kube-proxy-dnxbz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system weave-net-gjxxp 100m (5%) 0 (0%) 200Mi (10%) 0 (0%) 28m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1100m (55%) 1 (50%)\n memory 456Mi (24%) 256Mi (13%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n...", "language": "none", "type": "code-example", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_completion_13098"} {"prompt": "Explain this none code:\n\nName: kube-worker-1\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kube-worker-1\n kubernetes.io/os=linux\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 17 Feb 2022 16:46:30 -0500\nTaints: node.kubernetes.io/unreachable:NoExecute\n node.kubernetes.io/unreachable:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kube-worker-1\n AcquireTime: <unset>\n RenewTime: Thu, 17 Feb 2022 17:13:09 -0500\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Thu, 17 Feb 2022 17:09:13 -0500 Thu, 17 Feb 2022 17:09:13 -0500 WeaveIsUp Weave pod has set this\n MemoryPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n DiskPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n PIDPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\n Ready Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.\nAddresses:\n InternalIP: 192.168.0.113\n Hostname: kube-worker-1\nCapacity:\n cpu: 2\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: 0\n memory: 2025188Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 14167048988\n hugepages-2Mi: 0\n memory: 1922788Ki\n pods: 110\nSystem Info:\n Machine ID: 9384e2927f544209b5d7b67474bbf92b\n System UUID: 
aa829ca9-73d7-064d-9019-df07404ad448\n Boot ID: 5a295a03-aaca-4340-af20-1327fa5dab5c\n Kernel Version: 5.13.0-28-generic\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.9\n Kubelet Version: v1.23.3\n Kube-Proxy Version: v1.23.3\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n default nginx-deployment-67d4bdd6f5-cx2nz 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n default nginx-deployment-67d4bdd6f5-w6kd7 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m\n kube-system kube-proxy-dnxbz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system weave-net-gjxxp 100m (5%) 0 (0%) 200Mi (10%) 0 (0%) 28m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1100m (55%) 1 (50%)\n memory 456Mi (24%) 256Mi (13%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n...", "completion": "This none code Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "language": "none", "type": "code-example_explanation", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_explain_13099"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Node\nmetadata:\n annotations:\n node.alpha.kubernetes.io/ttl: \"0\"\n volumes.kubernetes.io/controller-managed-attach-detach: \"true\"\n creationTimestamp: \"2022-02-17T21:46:30Z\"\n labels:\n beta.kubernetes.io/arch: amd64\n beta.kubernetes.io/os: linux\n kubernetes.io/arch: amd64\n kubernetes.io/hostname: kube-worker-1\n kubernetes.io/os: linux\n name: kube-worker-1\n resourceVersion: \"4026\"\n uid: 98efe7cb-2978-4a0b-842a-1a7bf12c05f8\nspec: {}\nstatus:\n addresses:\n - address: 192.168.0.113\n type: InternalIP\n - address: kube-worker-1\n type: Hostname\n allocatable:\n cpu: \"2\"\n ephemeral-storage: \"14167048988\"\n hugepages-2Mi: \"0\"\n memory: 1922788Ki\n pods: \"110\"\n capacity:\n cpu: \"2\"\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: \"0\"\n memory: 2025188Ki\n pods: \"110\"\n conditions:\n - lastHeartbeatTime: \"2022-02-17T22:20:32Z\"\n lastTransitionTime: \"2022-02-17T22:20:32Z\"\n message: Weave pod has set this\n reason: WeaveIsUp", "completion": " status: \"False\"\n type: NetworkUnavailable\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient memory available\n reason: KubeletHasSufficientMemory\n status: \"False\"\n type: MemoryPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has no disk pressure\n reason: KubeletHasNoDiskPressure\n status: \"False\"\n type: DiskPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient PID available\n reason: KubeletHasSufficientPID\n status: \"False\"\n type: PIDPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: 
\"2022-02-17T22:15:15Z\"\n message: kubelet is posting ready status\n reason: KubeletReady\n status: \"True\"\n type: Ready\n daemonEndpoints:\n kubeletEndpoint:\n Port: 10250\n nodeInfo:\n architecture: amd64\n bootID: 22333234-7a6b-44d4-9ce1-67e31dc7e369\n containerRuntimeVersion: containerd://1.5.9\n kernelVersion: 5.13.0-28-generic\n kubeProxyVersion: v1.23.3\n kubeletVersion: v1.23.3\n machineID: 9384e2927f544209b5d7b67474bbf92b\n operatingSystem: linux\n osImage: Ubuntu 21.10\n systemUUID: aa829ca9-73d7-064d-9019-df07404ad448", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_completion_13100"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Node\nmetadata:\n annotations:\n node.alpha.kubernetes.io/ttl: \"0\"\n volumes.kubernetes.io/controller-managed-attach-detach: \"true\"\n creationTimestamp: \"2022-02-17T21:46:30Z\"\n labels:\n beta.kubernetes.io/arch: amd64\n beta.kubernetes.io/os: linux\n kubernetes.io/arch: amd64\n kubernetes.io/hostname: kube-worker-1\n kubernetes.io/os: linux\n name: kube-worker-1\n resourceVersion: \"4026\"\n uid: 98efe7cb-2978-4a0b-842a-1a7bf12c05f8\nspec: {}\nstatus:\n addresses:\n - address: 192.168.0.113\n type: InternalIP\n - address: kube-worker-1\n type: Hostname\n allocatable:\n cpu: \"2\"\n ephemeral-storage: \"14167048988\"\n hugepages-2Mi: \"0\"\n memory: 1922788Ki\n pods: \"110\"\n capacity:\n cpu: \"2\"\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: \"0\"\n memory: 2025188Ki\n pods: \"110\"\n conditions:\n - lastHeartbeatTime: \"2022-02-17T22:20:32Z\"\n lastTransitionTime: \"2022-02-17T22:20:32Z\"\n message: Weave pod has set this\n reason: WeaveIsUp\n status: \"False\"\n type: 
NetworkUnavailable\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient memory available\n reason: KubeletHasSufficientMemory\n status: \"False\"\n type: MemoryPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has no disk pressure\n reason: KubeletHasNoDiskPressure\n status: \"False\"\n type: DiskPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient PID available\n reason: KubeletHasSufficientPID\n status: \"False\"\n type: PIDPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:15:15Z\"\n message: kubelet is posting ready status\n reason: KubeletReady\n status: \"True\"\n type: Ready\n daemonEndpoints:\n kubeletEndpoint:\n Port: 10250\n nodeInfo:\n architecture: amd64\n bootID: 22333234-7a6b-44d4-9ce1-67e31dc7e369\n containerRuntimeVersion: containerd://1.5.9\n kernelVersion: 5.13.0-28-generic\n kubeProxyVersion: v1.23.3\n kubeletVersion: v1.23.3\n machineID: 9384e2927f544209b5d7b67474bbf92b\n operatingSystem: linux\n osImage: Ubuntu 21.10\n systemUUID: aa829ca9-73d7-064d-9019-df07404ad448", "completion": "This yaml code Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_explain_13101"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Node\nmetadata:\n annotations:\n node.alpha.kubernetes.io/ttl: \"0\"\n volumes.kubernetes.io/controller-managed-attach-detach: \"true\"\n creationTimestamp: \"2022-02-17T21:46:30Z\"\n labels:\n beta.kubernetes.io/arch: amd64\n beta.kubernetes.io/os: linux\n kubernetes.io/arch: amd64\n kubernetes.io/hostname: kube-worker-1\n kubernetes.io/os: linux\n name: kube-worker-1\n resourceVersion: \"4026\"\n uid: 98efe7cb-2978-4a0b-842a-1a7bf12c05f8\nspec: {}\nstatus:\n addresses:\n - address: 192.168.0.113\n type: InternalIP\n - address: kube-worker-1\n type: Hostname\n allocatable:\n cpu: \"2\"\n ephemeral-storage: \"14167048988\"\n hugepages-2Mi: \"0\"\n memory: 1922788Ki\n pods: \"110\"\n capacity:\n cpu: \"2\"\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: \"0\"\n memory: 2025188Ki\n pods: \"110\"\n conditions:\n - lastHeartbeatTime: \"2022-02-17T22:20:32Z\"\n lastTransitionTime: \"2022-02-17T22:20:32Z\"\n message: Weave pod has set this\n reason: WeaveIsUp", "completion": " status: \"False\"\n type: NetworkUnavailable\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient memory available\n reason: KubeletHasSufficientMemory\n status: \"False\"\n type: MemoryPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has no disk pressure\n reason: KubeletHasNoDiskPressure\n status: \"False\"\n type: DiskPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient PID available\n reason: KubeletHasSufficientPID\n status: \"False\"\n type: PIDPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: 
\"2022-02-17T22:15:15Z\"\n message: kubelet is posting ready status\n reason: KubeletReady\n status: \"True\"\n type: Ready\n daemonEndpoints:\n kubeletEndpoint:\n Port: 10250\n nodeInfo:\n architecture: amd64\n bootID: 22333234-7a6b-44d4-9ce1-67e31dc7e369\n containerRuntimeVersion: containerd://1.5.9\n kernelVersion: 5.13.0-28-generic\n kubeProxyVersion: v1.23.3\n kubeletVersion: v1.23.3\n machineID: 9384e2927f544209b5d7b67474bbf92b\n operatingSystem: linux\n osImage: Ubuntu 21.10\n systemUUID: aa829ca9-73d7-064d-9019-df07404ad448", "language": "yaml", "type": "kubernetes-manifest", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_completion_13102"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Node\nmetadata:\n annotations:\n node.alpha.kubernetes.io/ttl: \"0\"\n volumes.kubernetes.io/controller-managed-attach-detach: \"true\"\n creationTimestamp: \"2022-02-17T21:46:30Z\"\n labels:\n beta.kubernetes.io/arch: amd64\n beta.kubernetes.io/os: linux\n kubernetes.io/arch: amd64\n kubernetes.io/hostname: kube-worker-1\n kubernetes.io/os: linux\n name: kube-worker-1\n resourceVersion: \"4026\"\n uid: 98efe7cb-2978-4a0b-842a-1a7bf12c05f8\nspec: {}\nstatus:\n addresses:\n - address: 192.168.0.113\n type: InternalIP\n - address: kube-worker-1\n type: Hostname\n allocatable:\n cpu: \"2\"\n ephemeral-storage: \"14167048988\"\n hugepages-2Mi: \"0\"\n memory: 1922788Ki\n pods: \"110\"\n capacity:\n cpu: \"2\"\n ephemeral-storage: 15372232Ki\n hugepages-2Mi: \"0\"\n memory: 2025188Ki\n pods: \"110\"\n conditions:\n - lastHeartbeatTime: \"2022-02-17T22:20:32Z\"\n lastTransitionTime: \"2022-02-17T22:20:32Z\"\n message: Weave pod has set this\n reason: WeaveIsUp\n status: \"False\"\n type: 
NetworkUnavailable\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient memory available\n reason: KubeletHasSufficientMemory\n status: \"False\"\n type: MemoryPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has no disk pressure\n reason: KubeletHasNoDiskPressure\n status: \"False\"\n type: DiskPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:13:25Z\"\n message: kubelet has sufficient PID available\n reason: KubeletHasSufficientPID\n status: \"False\"\n type: PIDPressure\n - lastHeartbeatTime: \"2022-02-17T22:20:15Z\"\n lastTransitionTime: \"2022-02-17T22:15:15Z\"\n message: kubelet is posting ready status\n reason: KubeletReady\n status: \"True\"\n type: Ready\n daemonEndpoints:\n kubeletEndpoint:\n Port: 10250\n nodeInfo:\n architecture: amd64\n bootID: 22333234-7a6b-44d4-9ce1-67e31dc7e369\n containerRuntimeVersion: containerd://1.5.9\n kernelVersion: 5.13.0-28-generic\n kubeProxyVersion: v1.23.3\n kubeletVersion: v1.23.3\n machineID: 9384e2927f544209b5d7b67474bbf92b\n operatingSystem: linux\n osImage: Ubuntu 21.10\n systemUUID: aa829ca9-73d7-064d-9019-df07404ad448", "completion": "This yaml code Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. On systemd-based systems, you may need to use journalctl instead", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Troubleshooting Clusters | For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. 
On systemd-based systems, you may need to use journalctl instead", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/", "id": "k8s_00506_code_explain_13103"} {"prompt": "Explain this text code:\n\n# Print flags inherited by all commands\n kubectl options", "completion": "This text code kubectl options | help for options", "language": "text", "type": "code-example_explanation", "context": "kubectl options | help for options", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_options/", "id": "k8s_00509_code_explain_13104"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl\"", "completion": "This text code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13106"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl\"", "completion": "This text code Install and Set Up kubectl on macOS | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific 
version.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13107"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/v1.34.0/bin/darwin/amd64/kubectl\"", "completion": "This text code Install and Set Up kubectl on macOS | And for macOS on Apple Silicon, type:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | And for macOS on Apple Silicon, type:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13108"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/v1.34.0/bin/darwin/arm64/kubectl\"", "completion": "This text code Install and Set Up kubectl on macOS | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13109"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256\"", "completion": "This text code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": 
"k8s_00510_code_explain_13110"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256\"", "completion": "This text code Install and Set Up kubectl on macOS | Validate the kubectl binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13111"} {"prompt": "Explain this text code:\n\necho \"$(cat kubectl.sha256) kubectl\" | shasum -a 256 --check", "completion": "This text code Install and Set Up kubectl on macOS | If valid, the output is:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13112"} {"prompt": "Explain this bash code:\n\nkubectl: FAILED\nshasum: WARNING: 1 computed checksum did NOT match", "completion": "This bash code Install and Set Up kubectl on macOS | Make the kubectl binary executable.", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on macOS | Make the kubectl binary executable.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13113"} {"prompt": "Explain this text code:\n\nsudo mv ./kubectl /usr/local/bin/kubectl\nsudo chown root: /usr/local/bin/kubectl", "completion": "This text code Install and Set Up kubectl on macOS | Test to ensure the version you installed is up-to-date:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Test to ensure the version you installed is up-to-date:", "source": 
"https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13114"} {"prompt": "Explain this text code:\n\nThe connection to the server <server-name:port> was refused - did you specify the right host or port?", "completion": "This text code Install and Set Up kubectl on macOS | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13115"} {"prompt": "Explain this text code:\n\nbrew_etc=\"$(brew --prefix)/etc\" && [[ -r \"${brew_etc}/profile.d/bash_completion.sh\" ]] && . 
\"${brew_etc}/profile.d/bash_completion.sh\"", "completion": "This text code Install and Set Up kubectl on macOS | Reload your shell and verify that bash-completion v2 is correctly installed with type _init_completion.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Reload your shell and verify that bash-completion v2 is correctly installed with type _init_completion.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13116"} {"prompt": "Explain this text code:\n\necho 'source <(kubectl completion bash)' >>~/.bash_profile", "completion": "This text code Install and Set Up kubectl on macOS | Add the completion script to the /usr/local/etc/bash_completion.d directory:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Add the completion script to the /usr/local/etc/bash_completion.d directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13117"} {"prompt": "Explain this bash code:\n\nkubectl completion bash >/usr/local/etc/bash_completion.d/kubectl", "completion": "This bash code Install and Set Up kubectl on macOS | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on macOS | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13118"} {"prompt": "Explain this text code:\n\necho 'alias k=kubectl' >>~/.bash_profile\necho 'complete -o default -F __start_kubectl k' >>~/.bash_profile", "completion": "This text code Install and Set Up kubectl on macOS | If you installed kubectl with Homebrew (as explained here), then the kubectl completion script should 
already be in /usr/local/etc/bash_completion.d/kubectl. In that case, you don't need to do anythin", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | If you installed kubectl with Homebrew (as explained here), then the kubectl completion script should already be in /usr/local/etc/bash_completion.d/kubectl. In that case, you don't need to do anythin", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13119"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert\"", "completion": "This text code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13120"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert\"", "completion": "This text code Install and Set Up kubectl on macOS | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13121"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on macOS | curl -LO 
\"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13122"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on macOS | Validate the kubectl-convert binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13123"} {"prompt": "Explain this text code:\n\necho \"$(cat kubectl-convert.sha256) kubectl-convert\" | shasum -a 256 --check", "completion": "This text code Install and Set Up kubectl on macOS | If valid, the output is:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13124"} {"prompt": "Explain this bash code:\n\nkubectl-convert: FAILED\nshasum: WARNING: 1 computed checksum did NOT match", "completion": "This bash code Install and Set Up kubectl on macOS | Make kubectl-convert binary executable", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on macOS | Make kubectl-convert binary executable", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": 
"k8s_00510_code_explain_13125"} {"prompt": "Explain this text code:\n\nsudo mv ./kubectl-convert /usr/local/bin/kubectl-convert\nsudo chown root: /usr/local/bin/kubectl-convert", "completion": "This text code Install and Set Up kubectl on macOS | Verify plugin is successfully installed", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Verify plugin is successfully installed", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13126"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl\"", "completion": "This bash code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13127"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl\"", "completion": "This bash code Install and Set Up kubectl on macOS | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13128"} {"prompt": "Explain this bash 
code:\n\ncurl -LO \"https://dl.k8s.io/release/v1.34.0/bin/darwin/amd64/kubectl\"", "completion": "This bash code Install and Set Up kubectl on macOS | And for macOS on Apple Silicon, type:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | And for macOS on Apple Silicon, type:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13129"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/v1.34.0/bin/darwin/arm64/kubectl\"", "completion": "This bash code Install and Set Up kubectl on macOS | Validate the binary (optional)", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13130"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256\"", "completion": "This bash code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13131"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256\"", "completion": "This bash code Install and Set Up kubectl on macOS | Validate the kubectl binary against the checksum file:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up 
kubectl on macOS | Validate the kubectl binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13132"} {"prompt": "Explain this bash code:\n\necho \"$(cat kubectl.sha256) kubectl\" | shasum -a 256 --check", "completion": "This bash code Install and Set Up kubectl on macOS | If valid, the output is:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13133"} {"prompt": "Explain this console code:\n\nkubectl: FAILED\nshasum: WARNING: 1 computed checksum did NOT match", "completion": "This console code Install and Set Up kubectl on macOS | Make the kubectl binary executable.", "language": "console", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Make the kubectl binary executable.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13134"} {"prompt": "Explain this bash code:\n\nsudo mv ./kubectl /usr/local/bin/kubectl\nsudo chown root: /usr/local/bin/kubectl", "completion": "This bash code Install and Set Up kubectl on macOS | Test to ensure the version you installed is up-to-date:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | Test to ensure the version you installed is up-to-date:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13135"} {"prompt": "Explain this text code:\n\nThe connection to the server <server-name:port> was refused - did you specify the right host or port?", "completion": "This text code Install and Set Up kubectl on macOS | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first 
and then re-run the commands stated above.", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13136"} {"prompt": "Explain this bash code:\n\nbrew_etc=\"$(brew --prefix)/etc\" && [[ -r \"${brew_etc}/profile.d/bash_completion.sh\" ]] && . \"${brew_etc}/profile.d/bash_completion.sh\"", "completion": "This bash code Install and Set Up kubectl on macOS | Reload your shell and verify that bash-completion v2 is correctly installed with type _init_completion.", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | Reload your shell and verify that bash-completion v2 is correctly installed with type _init_completion.", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13137"} {"prompt": "Explain this bash code:\n\necho 'source <(kubectl completion bash)' >>~/.bash_profile", "completion": "This bash code Install and Set Up kubectl on macOS | Add the completion script to the /usr/local/etc/bash_completion.d directory:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | Add the completion script to the /usr/local/etc/bash_completion.d directory:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13138"} {"prompt": "Explain this bash code:\n\nkubectl completion bash >/usr/local/etc/bash_completion.d/kubectl", "completion": "This bash code Install and Set Up kubectl on macOS | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Install and Set Up kubectl on macOS | If you have an alias for kubectl, you can extend shell completion to work with that alias:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13139"} {"prompt": "Explain this bash code:\n\necho 'alias k=kubectl' >>~/.bash_profile\necho 'complete -o default -F __start_kubectl k' >>~/.bash_profile", "completion": "This bash code Install and Set Up kubectl on macOS | If you installed kubectl with Homebrew (as explained here), then the kubectl completion script should already be in /usr/local/etc/bash_completion.d/kubectl. In that case, you don't need to do anythin", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | If you installed kubectl with Homebrew (as explained here), then the kubectl completion script should already be in /usr/local/etc/bash_completion.d/kubectl. In that case, you don't need to do anythin", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13140"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert\"", "completion": "This bash code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13141"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert\"", 
"completion": "This bash code Install and Set Up kubectl on macOS | Validate the binary (optional)", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13142"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert.sha256\"", "completion": "This bash code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13143"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "completion": "This bash code Install and Set Up kubectl on macOS | Validate the kubectl-convert binary against the checksum file:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13144"} {"prompt": "Explain this bash code:\n\necho \"$(cat kubectl-convert.sha256) kubectl-convert\" | shasum -a 256 --check", "completion": "This bash code Install and Set Up kubectl on macOS | If valid, the output is:", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | If valid, 
the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13145"} {"prompt": "Explain this console code:\n\nkubectl-convert: FAILED\nshasum: WARNING: 1 computed checksum did NOT match", "completion": "This console code Install and Set Up kubectl on macOS | Make kubectl-convert binary executable", "language": "console", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Make kubectl-convert binary executable", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13146"} {"prompt": "Explain this bash code:\n\nsudo mv ./kubectl-convert /usr/local/bin/kubectl-convert\nsudo chown root: /usr/local/bin/kubectl-convert", "completion": "This bash code Install and Set Up kubectl on macOS | Verify plugin is successfully installed", "language": "bash", "type": "shell-command_explanation", "context": "Install and Set Up kubectl on macOS | Verify plugin is successfully installed", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13147"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert\"", "completion": "This text code Install and Set Up kubectl on macOS | Validate the binary (optional)", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up 
kubectl on macOS | Validate the binary (optional)", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13162"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13163"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256\"", "completion": "This text code Install and Set Up kubectl on macOS | Validate the kubectl-convert binary against the checksum file:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Validate the kubectl-convert binary against the checksum file:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13164"} {"prompt": "Explain this text code:\n\necho \"$(cat kubectl-convert.sha256) kubectl-convert\" | shasum -a 256 --check", "completion": "This text code Install and Set Up kubectl on macOS | If valid, the output is:", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | If valid, the output is:", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13165"} {"prompt": "Explain this bash code:\n\nkubectl-convert: 
FAILED\nshasum: WARNING: 1 computed checksum did NOT match", "completion": "This bash code Install and Set Up kubectl on macOS | Make kubectl-convert binary executable", "language": "bash", "type": "kubectl-command_explanation", "context": "Install and Set Up kubectl on macOS | Make kubectl-convert binary executable", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13166"} {"prompt": "Explain this text code:\n\nsudo mv ./kubectl-convert /usr/local/bin/kubectl-convert\nsudo chown root: /usr/local/bin/kubectl-convert", "completion": "This text code Install and Set Up kubectl on macOS | Verify plugin is successfully installed", "language": "text", "type": "code-example_explanation", "context": "Install and Set Up kubectl on macOS | Verify plugin is successfully installed", "source": "https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/", "id": "k8s_00510_code_explain_13167"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_completion_13168"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "completion": "This text code Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13169"} {"prompt": "Explain this bash code:\n\nkubectl run nginx 
--image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13170"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13171"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_completion_13172"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13173"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_completion_13174"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "completion": "This text code Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13175"} {"prompt": "Explain this shell code:\n\nkubectl run nginx --image=nginx 
--namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This shell code Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "shell", "type": "code-example_explanation", "context": "Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13176"} {"prompt": "Explain this shell code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This shell code Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "shell", "type": "code-example_explanation", "context": "Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13177"} {"prompt": "Complete this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "shell", "type": "code-example", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_completion_13178"} {"prompt": "Explain this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This shell code Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "shell", "type": "code-example_explanation", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13179"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13180"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code Namespaces | When you create a Service, it creates a corresponding DNS entry. 
This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13181"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_completion_13182"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "id": "k8s_00512_code_explain_13183"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health {\n lameduck 5s\n }", "completion": " ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13184"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "completion": "This yaml code Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13185"} {"prompt": "Complete this text code:\n\nconsul.local:53 {\n errors", "completion": " cache 30\n forward . 
10.150.0.1\n}", "language": "text", "type": "code-example", "context": "Customizing DNS Service | To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the forward to the nameserver instead of /etc/resolv.conf", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13186"} {"prompt": "Explain this text code:\n\nconsul.local:53 {\n errors\n cache 30\n forward . 10.150.0.1\n}", "completion": "This text code Customizing DNS Service | To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the forward to the nameserver instead of /etc/resolv.conf", "language": "text", "type": "code-example_explanation", "context": "Customizing DNS Service | To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the forward to the nameserver instead of /etc/resolv.conf", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13187"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa", "completion": " }\n prometheus :9153\n forward . 172.16.0.1\n cache 30\n loop\n reload\n loadbalance\n }\n consul.local:53 {\n errors\n cache 30\n forward . 
10.150.0.1\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing DNS Service |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13188"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 172.16.0.1\n cache 30\n loop\n reload\n loadbalance\n }\n consul.local:53 {\n errors\n cache 30\n forward . 10.150.0.1\n }", "completion": "This yaml code Customizing DNS Service |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing DNS Service |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13189"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health {\n lameduck 5s\n }", "completion": " ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13190"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "completion": "This yaml code Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13191"} {"prompt": "Complete this text code:\n\nconsul.local:53 {\n errors", "completion": " cache 30\n forward . 10.150.0.1\n}", "language": "text", "type": "code-example", "context": "Customizing DNS Service | To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the forward to the nameserver instead of /etc/resolv.conf", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13192"} {"prompt": "Explain this text code:\n\nconsul.local:53 {\n errors\n cache 30\n forward . 
10.150.0.1\n}", "completion": "This text code Customizing DNS Service | To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the forward to the nameserver instead of /etc/resolv.conf", "language": "text", "type": "code-example_explanation", "context": "Customizing DNS Service | To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the forward to the nameserver instead of /etc/resolv.conf", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13193"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa", "completion": " }\n prometheus :9153\n forward . 172.16.0.1\n cache 30\n loop\n reload\n loadbalance\n }\n consul.local:53 {\n errors\n cache 30\n forward . 10.150.0.1\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing DNS Service |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13194"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 172.16.0.1\n cache 30\n loop\n reload\n loadbalance\n }\n consul.local:53 {\n errors\n cache 30\n forward . 
10.150.0.1\n }", "completion": "This yaml code Customizing DNS Service |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing DNS Service |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13195"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health {\n lameduck 5s\n }", "completion": " ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13196"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n }", "completion": "This yaml code Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing DNS Service | The Corefile configuration includes the following plugins of CoreDNS:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13197"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa", "completion": " }\n prometheus :9153\n forward . 172.16.0.1\n cache 30\n loop\n reload\n loadbalance\n }\n consul.local:53 {\n errors\n cache 30\n forward . 10.150.0.1\n }", "language": "yaml", "type": "kubernetes-manifest", "context": "Customizing DNS Service |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_completion_13198"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: coredns\n namespace: kube-system\ndata:\n Corefile: |\n .:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 172.16.0.1\n cache 30\n loop\n reload\n loadbalance\n }\n consul.local:53 {\n errors\n cache 30\n forward . 
10.150.0.1\n }", "completion": "This yaml code Customizing DNS Service |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Customizing DNS Service |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/", "id": "k8s_00515_code_explain_13199"} {"prompt": "Explain this text code:\n\nCreating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.\nIf you don't see a command prompt, try pressing enter.\nroot@mynode:/#", "completion": "This text code Debugging Kubernetes Nodes With Kubectl | The debug command helps to gather information and troubleshoot issues. Commands that you might use include ip, ifconfig, nc, ping, and ps and so on. You can also install other tools, such as mtr, tcpd", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl | The debug command helps to gather information and troubleshoot issues. Commands that you might use include ip, ifconfig, nc, ping, and ps and so on. 
You can also install other tools, such as mtr, tcpd", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13200"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnode-debugger-mynode-pdx84 0/1 Completed 0 8m1s", "completion": "This text code Debugging Kubernetes Nodes With Kubectl |", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl |", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13201"} {"prompt": "Explain this text code:\n\n# Change the pod name accordingly\nkubectl delete pod node-debugger-mynode-pdx84 --now", "completion": "This text code Debugging Kubernetes Nodes With Kubectl |", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl |", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13202"} {"prompt": "Explain this console code:\n\nCreating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.\nIf you don't see a command prompt, try pressing enter.\nroot@mynode:/#", "completion": "This console code Debugging Kubernetes Nodes With Kubectl | The debug command helps to gather information and troubleshoot issues. Commands that you might use include ip, ifconfig, nc, ping, and ps and so on. You can also install other tools, such as mtr, tcpd", "language": "console", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl | The debug command helps to gather information and troubleshoot issues. Commands that you might use include ip, ifconfig, nc, ping, and ps and so on. 
You can also install other tools, such as mtr, tcpd", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13203"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nnode-debugger-mynode-pdx84 0/1 Completed 0 8m1s", "completion": "This none code Debugging Kubernetes Nodes With Kubectl |", "language": "none", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl |", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13204"} {"prompt": "Explain this shell code:\n\n# Change the pod name accordingly\nkubectl delete pod node-debugger-mynode-pdx84 --now", "completion": "This shell code Debugging Kubernetes Nodes With Kubectl |", "language": "shell", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl |", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13205"} {"prompt": "Explain this text code:\n\nCreating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.\nIf you don't see a command prompt, try pressing enter.\nroot@mynode:/#", "completion": "This text code Debugging Kubernetes Nodes With Kubectl | The debug command helps to gather information and troubleshoot issues. Commands that you might use include ip, ifconfig, nc, ping, and ps and so on. You can also install other tools, such as mtr, tcpd", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl | The debug command helps to gather information and troubleshoot issues. Commands that you might use include ip, ifconfig, nc, ping, and ps and so on. 
You can also install other tools, such as mtr, tcpdump", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13206"} {"prompt": "Explain this text code:\n\n# Change the pod name accordingly\nkubectl delete pod node-debugger-mynode-pdx84 --now", "completion": "This text code Debugging Kubernetes Nodes With Kubectl |", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes Nodes With Kubectl |", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/", "id": "k8s_00516_code_explain_13207"} {"prompt": "Explain this text code:\n\n/docs/tasks/extend-kubernetes/http-proxy-access-api/", "completion": "This text code Writing a new topic | In your topic, put a title field in the front matter. The front matter is the YAML block that is between the triple-dashed lines at the top of the page. Here's an example:", "language": "text", "type": "code-example_explanation", "context": "Writing a new topic | In your topic, put a title field in the front matter. The front matter is the YAML block that is between the triple-dashed lines at the top of the page. 
Here's an example:", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13208"} {"prompt": "Explain this text code:\n\n---\ntitle: Using an HTTP Proxy to Access the Kubernetes API\n---", "completion": "This text code Writing a new topic | Depending on your page type, put your new file in a subdirectory of one of these:", "language": "text", "type": "code-example_explanation", "context": "Writing a new topic | Depending on your page type, put your new file in a subdirectory of one of these:", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13209"} {"prompt": "Explain this json code:\n\n{{% code_sample file=\"<RELPATH>/my-example-yaml>\" %}}", "completion": "This json code Writing a new topic | where <RELPATH> is the path to the file to include, relative to the examples directory. The following Hugo shortcode references a YAML file located at /content/en/examples/pods/storage/gce-volume.yaml", "language": "json", "type": "api-response_explanation", "context": "Writing a new topic | where <RELPATH> is the path to the file to include, relative to the examples directory. 
The following Hugo shortcode references a YAML file located at /content/en/examples/pods/storage/gce-volume.yaml", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13210"} {"prompt": "Explain this json code:\n\n{{% code_sample file=\"pods/storage/gce-volume.yaml\" %}}", "completion": "This json code Writing a new topic | If you need to demonstrate how to create an API object based on a configuration file, place the configuration file in one of the subdirectories under <LANG>/examples.", "language": "json", "type": "api-response_explanation", "context": "Writing a new topic | If you need to demonstrate how to create an API object based on a configuration file, place the configuration file in one of the subdirectories under <LANG>/examples.", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13211"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/storage/gce-volume.yaml", "completion": "This bash code Writing a new topic | For an example of a topic that uses this technique, see Running a Single-Instance Stateful Application.", "language": "bash", "type": "kubectl-command_explanation", "context": "Writing a new topic | For an example of a topic that uses this technique, see Running a Single-Instance Stateful Application.", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13212"} {"prompt": "Explain this text code:\n\n/docs/tasks/extend-kubernetes/http-proxy-access-api/", "completion": "This text code Writing a new topic | In your topic, put a title field in the front matter. The front matter is the YAML block that is between the triple-dashed lines at the top of the page. Here's an example:", "language": "text", "type": "code-example_explanation", "context": "Writing a new topic | In your topic, put a title field in the front matter. 
The front matter is the YAML block that is between the triple-dashed lines at the top of the page. Here's an example:", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13213"} {"prompt": "Explain this text code:\n\n---\ntitle: Using an HTTP Proxy to Access the Kubernetes API\n---", "completion": "This text code Writing a new topic | Depending on your page type, put your new file in a subdirectory of one of these:", "language": "text", "type": "code-example_explanation", "context": "Writing a new topic | Depending on your page type, put your new file in a subdirectory of one of these:", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13214"} {"prompt": "Explain this bash code:\n\nkubectl get deploy mydeployment -o json | jq '.status'", "completion": "This bash code Writing a new topic | Another way to include code in your topic is to create a new, complete sample file (or group of sample files) and then reference the sample from your topic. Use this method to include sample YAML file", "language": "bash", "type": "kubectl-command_explanation", "context": "Writing a new topic | Another way to include code in your topic is to create a new, complete sample file (or group of sample files) and then reference the sample from your topic. Use this method to include sample YAML file", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13215"} {"prompt": "Explain this none code:\n\n{{% code_sample file=\"<RELPATH>/my-example-yaml>\" %}}", "completion": "This none code Writing a new topic | where <RELPATH> is the path to the file to include, relative to the examples directory. 
The following Hugo shortcode references a YAML file located at /content/en/examples/pods/storage/gce-volume.yaml", "language": "none", "type": "code-example_explanation", "context": "Writing a new topic | where <RELPATH> is the path to the file to include, relative to the examples directory. The following Hugo shortcode references a YAML file located at /content/en/examples/pods/storage/gce-volume.yaml", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13216"} {"prompt": "Explain this none code:\n\n{{% code_sample file=\"pods/storage/gce-volume.yaml\" %}}", "completion": "This none code Writing a new topic | If you need to demonstrate how to create an API object based on a configuration file, place the configuration file in one of the subdirectories under <LANG>/examples.", "language": "none", "type": "code-example_explanation", "context": "Writing a new topic | If you need to demonstrate how to create an API object based on a configuration file, place the configuration file in one of the subdirectories under <LANG>/examples.", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13217"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/pods/storage/gce-volume.yaml", "completion": "This bash code Writing a new topic | For an example of a topic that uses this technique, see Running a Single-Instance Stateful Application.", "language": "bash", "type": "kubectl-command_explanation", "context": "Writing a new topic | For an example of a topic that uses this technique, see Running a Single-Instance Stateful Application.", "source": "https://kubernetes.io/docs/contribute/style/write-new-topic/", "id": "k8s_00517_code_explain_13218"} {"prompt": "Explain this text code:\n\n# Deny CSR 'csr-sqgzp'\n kubectl certificate deny csr-sqgzp", "completion": "This text code kubectl certificate deny | If true, ignore any errors in templates when a 
field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl certificate deny | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_deny/", "id": "k8s_00519_code_explain_13219"} {"prompt": "Explain this text code:\n\n# Deny CSR 'csr-sqgzp'\n kubectl certificate deny csr-sqgzp", "completion": "This text code kubectl certificate deny | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl certificate deny | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_deny/", "id": "k8s_00519_code_explain_13220"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:", "completion": " resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. 
If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13221"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "completion": "This yaml code Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. If an expression evaluates to false, the validation check is enforced according to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. 
If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13222"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:", "completion": " policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13223"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:\n policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13224"} {"prompt": "Explain this text code:\n\nValidatingAdmissionPolicy 'demo-policy.example.com' with binding 'demo-binding-test.example.com' denied request: 
failed expression: object.spec.replicas <= 5", "completion": "This text code Validating Admission Policy | The above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | The above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13225"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit", "completion": " matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. 
Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13226"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. Note in", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. 
Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13227"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]", "completion": " paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13228"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13229"} {"prompt": "Complete this yaml code:\n\napiVersion: 
rules.example.com/v1\nkind: ReplicaLimit\nmetadata:", "completion": " name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13230"} {"prompt": "Explain this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "completion": "This yaml code Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13231"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-prod.example.com\"", "completion": " namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: NotIn\n values:\n - test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13232"} {"prompt": "Explain this yaml code:\n\napiVersion: 
admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-prod.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: NotIn\n values:\n - test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13233"} {"prompt": "Complete this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit", "completion": "metadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13234"} {"prompt": "Explain this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "completion": "This yaml code Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. 
For a request to be admitted it must pass all evaluations.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13235"} {"prompt": "Explain this text code:\n\n- expression: \"params != null\"\n message: \"params missing but required to bind to this policy\"", "completion": "This text code Validating Admission Policy | It can be convenient to be able to have optional parameters as part of a parameter resource, and only validate them if present. CEL provides has(), which checks if the key passed to it exists. CEL als", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | It can be convenient to be able to have optional parameters as part of a parameter resource, and only validate them if present. CEL provides has(), which checks if the key passed to it exists. CEL als", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13236"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1alpha1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: example-binding\nspec:\n policyName: example-policy", "completion": " paramRef:\n selector:\n matchLabels:\n environment: test\n parameterNotFoundAction: Allow\n validationActions:\n - Deny", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. 
Allowed values are Ignore or Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13237"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1alpha1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: example-binding\nspec:\n policyName: example-policy\n paramRef:\n selector:\n matchLabels:\n environment: test\n parameterNotFoundAction: Allow\n validationActions:\n - Deny", "completion": "This yaml code Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. Allowed values are Ignore or Fail.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. Allowed values are Ignore or Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13238"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:", "completion": "...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. 
To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13239"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:\n...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "completion": "This yaml code Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13240"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]", "completion": " operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 
'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13241"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", "completion": "This yaml code Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13242"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: 
ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]", "completion": " apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13243"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "completion": "This yaml code Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13244"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"demo-policy.example.com/high-replica-count\": \"Deployment spec.replicas set to 128\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Validating Admission Policy | In this example the annotation will only be included if the spec.replicas of the Deployment is more than 50, otherwise the CEL expression evaluates to null and the annotation will not be included.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13245"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"demo-policy.example.com/high-replica-count\": \"Deployment spec.replicas set to 128\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Validating Admission Policy | In this example the annotation will only be included if the spec.replicas of the Deployment is more than 50, otherwise the CEL expression evaluates to null and the annotation will not be included.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | In this example the annotation will only be included if the spec.replicas of the Deployment is more than 50, otherwise the CEL expression evaluates to null and the annotation will not be included.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13246"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: 
\"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:", "completion": " resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13247"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we 
will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13248"} {"prompt": "Explain this text code:\n\n$ kubectl create deploy --image=nginx nginx --replicas=5\nerror: failed to create deployment: deployments.apps \"nginx\" is forbidden: ValidatingAdmissionPolicy 'deploy-replica-policy.example.com' with binding 'demo-binding-test.example.com' denied request: object.spec.replicas must be no greater than 3", "completion": "This text code Validating Admission Policy | This is more informative than a static message of \"too many replicas\".", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | This is more informative than a static message of \"too many replicas\".", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13249"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The status will yield the following information:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13250"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n 
apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The status will yield the following information:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The status will yield the following information:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13251"} {"prompt": "Complete this text code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression", "completion": " warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "language": "text", "type": "code-example", "context": "Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. For example, the following policy definition", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13252"} {"prompt": "Explain this text code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "completion": "This text code Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. For example, the following policy definition", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. 
For example, the following policy definition", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13253"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13254"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13255"} 
{"prompt": "Complete this text code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-", "completion": " apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^\n apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "language": "text", "type": "code-example", "context": "Validating Admission Policy | Type Checking has the following limitation:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13256"} {"prompt": "Explain this text code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^\n apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "completion": "This text code Validating Admission Policy | Type Checking has the following limitation:", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Type Checking has the following limitation:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13257"} {"prompt": "Complete this text code:\n\nspec:\n variables:\n - name: foo", "completion": " expression: \"'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'\"\n validations:\n - expression: variables.foo == 'bar'", "language": "text", "type": "code-example", "context": "Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. 
Both the result and potential", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13258"} {"prompt": "Explain this text code:\n\nspec:\n variables:\n - name: foo\n expression: \"'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'\"\n validations:\n - expression: variables.foo == 'bar'", "completion": "This text code Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. Both the result and potential", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. Both the result and potential", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13259"} {"prompt": "Complete this yaml code:\n\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any containers that do not belong to the \"example.com\" organization (e.g. 
common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]", "completion": " operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13260"} {"prompt": "Explain this yaml code:\n\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any containers that do not belong to the \"example.com\" 
organization (e.g. common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "completion": "This yaml code Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13261"} {"prompt": "Explain this bash code:\n\nkubectl 
create deploy --image=dev.example.com/nginx invalid", "completion": "This bash code Validating Admission Policy | The error message is similar to this.", "language": "bash", "type": "kubectl-command_explanation", "context": "Validating Admission Policy | The error message is similar to this.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13262"} {"prompt": "Explain this text code:\n\nerror: failed to create deployment: deployments.apps \"invalid\" is forbidden: ValidatingAdmissionPolicy 'image-matches-namespace-environment.policy.example.com' with binding 'demo-binding-test.example.com' denied request: only prod images are allowed in namespace default", "completion": "This text code Validating Admission Policy | There are certain API kinds that are exempt from admission-time validation checks. For example, you can't create a ValidatingAdmissionPolicy that prevents changes to ValidatingAdmissionPolicyBindings.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | There are certain API kinds that are exempt from admission-time validation checks. For example, you can't create a ValidatingAdmissionPolicy that prevents changes to ValidatingAdmissionPolicyBindings.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13263"} {"prompt": "Explain this text code:\n\nvalidatingadmissionpolicy/basic-example-policy.yaml", "completion": "This text code Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. 
If an expression evaluates to false, the validation check is enforced according to the", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13264"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:", "completion": " resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13265"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "completion": "This yaml code Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. 
If an expression evaluates to false, the validation check is enforced according to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13266"} {"prompt": "Explain this text code:\n\nvalidatingadmissionpolicy/basic-example-binding.yaml", "completion": "This text code Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13267"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:", "completion": " policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13268"} {"prompt": "Explain this yaml code:\n\napiVersion: 
admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:\n policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13269"} {"prompt": "Explain this none code:\n\nValidatingAdmissionPolicy 'demo-policy.example.com' with binding 'demo-binding-test.example.com' denied request: failed expression: object.spec.replicas <= 5", "completion": "This none code Validating Admission Policy | The above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured.", "language": "none", "type": "code-example_explanation", "context": "Validating Admission Policy | The above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13270"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit", "completion": " matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: 
[\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13271"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. Note in", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. 
Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13272"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]", "completion": " paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13273"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13274"} {"prompt": "Complete this yaml code:\n\napiVersion: 
rules.example.com/v1\nkind: ReplicaLimit\nmetadata:", "completion": " name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13275"} {"prompt": "Explain this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "completion": "This yaml code Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13276"} {"prompt": "Explain this text code:\n\nvalidatingadmissionpolicy/binding-with-param-prod.yaml", "completion": "This text code Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13277"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: 
\"replica-limit-prod.example.com\"", "completion": " namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: NotIn\n values:\n - test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13278"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-prod.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: NotIn\n values:\n - test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13279"} {"prompt": "Explain this text code:\n\nvalidatingadmissionpolicy/replicalimit-param-prod.yaml", "completion": "This text code Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. 
For a request to be admitted it must pass all evaluations.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13280"} {"prompt": "Complete this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit", "completion": "metadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13281"} {"prompt": "Explain this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "completion": "This yaml code Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. 
For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13282"} {"prompt": "Explain this text code:\n\n- expression: \"params != null\"\n message: \"params missing but required to bind to this policy\"", "completion": "This text code Validating Admission Policy | It can be convenient to be able to have optional parameters as part of a parameter resource, and only validate them if present. CEL provides has(), which checks if the key passed to it exists. CEL als", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | It can be convenient to be able to have optional parameters as part of a parameter resource, and only validate them if present. CEL provides has(), which checks if the key passed to it exists. CEL als", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13283"} {"prompt": "Explain this text code:\n\n!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)", "completion": "This text code Validating Admission Policy | Here, we first check that the optional parameter is present with !has(params.optionalNumber).", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Here, we first check that the optional parameter is present with !has(params.optionalNumber).", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13284"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1alpha1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: example-binding\nspec:\n policyName: example-policy", "completion": " paramRef:\n selector:\n matchLabels:\n environment: test\n parameterNotFoundAction: Allow\n validationActions:\n - Deny", 
"language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. Allowed values are Ignore or Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13285"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1alpha1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: example-binding\nspec:\n policyName: example-policy\n paramRef:\n selector:\n matchLabels:\n environment: test\n parameterNotFoundAction: Allow\n validationActions:\n - Deny", "completion": "This yaml code Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. Allowed values are Ignore or Fail.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. Allowed values are Ignore or Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13286"} {"prompt": "Explain this text code:\n\nvalidatingadmissionpolicy/failure-policy-ignore.yaml", "completion": "This text code Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. 
To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13287"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:", "completion": "...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13288"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:\n...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "completion": "This yaml code Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. 
To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13289"} {"prompt": "Explain this text code:\n\nobject.minReplicas <= object.replicas && object.replicas <= object.maxReplicas", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13290"} {"prompt": "Explain this text code:\n\n(size(object.list1) == 0) != (size(object.list2) == 0)", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13291"} {"prompt": "Explain this text code:\n\n!('MY_KEY' in object.map1) || object.map1['MY_KEY'].matches('^[a-zA-Z]*$')", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13292"} {"prompt": "Explain this text code:\n\nobject.envars.filter(e, e.name == 'MY_ENV').all(e, 
e.value.matches('^[a-zA-Z]*$'))", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13293"} {"prompt": "Explain this text code:\n\nhas(object.expired) && object.created + object.ttl < object.expired", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13294"} {"prompt": "Explain this text code:\n\nobject.widgets.exists(w, w.key == 'x' && w.foo < 10)", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13295"} {"prompt": "Explain this text code:\n\ntype(object) == string ? 
object == '100%' : object == 1000", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13296"} {"prompt": "Explain this text code:\n\nsize(object.names) == size(object.details) && object.names.all(n, n in object.details)", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13297"} {"prompt": "Explain this text code:\n\nsize(object.clusters.filter(c, c.name == object.primary)) == 1", "completion": "This text code Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Read Supported evaluation on CEL for more information about CEL rules.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13298"} {"prompt": "Explain this text code:\n\naccess/validating-admission-policy-match-conditions.yaml", "completion": "This text code Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation 
expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13299"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]", "completion": " operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13300"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 
'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", "completion": "This yaml code Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13301"} {"prompt": "Explain this text code:\n\naccess/validating-admission-policy-audit-annotation.yaml", "completion": "This text code Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13302"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]", "completion": " apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n 
valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13303"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "completion": "This yaml code Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13304"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"demo-policy.example.com/high-replica-count\": \"Deployment spec.replicas set to 128\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Validating Admission Policy | In this 
example the annotation will only be included if the spec.replicas of the Deployment is more than 50, otherwise the CEL expression evaluates to null and the annotation will not be included.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13305"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"demo-policy.example.com/high-replica-count\": \"Deployment spec.replicas set to 128\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Validating Admission Policy | In this example the annotation will only be included if the spec.replicas of the Deployment is more than 50, otherwise the CEL expression evaluates to null and the annotation will not be included.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | In this example the annotation will only be included if the spec.replicas of the Deployment is more than 50, otherwise the CEL expression evaluates to null and the annotation will not be included.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13306"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:", "completion": " resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", 
"context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13307"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13308"} {"prompt": "Explain this text code:\n\n$ kubectl create deploy --image=nginx nginx --replicas=5\nerror: failed to create deployment: deployments.apps \"nginx\" is forbidden: ValidatingAdmissionPolicy 'deploy-replica-policy.example.com' with binding 'demo-binding-test.example.com' denied request: object.spec.replicas must be no greater than 3", "completion": "This 
text code Validating Admission Policy | This is more informative than a static message of \"too many replicas\".", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | This is more informative than a static message of \"too many replicas\".", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13309"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The status will yield the following information:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13310"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The status will yield the following information:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The status will yield the following information:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13311"} {"prompt": "Complete this yaml code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression", "completion": " warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. For example, the following policy definition", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13312"} {"prompt": "Explain this yaml code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "completion": "This yaml code Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. For example, the following policy definition", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. 
For example, the following policy definition", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13313"} {"prompt": "Explain this text code:\n\nvalidatingadmissionpolicy/typechecking-multiple-match.yaml", "completion": "This text code Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13314"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13315"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should 
be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13316"} {"prompt": "Complete this yaml code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-", "completion": " apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^\n apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Type Checking has the following limitation:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13317"} {"prompt": "Explain this yaml code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^\n apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "completion": "This yaml code Validating Admission Policy | Type Checking has the following limitation:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Type Checking has the following limitation:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13318"} {"prompt": 
"Complete this yaml code:\n\nspec:\n variables:\n - name: foo", "completion": " expression: \"'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'\"\n validations:\n - expression: variables.foo == 'bar'", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. Both the result and potential", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13319"} {"prompt": "Explain this yaml code:\n\nspec:\n variables:\n - name: foo\n expression: \"'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'\"\n validations:\n - expression: variables.foo == 'bar'", "completion": "This yaml code Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. Both the result and potential", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. 
Both the result and potential", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13320"} {"prompt": "Explain this text code:\n\naccess/image-matches-namespace-environment.policy.yaml", "completion": "This text code Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13321"} {"prompt": "Complete this yaml code:\n\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any containers that do not belong to the \"example.com\" organization (e.g. common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]", "completion": " operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? 
namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13322"} {"prompt": "Explain this yaml code:\n\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any containers that do not belong to the \"example.com\" organization (e.g. common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? 
namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "completion": "This yaml code Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13323"} {"prompt": "Explain this shell code:\n\nkubectl create deploy --image=dev.example.com/nginx invalid", "completion": "This shell code Validating Admission Policy | The error message is similar to this.", "language": "shell", "type": "code-example_explanation", "context": "Validating Admission Policy | The error message is similar to this.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13324"} {"prompt": "Explain this console code:\n\nerror: failed to create deployment: deployments.apps \"invalid\" is forbidden: ValidatingAdmissionPolicy 'image-matches-namespace-environment.policy.example.com' with binding 'demo-binding-test.example.com' denied request: only prod images are 
allowed in namespace default", "completion": "This console code Validating Admission Policy | There are certain API kinds that are exempt from admission-time validation checks. For example, you can't create a ValidatingAdmissionPolicy that prevents changes to ValidatingAdmissionPolicyBindings.", "language": "console", "type": "code-example_explanation", "context": "Validating Admission Policy | There are certain API kinds that are exempt from admission-time validation checks. For example, you can't create a ValidatingAdmissionPolicy that prevents changes to ValidatingAdmissionPolicyBindings.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13325"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/basic-example-policy.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail", "completion": " matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. 
If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13326"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/basic-example-policy.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "completion": "This yaml code Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. If an expression evaluates to false, the validation check is enforced according to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. 
If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13327"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:", "completion": " resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13328"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= 5\"", "completion": "This yaml code Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. If an expression evaluates to false, the validation check is enforced according to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations contains CEL expressions which use the Common Expression Language (CEL) to validate the request. 
If an expression evaluates to false, the validation check is enforced according to the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13329"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/basic-example-binding.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:", "completion": " policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13330"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/basic-example-binding.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:\n policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13331"} {"prompt": "Complete this yaml 
code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:", "completion": " policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13332"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"demo-binding-test.example.com\"\nspec:\n policyName: \"demo-policy.example.com\"\n validationActions: [Deny]\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When trying to create a deployment with replicas set not satisfying the validation expression, an error will return containing message:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13333"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/policy-with-param.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1", "completion": " kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n 
apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13334"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/policy-with-param.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. Note in", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. 
Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13335"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit", "completion": " matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13336"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replicalimit-policy.example.com\"\nspec:\n failurePolicy: Fail\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. 
Note in", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The spec.paramKind field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy. For this example, it is configured by ReplicaLimit custom resources. Note in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13337"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/binding-with-param.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]", "completion": " paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13338"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/binding-with-param.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13339"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]", "completion": " paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13340"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-test.example.com\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchLabels:\n environment: test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test environment.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a parameter to the policy for all resources which are in the test 
environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13341"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/replicalimit-param.yaml\napiVersion: rules.example.com/v1\nkind: ReplicaLimit", "completion": "metadata:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13342"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/replicalimit-param.yaml\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "completion": "This yaml code Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13343"} {"prompt": "Complete this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:", "completion": " name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13344"} {"prompt": "Explain this yaml code:\n\napiVersion: 
rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-test.example.com\"\n namespace: \"default\"\nmaxReplicas: 3", "completion": "This yaml code Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | This policy parameter resource limits deployments to a max of 3 replicas.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13345"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/binding-with-param-prod.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:", "completion": " name: \"replica-limit-prod.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: NotIn\n values:\n - test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13346"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/binding-with-param-prod.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-prod.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: 
environment\n operator: NotIn\n values:\n - test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13347"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-prod.example.com\"", "completion": " namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: NotIn\n values:\n - test", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13348"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: \"replicalimit-binding-nontest\"\nspec:\n policyName: \"replicalimit-policy.example.com\"\n validationActions: [Deny]\n paramRef:\n name: \"replica-limit-prod.example.com\"\n namespace: \"default\"\n parameterNotFoundAction: Deny\n matchResources:\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: NotIn\n values:\n - test", "completion": "This yaml code Validating Admission Policy | Notice this binding applies a different 
parameter to resources which are not in the test environment.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Notice this binding applies a different parameter to resources which are not in the test environment.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13349"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/replicalimit-param-prod.yaml\napiVersion: rules.example.com/v1\nkind: ReplicaLimit", "completion": "metadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13350"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/replicalimit-param-prod.yaml\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "completion": "This yaml code Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. 
For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13351"} {"prompt": "Complete this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit", "completion": "metadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13352"} {"prompt": "Explain this yaml code:\n\napiVersion: rules.example.com/v1\nkind: ReplicaLimit\nmetadata:\n name: \"replica-limit-prod.example.com\"\nmaxReplicas: 100", "completion": "This yaml code Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. For a request to be admitted it must pass all evaluations.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | For each admission request, the API server evaluates CEL expressions of each (policy, binding, param) combination that match the request. 
For a request to be admitted it must pass all evaluations.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13353"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1alpha1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: example-binding\nspec:\n policyName: example-policy", "completion": " paramRef:\n selector:\n matchLabels:\n environment: test\n parameterNotFoundAction: Allow\n validationActions:\n - Deny", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. Allowed values are Ignore or Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13354"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1alpha1\nkind: ValidatingAdmissionPolicyBinding\nmetadata:\n name: example-binding\nspec:\n policyName: example-policy\n paramRef:\n selector:\n matchLabels:\n environment: test\n parameterNotFoundAction: Allow\n validationActions:\n - Deny", "completion": "This yaml code Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. Allowed values are Ignore or Fail.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | failurePolicy defines how mis-configurations and CEL expressions evaluating to error from the admission policy are handled. 
Allowed values are Ignore or Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13355"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/failure-policy-ignore.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:", "completion": "...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13356"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/failure-policy-ignore.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:\n...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "completion": "This yaml code Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. 
To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13357"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:", "completion": "...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13358"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nspec:\n...\nfailurePolicy: Ignore # The default is \"Fail\"\nvalidations:\n- expression: \"object.spec.xyz == params.x\"", "completion": "This yaml code Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | spec.validations[i].expression represents the expression which will be evaluated by CEL. 
To learn more, see the CEL language specification CEL expressions have access to the contents of the Admission", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13359"} {"prompt": "Complete this yaml code:\n\naccess/validating-admission-policy-match-conditions.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]", "completion": " operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13360"} {"prompt": "Explain this yaml code:\n\naccess/validating-admission-policy-match-conditions.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 
'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", "completion": "This yaml code Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13361"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]", "completion": " operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", 
"language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13362"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"*\"]\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n validations:\n - expression: \"!object.metadata.name.contains('demo') || object.metadata.namespace == 'demo'\"", "completion": "This yaml code Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | Match conditions have access to the same CEL variables as validation expressions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13363"} {"prompt": "Complete this yaml code:\n\naccess/validating-admission-policy-audit-annotation.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n 
resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13364"} {"prompt": "Explain this yaml code:\n\naccess/validating-admission-policy-audit-annotation.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "completion": "This yaml code Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13365"} 
{"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]", "completion": " apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13366"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"demo-policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas > 50\"\n messageExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"\n auditAnnotations:\n - key: \"high-replica-count\"\n valueExpression: \"'Deployment spec.replicas set to ' + string(object.spec.replicas)\"", "completion": "This yaml code Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event will look like:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | When an API request is validated with this admission policy, the resulting audit event 
will look like:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13367"} {"prompt": "Complete this yaml code:\n\naccess/deployment-replicas-policy.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit", "completion": " matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13368"} {"prompt": "Explain this yaml code:\n\naccess/deployment-replicas-policy.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | After creating a params object that limits the replicas to 3 
and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13369"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:", "completion": " resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13370"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n paramKind:\n apiVersion: rules.example.com/v1\n kind: ReplicaLimit\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n 
validations:\n - expression: \"object.spec.replicas <= params.maxReplicas\"\n messageExpression: \"'object.spec.replicas must be no greater than ' + string(params.maxReplicas)\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13371"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/typechecking.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The status will yield the following information:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13372"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/typechecking.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: 
[\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The status will yield the following information:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The status will yield the following information:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13373"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | The status will yield the following information:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13374"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"deploy-replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | The status will yield the 
following information:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | The status will yield the following information:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13375"} {"prompt": "Complete this text code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression", "completion": " warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "language": "text", "type": "code-example", "context": "Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. For example, the following policy definition", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13376"} {"prompt": "Explain this text code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "completion": "This text code Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. For example, the following policy definition", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | If multiple resources are matched in spec.matchConstraints, all of matched resources will be checked against. 
For example, the following policy definition", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13377"} {"prompt": "Complete this yaml code:\n\nvalidatingadmissionpolicy/typechecking-multiple-match.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13378"} {"prompt": "Explain this yaml code:\n\nvalidatingadmissionpolicy/typechecking-multiple-match.yaml\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13379"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:", "completion": " - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13380"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"replica-policy.example.com\"\nspec:\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\",\"replicasets\"]\n validations:\n - expression: \"object.replicas > 1\" # should be \"object.spec.replicas > 1\"\n message: \"must be replicated\"\n reason: Invalid", "completion": "This yaml code Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | will have multiple types and type checking result of each type in the warning message.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13381"} {"prompt": "Complete this text code:\n\nstatus:\n 
typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-", "completion": " apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^\n apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "language": "text", "type": "code-example", "context": "Validating Admission Policy | Type Checking has the following limitation:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13382"} {"prompt": "Explain this text code:\n\nstatus:\n typeChecking:\n expressionWarnings:\n - fieldRef: spec.validations[0].expression\n warning: |-\n apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^\n apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'\n | object.replicas > 1\n | ......^", "completion": "This text code Validating Admission Policy | Type Checking has the following limitation:", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | Type Checking has the following limitation:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13383"} {"prompt": "Complete this text code:\n\nspec:\n variables:\n - name: foo", "completion": " expression: \"'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'\"\n validations:\n - expression: variables.foo == 'bar'", "language": "text", "type": "code-example", "context": "Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. 
Both the result and potential", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13384"} {"prompt": "Explain this text code:\n\nspec:\n variables:\n - name: foo\n expression: \"'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'\"\n validations:\n - expression: variables.foo == 'bar'", "completion": "This text code Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. Both the result and potential", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | A variable is lazily evaluated when it is first referred. Any error that occurs during the evaluation will be reported during the evaluation of the referring expression. Both the result and potential", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13385"} {"prompt": "Complete this yaml code:\n\naccess/image-matches-namespace-environment.policy.yaml\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any containers that do not belong to the \"example.com\" organization (e.g. 
common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]", "completion": " apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13386"} {"prompt": "Explain this yaml code:\n\naccess/image-matches-namespace-environment.policy.yaml\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any 
containers that do not belong to the \"example.com\" organization (e.g. common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "completion": "This yaml code Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": 
"k8s_00521_code_explain_13387"} {"prompt": "Complete this yaml code:\n\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any containers that do not belong to the \"example.com\" organization (e.g. common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]", "completion": " operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? 
namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_completion_13388"} {"prompt": "Explain this yaml code:\n\n# This policy enforces that all containers of a deployment has the image repo match the environment label of its namespace.\n# Except for \"exempt\" deployments, or any containers that do not belong to the \"example.com\" organization (e.g. common sidecars).\n# For example, if the namespace has a label of {\"environment\": \"staging\"}, all container images must be either staging.example.com/*\n# or do not contain \"example.com\" at all, unless the deployment has {\"exempt\": \"true\"} label.\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingAdmissionPolicy\nmetadata:\n name: \"image-matches-namespace-environment.policy.example.com\"\nspec:\n failurePolicy: Fail\n matchConstraints:\n resourceRules:\n - apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\", \"UPDATE\"]\n resources: [\"deployments\"]\n variables:\n - name: environment\n expression: \"'environment' in namespaceObject.metadata.labels ? 
namespaceObject.metadata.labels['environment'] : 'prod'\"\n - name: exempt\n expression: \"'exempt' in object.metadata.labels && object.metadata.labels['exempt'] == 'true'\"\n - name: containers\n expression: \"object.spec.template.spec.containers\"\n - name: containersToCheck\n expression: \"variables.containers.filter(c, c.image.contains('example.com/'))\"\n validations:\n - expression: \"variables.exempt || variables.containersToCheck.all(c, c.image.startsWith(variables.environment + '.'))\"\n messageExpression: \"'only ' + variables.environment + ' images are allowed in namespace ' + namespaceObject.metadata.name\"", "completion": "This yaml code Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validating Admission Policy | With the policy bound to the namespace default, which is labeled environment: prod, the following attempt to create a deployment would be rejected.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13389"} {"prompt": "Explain this bash code:\n\nkubectl create deploy --image=dev.example.com/nginx invalid", "completion": "This bash code Validating Admission Policy | The error message is similar to this.", "language": "bash", "type": "kubectl-command_explanation", "context": "Validating Admission Policy | The error message is similar to this.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13390"} {"prompt": "Explain this text code:\n\nerror: failed to create deployment: deployments.apps \"invalid\" is forbidden: ValidatingAdmissionPolicy 'image-matches-namespace-environment.policy.example.com' with binding 'demo-binding-test.example.com' denied request: only prod images are allowed 
in namespace default", "completion": "This text code Validating Admission Policy | There are certain API kinds that are exempt from admission-time validation checks. For example, you can't create a ValidatingAdmissionPolicy that prevents changes to ValidatingAdmissionPolicyBindings.", "language": "text", "type": "code-example_explanation", "context": "Validating Admission Policy | There are certain API kinds that are exempt from admission-time validation checks. For example, you can't create a ValidatingAdmissionPolicy that prevents changes to ValidatingAdmissionPolicyBindings.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/", "id": "k8s_00521_code_explain_13391"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"LoadBalancer\" --port 8080", "completion": "This bash code Running Multiple Instances of Your App | Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports a", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Multiple Instances of Your App | Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports a", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13392"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nkubernetes-bootcamp 1/1 1 1 11m", "completion": "This text code Running Multiple Instances of Your App | We should have 1 Pod. If not, run the command again. This shows:", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | We should have 1 Pod. If not, run the command again. 
This shows:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13393"} {"prompt": "Explain this bash code:\n\nkubectl scale deployments/kubernetes-bootcamp --replicas=4", "completion": "This bash code Running Multiple Instances of Your App | To list your Deployments once again, use get deployments:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Multiple Instances of Your App | To list your Deployments once again, use get deployments:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13394"} {"prompt": "Explain this text code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho NODE_PORT=$NODE_PORT", "completion": "This text code Running Multiple Instances of Your App | Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13395"} {"prompt": "Complete this text code:\n\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-hs9dj | v=1", "completion": "Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! 
| Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1", "language": "text", "type": "code-example", "context": "Running Multiple Instances of Your App | If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_completion_13396"} {"prompt": "Explain this text code:\n\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-hs9dj | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1", "completion": "This text code Running Multiple Instances of Your App | If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13397"} {"prompt": "Explain this text code:\n\nhttp://127.0.0.1:51082\n! 
Because you are using a Docker driver on darwin, the terminal needs to be open to run it.", "completion": "This text code Running Multiple Instances of Your App | Then use the given URL to access the app:", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | Then use the given URL to access the app:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13398"} {"prompt": "Explain this bash code:\n\nkubectl scale deployments/kubernetes-bootcamp --replicas=2", "completion": "This bash code Running Multiple Instances of Your App | List the Deployments to check if the change was applied with the get deployments subcommand:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Multiple Instances of Your App | List the Deployments to check if the change was applied with the get deployments subcommand:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13399"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"LoadBalancer\" --port 8080", "completion": "This shell code Running Multiple Instances of Your App | Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports a", "language": "shell", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. 
Kubernetes also supports a", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13400"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nkubernetes-bootcamp 1/1 1 1 11m", "completion": "This text code Running Multiple Instances of Your App | We should have 1 Pod. If not, run the command again. This shows:", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | We should have 1 Pod. If not, run the command again. This shows:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13401"} {"prompt": "Explain this shell code:\n\nkubectl scale deployments/kubernetes-bootcamp --replicas=4", "completion": "This shell code Running Multiple Instances of Your App | To list your Deployments once again, use get deployments:", "language": "shell", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | To list your Deployments once again, use get deployments:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13402"} {"prompt": "Explain this shell code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho NODE_PORT=$NODE_PORT", "completion": "This shell code Running Multiple Instances of Your App | Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:", "language": "shell", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13403"} {"prompt": "Complete this text code:\n\nHello Kubernetes bootcamp! 
| Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-hs9dj | v=1", "completion": "Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1", "language": "text", "type": "code-example", "context": "Running Multiple Instances of Your App | If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_completion_13404"} {"prompt": "Explain this text code:\n\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-hs9dj | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-wp67j | v=1\nHello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-644c5687f4-4hjvf | v=1", "completion": "This text code Running Multiple Instances of Your App | If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. 
This is because containers inside Docker Desktop are isolated from your host computer.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13405"} {"prompt": "Explain this text code:\n\nhttp://127.0.0.1:51082\n! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.", "completion": "This text code Running Multiple Instances of Your App | Then use the given URL to access the app:", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | Then use the given URL to access the app:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13406"} {"prompt": "Explain this shell code:\n\nkubectl scale deployments/kubernetes-bootcamp --replicas=2", "completion": "This shell code Running Multiple Instances of Your App | List the Deployments to check if the change was applied with the get deployments subcommand:", "language": "shell", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | List the Deployments to check if the change was applied with the get deployments subcommand:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13407"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"LoadBalancer\" --port 8080", "completion": "This bash code Running Multiple Instances of Your App | Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports a", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Multiple Instances of Your App | Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. 
Scaling will increase the number of Pods to the new desired state. Kubernetes also supports a", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13408"} {"prompt": "Explain this bash code:\n\nkubectl scale deployments/kubernetes-bootcamp --replicas=4", "completion": "This bash code Running Multiple Instances of Your App | To list your Deployments once again, use get deployments:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Multiple Instances of Your App | To list your Deployments once again, use get deployments:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13409"} {"prompt": "Explain this text code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho NODE_PORT=$NODE_PORT", "completion": "This text code Running Multiple Instances of Your App | Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:", "language": "text", "type": "code-example_explanation", "context": "Running Multiple Instances of Your App | Next, we’ll do a curl to the exposed IP address and port. 
Execute the command multiple times:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13410"} {"prompt": "Explain this bash code:\n\nkubectl scale deployments/kubernetes-bootcamp --replicas=2", "completion": "This bash code Running Multiple Instances of Your App | List the Deployments to check if the change was applied with the get deployments subcommand:", "language": "bash", "type": "kubectl-command_explanation", "context": "Running Multiple Instances of Your App | List the Deployments to check if the change was applied with the get deployments subcommand:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/", "id": "k8s_00522_code_explain_13411"} {"prompt": "Explain this bash code:\n\nkubectl create secret (docker-registry | generic | tls)", "completion": "This bash code kubectl create secret | help for secret", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create secret | help for secret", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret/", "id": "k8s_00523_code_explain_13412"} {"prompt": "Explain this bash code:\n\nkubectl debug (POD | TYPE[[.VERSION].GROUP]/NAME) [ -- COMMAND [args...] 
]", "completion": "This bash code kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_debug/", "id": "k8s_00525_code_explain_13414"} {"prompt": "Complete this text code:\n\n# Create an interactive debugging session in pod mypod and immediately attach to it.\n kubectl debug mypod -it --image=busybox\n \n # Create an interactive debugging session for the pod in the file pod.yaml and immediately attach to it.\n # (requires the EphemeralContainers feature to be enabled in the cluster)\n kubectl debug -f pod.yaml -it --image=busybox\n \n # Create a debug container named debugger using a custom automated debugging image.\n kubectl debug --image=myproj/debug-tools -c debugger mypod\n \n # Create a copy of mypod adding a debug container and attach to it\n kubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": " \n # Create a copy of mypod changing the command of mycontainer\n kubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh\n \n # Create a copy of mypod changing all container images to busybox\n kubectl debug mypod --copy-to=my-debugger --set-image=*=busybox\n \n # Create a copy of mypod adding a debug container and changing container images\n kubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug\n \n # Create an interactive debugging session on a node and immediately attach to it.\n # The container will run in the host namespaces and the host's filesystem will be mounted at /host\n kubectl debug node/mynode -it --image=busybox", "language": "text", "type": "code-example", "context": "kubectl debug | If specified, everything after -- will be passed to 
the new container as Args instead of Command.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_debug/", "id": "k8s_00525_code_completion_13415"} {"prompt": "Explain this text code:\n\n# Create an interactive debugging session in pod mypod and immediately attach to it.\n kubectl debug mypod -it --image=busybox\n \n # Create an interactive debugging session for the pod in the file pod.yaml and immediately attach to it.\n # (requires the EphemeralContainers feature to be enabled in the cluster)\n kubectl debug -f pod.yaml -it --image=busybox\n \n # Create a debug container named debugger using a custom automated debugging image.\n kubectl debug --image=myproj/debug-tools -c debugger mypod\n \n # Create a copy of mypod adding a debug container and attach to it\n kubectl debug mypod -it --image=busybox --copy-to=my-debugger\n \n # Create a copy of mypod changing the command of mycontainer\n kubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh\n \n # Create a copy of mypod changing all container images to busybox\n kubectl debug mypod --copy-to=my-debugger --set-image=*=busybox\n \n # Create a copy of mypod adding a debug container and changing container images\n kubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug\n \n # Create an interactive debugging session on a node and immediately attach to it.\n # The container will run in the host namespaces and the host's filesystem will be mounted at /host\n kubectl debug node/mynode -it --image=busybox", "completion": "This text code kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "language": "text", "type": "code-example_explanation", "context": "kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_debug/", "id": 
"k8s_00525_code_explain_13416"} {"prompt": "Explain this bash code:\n\nkubectl debug (POD | TYPE[[.VERSION].GROUP]/NAME) [ -- COMMAND [args...] ]", "completion": "This bash code kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_debug/", "id": "k8s_00525_code_explain_13417"} {"prompt": "Complete this text code:\n\n# Create an interactive debugging session in pod mypod and immediately attach to it.\n kubectl debug mypod -it --image=busybox\n \n # Create an interactive debugging session for the pod in the file pod.yaml and immediately attach to it.\n # (requires the EphemeralContainers feature to be enabled in the cluster)\n kubectl debug -f pod.yaml -it --image=busybox\n \n # Create a debug container named debugger using a custom automated debugging image.\n kubectl debug --image=myproj/debug-tools -c debugger mypod\n \n # Create a copy of mypod adding a debug container and attach to it\n kubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": " \n # Create a copy of mypod changing the command of mycontainer\n kubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh\n \n # Create a copy of mypod changing all container images to busybox\n kubectl debug mypod --copy-to=my-debugger --set-image=*=busybox\n \n # Create a copy of mypod adding a debug container and changing container images\n kubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug\n \n # Create an interactive debugging session on a node and immediately attach to it.\n # The container will run in the host namespaces and the host's filesystem will be mounted at /host\n kubectl debug node/mynode 
-it --image=busybox", "language": "text", "type": "code-example", "context": "kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_debug/", "id": "k8s_00525_code_completion_13418"} {"prompt": "Explain this text code:\n\n# Create an interactive debugging session in pod mypod and immediately attach to it.\n kubectl debug mypod -it --image=busybox\n \n # Create an interactive debugging session for the pod in the file pod.yaml and immediately attach to it.\n # (requires the EphemeralContainers feature to be enabled in the cluster)\n kubectl debug -f pod.yaml -it --image=busybox\n \n # Create a debug container named debugger using a custom automated debugging image.\n kubectl debug --image=myproj/debug-tools -c debugger mypod\n \n # Create a copy of mypod adding a debug container and attach to it\n kubectl debug mypod -it --image=busybox --copy-to=my-debugger\n \n # Create a copy of mypod changing the command of mycontainer\n kubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh\n \n # Create a copy of mypod changing all container images to busybox\n kubectl debug mypod --copy-to=my-debugger --set-image=*=busybox\n \n # Create a copy of mypod adding a debug container and changing container images\n kubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug\n \n # Create an interactive debugging session on a node and immediately attach to it.\n # The container will run in the host namespaces and the host's filesystem will be mounted at /host\n kubectl debug node/mynode -it --image=busybox", "completion": "This text code kubectl debug | If specified, everything after -- will be passed to the new container as Args instead of Command.", "language": "text", "type": "code-example_explanation", "context": "kubectl debug | If specified, everything after -- will be passed 
to the new container as Args instead of Command.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_debug/", "id": "k8s_00525_code_explain_13419"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n name: \"pod-policy.example.com\"\nwebhooks:\n- name: \"pod-policy.example.com\"\n rules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]", "completion": " resources: [\"pods\"]\n scope: \"Namespaced\"\n clientConfig:\n service:\n namespace: \"example-namespace\"\n name: \"example-service\"\n caBundle: <CA_BUNDLE>\n admissionReviewVersions: [\"v1\"]\n sideEffects: None\n timeoutSeconds: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. \"*\" means that there are no scope restrictions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13420"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n name: \"pod-policy.example.com\"\nwebhooks:\n- name: \"pod-policy.example.com\"\n rules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n scope: \"Namespaced\"\n clientConfig:\n service:\n namespace: \"example-namespace\"\n name: \"example-service\"\n caBundle: <CA_BUNDLE>\n admissionReviewVersions: [\"v1\"]\n sideEffects: None\n timeoutSeconds: 5", "completion": "This yaml code Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. 
\"∗\" means that there are no scope restrictions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. \"∗\" means that there are no scope restrictions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13421"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13422"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "completion": "This yaml code Dynamic Admission Control | # Deprecated in v1.17 in favor of 
apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13423"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1\napiVersion: apiserver.k8s.io/v1alpha1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1", "completion": " kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. 
See the webhook configuration section for details about each config field.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13424"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1\napiVersion: apiserver.k8s.io/v1alpha1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "completion": "This yaml code Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. See the webhook configuration section for details about each config field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. 
See the webhook configuration section for details about each config field.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13425"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.\n# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.\n#\n# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:\n# - name: webhook1.ns1.svc:8443\n# user: ...\n# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. 
Examples:\n# A webhook with `url: https://www.example.com`:\n# - name: www.example.com\n# user: ...", "completion": "#\n# A webhook with `url: https://www.example.com:443`:\n# - name: www.example.com:443\n# user: ...\n#\n# A webhook with `url: https://www.example.com:8443`:\n# - name: www.example.com:8443\n# user: ...\n#\n- name: 'webhook1.ns1.svc'\n user:\n client-certificate-data: \"<pem encoded certificate>\"\n client-key-data: \"<pem encoded key>\"\n# The `name` supports using * to wildcard-match prefixing segments.\n- name: '*.webhook-company.org'\n user:\n password: \"<password>\"\n username: \"<name>\"\n# '*' is the default match.\n- name: '*'\n user:\n token: \"<token>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13426"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.\n# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.\n#\n# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:\n# - name: webhook1.ns1.svc:8443\n# user: ...\n# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. 
Examples:\n# A webhook with `url: https://www.example.com`:\n# - name: www.example.com\n# user: ...\n#\n# A webhook with `url: https://www.example.com:443`:\n# - name: www.example.com:443\n# user: ...\n#\n# A webhook with `url: https://www.example.com:8443`:\n# - name: www.example.com:8443\n# user: ...\n#\n- name: 'webhook1.ns1.svc'\n user:\n client-certificate-data: \"<pem encoded certificate>\"\n client-key-data: \"<pem encoded key>\"\n# The `name` supports using * to wildcard-match prefixing segments.\n- name: '*.webhook-company.org'\n user:\n password: \"<password>\"\n username: \"<name>\"\n# '*' is the default match.\n- name: '*'\n user:\n token: \"<token>\"", "completion": "This yaml code Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13427"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n admissionReviewVersions: [\"v1\", \"v1beta1\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. 
Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13428"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n admissionReviewVersions: [\"v1\", \"v1beta1\"]", "completion": "This yaml code Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13429"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"request\": {\n # Random uid uniquely identifying this admission call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n\n # Fully-qualified group/version/kind of the incoming object\n \"kind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified\n \"resource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n \"subResource\": \"scale\",\n\n # Fully-qualified group/version/kind of the incoming object in the original request to the API server\n # This only differs from `kind` if the 
webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestKind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified in the original request to the API server\n # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestResource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestSubResource\": \"scale\",\n\n # Name of the resource being modified\n \"name\": \"my-deployment\",\n\n # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)", "completion": " \"namespace\": \"my-namespace\",\n\n # operation can be CREATE, UPDATE, DELETE, or CONNECT\n \"operation\": \"UPDATE\",\n\n \"userInfo\": {\n # Username of the authenticated user making the request to the API server\n \"username\": \"admin\",\n\n # UID of the authenticated user making the request to the API server\n \"uid\": \"014fbff9a07c\",\n\n # Group memberships of the authenticated user making the request to the API server\n \"groups\": [\n \"system:authenticated\",\n \"my-admin-group\"\n ],\n\n # Arbitrary extra info associated with the user making the request to the API server\n # This is populated by the API server authentication layer\n \"extra\": {\n \"some-key\": [\n \"some-value1\",\n \"some-value2\"\n ]\n }\n },\n\n # object is the new object being admitted. 
It is null for DELETE operations\n \"object\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # oldObject is the existing object. It is null for CREATE and CONNECT operations\n \"oldObject\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # options contain the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions,\n # UpdateOptions, or DeleteOptions. It is null for CONNECT operations\n \"options\": {\n \"apiVersion\": \"meta.k8s.io/v1\",\n \"kind\": \"UpdateOptions\"\n },\n\n # dryRun indicates the API request is running in dry run mode and will not be persisted\n # Webhooks with side effects should avoid actuating those side effects when dryRun is true\n \"dryRun\": false\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13430"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"request\": {\n # Random uid uniquely identifying this admission call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n\n # Fully-qualified group/version/kind of the incoming object\n \"kind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified\n \"resource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n \"subResource\": \"scale\",\n\n # Fully-qualified group/version/kind of the incoming object in the original request to the API server\n # This only differs from `kind` if the 
webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestKind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified in the original request to the API server\n # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestResource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestSubResource\": \"scale\",\n\n # Name of the resource being modified\n \"name\": \"my-deployment\",\n\n # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)\n \"namespace\": \"my-namespace\",\n\n # operation can be CREATE, UPDATE, DELETE, or CONNECT\n \"operation\": \"UPDATE\",\n\n \"userInfo\": {\n # Username of the authenticated user making the request to the API server\n \"username\": \"admin\",\n\n # UID of the authenticated user making the request to the API server\n \"uid\": \"014fbff9a07c\",\n\n # Group memberships of the authenticated user making the request to the API server\n \"groups\": [\n \"system:authenticated\",\n \"my-admin-group\"\n ],\n\n # Arbitrary extra info associated with the user making the request to the API server\n # This is populated by the API server authentication layer\n \"extra\": {\n \"some-key\": [\n \"some-value1\",\n \"some-value2\"\n ]\n }\n },\n\n # object is the new object being admitted. 
It is null for DELETE operations\n \"object\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # oldObject is the existing object. It is null for CREATE and CONNECT operations\n \"oldObject\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # options contain the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions,\n # UpdateOptions, or DeleteOptions. It is null for CONNECT operations\n \"options\": {\n \"apiVersion\": \"meta.k8s.io/v1\",\n \"kind\": \"UpdateOptions\"\n },\n\n # dryRun indicates the API request is running in dry run mode and will not be persisted\n # Webhooks with side effects should avoid actuating those side effects when dryRun is true\n \"dryRun\": false\n }\n}", "completion": "This json code Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13431"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {", "completion": " \"uid\": \"<value from request.uid>\",\n \"allowed\": true\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Example of a minimal response from a webhook to allow a request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13432"} {"prompt": "Explain
this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true\n }\n}", "completion": "This json code Dynamic Admission Control | Example of a minimal response from a webhook to allow a request:", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Example of a minimal response from a webhook to allow a request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13433"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {", "completion": " \"uid\": \"<value from request.uid>\",\n \"allowed\": false\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. The specified status object is returned to the user. See the API documentatio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13434"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false\n }\n}", "completion": "This json code Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. The specified status object is returned to the user. See the API documentatio", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. 
The specified status object is returned to the user. See the API documentatio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13435"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false,", "completion": " \"status\": {\n \"code\": 403,\n \"message\": \"You cannot do this because it is Tuesday and your name starts with A\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. The only currently supported", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13436"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false,\n \"status\": {\n \"code\": 403,\n \"message\": \"You cannot do this because it is Tuesday and your name starts with A\"\n }\n }\n}", "completion": "This json code Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. The only currently supported", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. 
The only currently supported", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13437"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"allowed\": true,\n \"patchType\": \"JSONPatch\",\n \"patch\": \"W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=\"\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. Warnings can be sent with allowed or rejected a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13438"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,\n \"patchType\": \"JSONPatch\",\n \"patch\": \"W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=\"\n }\n}", "completion": "This json code Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. Warnings can be sent with allowed or rejected a", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. 
Warnings can be sent with allowed or rejected a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13439"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,", "completion": " \"warnings\": [\n \"duplicate envvar entries specified with name MY_ENV\",\n \"memory request less than 4MB specified for container mycontainer, which will not start successfully\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13440"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,\n \"warnings\": [\n \"duplicate envvar entries specified with name MY_ENV\",\n \"memory request less than 4MB specified for container mycontainer, which will not start successfully\"\n ]\n }\n}", "completion": "This json code Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. 
The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13441"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\n...\nwebhooks:\n- name: my-webhook.example.com\n rules:", "completion": " - operations: [\"CREATE\", \"UPDATE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\", \"v1beta1\"]\n resources: [\"deployments\", \"replicasets\"]\n scope: \"Namespaced\"\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13442"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\n...\nwebhooks:\n- name: my-webhook.example.com\n rules:\n - operations: [\"CREATE\", \"UPDATE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\", \"v1beta1\"]\n resources: [\"deployments\", \"replicasets\"]\n scope: \"Namespaced\"\n ...", "completion": "This yaml code Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13443"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:", "completion": " - 
operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match update requests for all status subresources in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13444"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | Match update requests for all status subresources in all API groups and versions:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match update requests for all status subresources in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13445"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:", "completion": " - operations: [\"UPDATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*/status\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. 
If specified, the objectSelector is evaluated again", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13446"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:\n - operations: [\"UPDATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*/status\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. If specified, the objectSelector is evaluated again", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. 
If specified, the objectSelector is evaluated again", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13447"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n objectSelector:\n matchLabels:", "completion": " foo: bar\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13448"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n objectSelector:\n matchLabels:\n foo: bar\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | See labels concept for more examples of label selectors.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13449"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: runlevel", "completion": " operator: NotIn\n values: [\"0\",\"1\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "language": 
"yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13450"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: runlevel\n operator: NotIn\n values: [\"0\",\"1\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13451"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: environment", "completion": " operator: In\n values: [\"prod\",\"staging\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | See labels concept for 
more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13452"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: In\n values: [\"prod\",\"staging\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | See labels concept for more examples of label selectors.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13453"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n matchPolicy: Equivalent", "completion": " rules:\n - operations: [\"CREATE\",\"UPDATE\",\"DELETE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n resources: [\"deployments\"]\n scope: \"Namespaced\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The matchPolicy for an admission webhook defaults to Equivalent.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13454"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: [\"CREATE\",\"UPDATE\",\"DELETE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n resources: 
[\"deployments\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | The matchPolicy for an admission webhook defaults to Equivalent.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The matchPolicy for an admission webhook defaults to Equivalent.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13455"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['*']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Ignore' # Fail-open (optional)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.", "completion": " expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n \n # This example illustrates the use of the 'authorizer'. The authorization check is more expensive\n # than a simple expression, so in this example it is scoped to only RBAC requests by using a second\n # webhook. 
Both webhooks can be served by the same endpoint.\n - name: rbac.my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['rbac.authorization.k8s.io']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Fail' # Fail-closed (the default)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'breakglass'\n # Skip requests made by users authorized to 'breakglass' on this webhook.\n # The 'breakglass' API verb does not need to exist outside this check.\n expression: '!authorizer.group(\"admissionregistration.k8s.io\").resource(\"validatingwebhookconfigurations\").name(\"my-webhook.example.com\").check(\"breakglass\").allowed()'", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match conditions have access to the following CEL variables:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13456"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['*']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Ignore' # Fail-open (optional)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests 
made by non-node users.\n - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n \n # This example illustrates the use of the 'authorizer'. The authorization check is more expensive\n # than a simple expression, so in this example it is scoped to only RBAC requests by using a second\n # webhook. Both webhooks can be served by the same endpoint.\n - name: rbac.my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['rbac.authorization.k8s.io']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Fail' # Fail-closed (the default)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'breakglass'\n # Skip requests made by users authorized to 'breakglass' on this webhook.\n # The 'breakglass' API verb does not need to exist outside this check.\n expression: '!authorizer.group(\"admissionregistration.k8s.io\").resource(\"validatingwebhookconfigurations\").name(\"my-webhook.example.com\").check(\"breakglass\").allowed()'", "completion": "This yaml code Dynamic Admission Control | Match conditions have access to the following CEL variables:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match conditions have access to the following CEL variables:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13457"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:", "completion": "- name: my-webhook.example.com\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission 
Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13458"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"", "completion": "This yaml code Dynamic Admission Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13459"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:", "completion": " caBundle: <CA_BUNDLE>\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. 
Some webhooks, however, make out-of-band changes as part of processing admission requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13460"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:\n caBundle: <CA_BUNDLE>\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234", "completion": "This yaml code Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. Some webhooks, however, make out-of-band changes as part of processing admission requests.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. Some webhooks, however, make out-of-band changes as part of processing admission requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13461"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n - name: my-webhook.example.com\n sideEffects: NoneOnDryRun", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. 
timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13462"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n sideEffects: NoneOnDryRun", "completion": "This yaml code Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13463"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n - name: my-webhook.example.com\n timeoutSeconds: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13464"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n timeoutSeconds: 2", "completion": "This yaml code Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13465"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n reinvocationPolicy: IfNeeded", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission webhooks, since any change the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13466"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n reinvocationPolicy: IfNeeded", "completion": "This yaml code Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission webhooks, since any change the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. 
This is true for all mutating admission webhooks, since any change the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13467"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n failurePolicy: Fail", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The default failurePolicy for an admission webhook is Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13468"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n failurePolicy: Fail", "completion": "This yaml code Dynamic Admission Control | The default failurePolicy for an admission webhook is Fail.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The default failurePolicy for an admission webhook is Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13469"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_1_index_2\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook.example.com\\\",\\\"mutated\\\": false}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. 
The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13470"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_1_index_2\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook.example.com\\\",\\\"mutated\\\": false}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13471"} {"prompt": "Complete this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",", "completion": " \"webhook\": \"my-webhook.example.com\",\n \"mutated\": false\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. 
The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13472"} {"prompt": "Explain this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook.example.com\",\n \"mutated\": false\n}", "completion": "This text code Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13473"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_0_index_0\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"mutated\\\": true}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13474"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_0_index_0\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"mutated\\\": true}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13475"} {"prompt": "Complete this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",", "completion": " \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"mutated\": true\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13476"} {"prompt": "Explain this text code:\n\n# the 
annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"mutated\": true\n}", "completion": "This text code Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13477"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"patch.webhook.admission.k8s.io/round_1_index_3\": \"{\\\"configuration\\\":\\\"my-other-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"patch\\\":[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/data/mutation-stage\\\",\\\"value\\\":\\\"yes\\\"}],\\\"patchType\\\":\\\"JSONPatch\\\"}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13478"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"patch.webhook.admission.k8s.io/round_1_index_3\": \"{\\\"configuration\\\":\\\"my-other-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"patch\\\":[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/data/mutation-stage\\\",\\\"value\\\":\\\"yes\\\"}],\\\"patchType\\\":\\\"JSONPatch\\\"}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13479"} {"prompt": "Complete this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-other-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"patchType\": \"JSONPatch\",\n \"patch\": [", "completion": " {\n \"op\": \"add\",\n \"path\": \"/data/mutation-stage\",\n \"value\": \"yes\"\n }\n ]\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13480"} {"prompt": "Explain this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-other-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"patchType\": \"JSONPatch\",\n \"patch\": [\n {\n \"op\": \"add\",\n \"path\": \"/data/mutation-stage\",\n \"value\": \"yes\"\n }\n ]\n}", "completion": "This text code Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13481"} {"prompt": "Complete this text code:\n\n# HELP apiserver_admission_webhook_rejection_count [ALPHA] Admission webhook rejection count, identified by name and broken out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.\n# TYPE apiserver_admission_webhook_rejection_count counter", "completion": "apiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"always-timeout-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"invalid-admission-response-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"no_error\",name=\"deny-unwanted-configmap-data.example.com\",operation=\"CREATE\",rejection_code=\"400\",type=\"validating\"} 13", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | For recommendations and considerations when writing mutating admission webhooks, see Admission Webhooks Good Practices.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13482"} {"prompt": "Explain this text code:\n\n# HELP apiserver_admission_webhook_rejection_count [ALPHA] Admission webhook rejection count, identified by name and broken 
out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.\n# TYPE apiserver_admission_webhook_rejection_count counter\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"always-timeout-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"invalid-admission-response-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"no_error\",name=\"deny-unwanted-configmap-data.example.com\",operation=\"CREATE\",rejection_code=\"400\",type=\"validating\"} 13", "completion": "This text code Dynamic Admission Control | For recommendations and considerations when writing mutating admission webhooks, see Admission Webhooks Good Practices.", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | For recommendations and considerations when writing mutating admission webhooks, see Admission Webhooks Good Practices.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13483"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n name: \"pod-policy.example.com\"\nwebhooks:\n- name: \"pod-policy.example.com\"\n rules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]", "completion": " resources: [\"pods\"]\n scope: \"Namespaced\"\n 
clientConfig:\n service:\n namespace: \"example-namespace\"\n name: \"example-service\"\n caBundle: <CA_BUNDLE>\n admissionReviewVersions: [\"v1\"]\n sideEffects: None\n timeoutSeconds: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. \"∗\" means that there are no scope restrictions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13484"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n name: \"pod-policy.example.com\"\nwebhooks:\n- name: \"pod-policy.example.com\"\n rules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n scope: \"Namespaced\"\n clientConfig:\n service:\n namespace: \"example-namespace\"\n name: \"example-service\"\n caBundle: <CA_BUNDLE>\n admissionReviewVersions: [\"v1\"]\n sideEffects: None\n timeoutSeconds: 5", "completion": "This yaml code Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. \"∗\" means that there are no scope restrictions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. 
\"∗\" means that there are no scope restrictions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13485"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13486"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "completion": "This yaml code Dynamic Admission Control | # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | # Deprecated in v1.17 in favor of 
apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13487"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1\napiVersion: apiserver.k8s.io/v1alpha1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1", "completion": " kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. 
See the webhook configuration section for details about each config field.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13488"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1\napiVersion: apiserver.k8s.io/v1alpha1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "completion": "This yaml code Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. See the webhook configuration section for details about each config field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. 
See the webhook configuration section for details about each config field.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13489"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.\n# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.\n#\n# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:\n# - name: webhook1.ns1.svc:8443\n# user: ...\n# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. 
Examples:\n# A webhook with `url: https://www.example.com`:\n# - name: www.example.com\n# user: ...", "completion": "#\n# A webhook with `url: https://www.example.com:443`:\n# - name: www.example.com:443\n# user: ...\n#\n# A webhook with `url: https://www.example.com:8443`:\n# - name: www.example.com:8443\n# user: ...\n#\n- name: 'webhook1.ns1.svc'\n user:\n client-certificate-data: \"<pem encoded certificate>\"\n client-key-data: \"<pem encoded key>\"\n# The `name` supports using * to wildcard-match prefixing segments.\n- name: '*.webhook-company.org'\n user:\n password: \"<password>\"\n username: \"<name>\"\n# '*' is the default match.\n- name: '*'\n user:\n token: \"<token>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13490"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.\n# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.\n#\n# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:\n# - name: webhook1.ns1.svc:8443\n# user: ...\n# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. 
Examples:\n# A webhook with `url: https://www.example.com`:\n# - name: www.example.com\n# user: ...\n#\n# A webhook with `url: https://www.example.com:443`:\n# - name: www.example.com:443\n# user: ...\n#\n# A webhook with `url: https://www.example.com:8443`:\n# - name: www.example.com:8443\n# user: ...\n#\n- name: 'webhook1.ns1.svc'\n user:\n client-certificate-data: \"<pem encoded certificate>\"\n client-key-data: \"<pem encoded key>\"\n# The `name` supports using * to wildcard-match prefixing segments.\n- name: '*.webhook-company.org'\n user:\n password: \"<password>\"\n username: \"<name>\"\n# '*' is the default match.\n- name: '*'\n user:\n token: \"<token>\"", "completion": "This yaml code Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13491"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n admissionReviewVersions: [\"v1\", \"v1beta1\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. 
Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13492"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n admissionReviewVersions: [\"v1\", \"v1beta1\"]", "completion": "This yaml code Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13493"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"request\": {\n # Random uid uniquely identifying this admission call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n\n # Fully-qualified group/version/kind of the incoming object\n \"kind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified\n \"resource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n \"subResource\": \"scale\",\n\n # Fully-qualified group/version/kind of the incoming object in the original request to the API server\n # This only differs from `kind` if the 
webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestKind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified in the original request to the API server\n # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestResource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestSubResource\": \"scale\",\n\n # Name of the resource being modified\n \"name\": \"my-deployment\",\n\n # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)", "completion": " \"namespace\": \"my-namespace\",\n\n # operation can be CREATE, UPDATE, DELETE, or CONNECT\n \"operation\": \"UPDATE\",\n\n \"userInfo\": {\n # Username of the authenticated user making the request to the API server\n \"username\": \"admin\",\n\n # UID of the authenticated user making the request to the API server\n \"uid\": \"014fbff9a07c\",\n\n # Group memberships of the authenticated user making the request to the API server\n \"groups\": [\n \"system:authenticated\",\n \"my-admin-group\"\n ],\n\n # Arbitrary extra info associated with the user making the request to the API server\n # This is populated by the API server authentication layer\n \"extra\": {\n \"some-key\": [\n \"some-value1\",\n \"some-value2\"\n ]\n }\n },\n\n # object is the new object being admitted. 
It is null for DELETE operations\n \"object\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # oldObject is the existing object. It is null for CREATE and CONNECT operations\n \"oldObject\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # options contain the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions,\n # UpdateOptions, or DeleteOptions. It is null for CONNECT operations\n \"options\": {\n \"apiVersion\": \"meta.k8s.io/v1\",\n \"kind\": \"UpdateOptions\"\n },\n\n # dryRun indicates the API request is running in dry run mode and will not be persisted\n # Webhooks with side effects should avoid actuating those side effects when dryRun is true\n \"dryRun\": false\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13494"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"request\": {\n # Random uid uniquely identifying this admission call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n\n # Fully-qualified group/version/kind of the incoming object\n \"kind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified\n \"resource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n \"subResource\": \"scale\",\n\n # Fully-qualified group/version/kind of the incoming object in the original request to the API server\n # This only differs from `kind` if 
the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestKind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified in the original request to the API server\n # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestResource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestSubResource\": \"scale\",\n\n # Name of the resource being modified\n \"name\": \"my-deployment\",\n\n # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)\n \"namespace\": \"my-namespace\",\n\n # operation can be CREATE, UPDATE, DELETE, or CONNECT\n \"operation\": \"UPDATE\",\n\n \"userInfo\": {\n # Username of the authenticated user making the request to the API server\n \"username\": \"admin\",\n\n # UID of the authenticated user making the request to the API server\n \"uid\": \"014fbff9a07c\",\n\n # Group memberships of the authenticated user making the request to the API server\n \"groups\": [\n \"system:authenticated\",\n \"my-admin-group\"\n ],\n\n # Arbitrary extra info associated with the user making the request to the API server\n # This is populated by the API server authentication layer\n \"extra\": {\n \"some-key\": [\n \"some-value1\",\n \"some-value2\"\n ]\n }\n },\n\n # object is the new object being admitted. 
It is null for DELETE operations\n \"object\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # oldObject is the existing object. It is null for CREATE and CONNECT operations\n \"oldObject\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # options contain the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions,\n # UpdateOptions, or DeleteOptions. It is null for CONNECT operations\n \"options\": {\n \"apiVersion\": \"meta.k8s.io/v1\",\n \"kind\": \"UpdateOptions\"\n },\n\n # dryRun indicates the API request is running in dry run mode and will not be persisted\n # Webhooks with side effects should avoid actuating those side effects when dryRun is true\n \"dryRun\": false\n }\n}", "completion": "This yaml code Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13495"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {", "completion": " \"uid\": \"<value from request.uid>\",\n \"allowed\": true\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Example of a minimal response from a webhook to forbid a request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13496"} {"prompt": 
"Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true\n }\n}", "completion": "This json code Dynamic Admission Control | Example of a minimal response from a webhook to permit a request:", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Example of a minimal response from a webhook to permit a request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13497"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {", "completion": " \"uid\": \"<value from request.uid>\",\n \"allowed\": false\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. The specified status object is returned to the user. See the API documentatio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13498"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false\n }\n}", "completion": "This json code Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. The specified status object is returned to the user. See the API documentatio", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. 
The specified status object is returned to the user. See the API documentatio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13499"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false,", "completion": " \"status\": {\n \"code\": 403,\n \"message\": \"You cannot do this because it is Tuesday and your name starts with A\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. The only currently supported", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13500"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false,\n \"status\": {\n \"code\": 403,\n \"message\": \"You cannot do this because it is Tuesday and your name starts with A\"\n }\n }\n}", "completion": "This json code Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. The only currently supported", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. 
The only currently supported", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13501"} {"prompt": "Explain this json code:\n\n[{\"op\": \"add\", \"path\": \"/spec/replicas\", \"value\": 3}]", "completion": "This json code Dynamic Admission Control | Base64-encoded, this would be W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Base64-encoded, this would be W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13502"} {"prompt": "Explain this text code:\n\nW3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=", "completion": "This text code Dynamic Admission Control | So a webhook response to add that label would be:", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | So a webhook response to add that label would be:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13503"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"allowed\": true,\n \"patchType\": \"JSONPatch\",\n \"patch\": \"W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=\"\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. 
Warnings can be sent with allowed or rejected a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13504"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,\n \"patchType\": \"JSONPatch\",\n \"patch\": \"W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=\"\n }\n}", "completion": "This json code Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. Warnings can be sent with allowed or rejected a", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. Warnings can be sent with allowed or rejected a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13505"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,", "completion": " \"warnings\": [\n \"duplicate envvar entries specified with name MY_ENV\",\n \"memory request less than 4MB specified for container mycontainer, which will not start successfully\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. 
The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13506"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,\n \"warnings\": [\n \"duplicate envvar entries specified with name MY_ENV\",\n \"memory request less than 4MB specified for container mycontainer, which will not start successfully\"\n ]\n }\n}", "completion": "This json code Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. 
The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13507"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\n...\nwebhooks:\n- name: my-webhook.example.com\n rules:", "completion": " - operations: [\"CREATE\", \"UPDATE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\", \"v1beta1\"]\n resources: [\"deployments\", \"replicasets\"]\n scope: \"Namespaced\"\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13508"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\n...\nwebhooks:\n- name: my-webhook.example.com\n rules:\n - operations: [\"CREATE\", \"UPDATE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\", \"v1beta1\"]\n resources: [\"deployments\", \"replicasets\"]\n scope: \"Namespaced\"\n ...", "completion": "This yaml code Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13509"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:", "completion": " - 
operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13510"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13511"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:", "completion": " - operations: [\"UPDATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*/status\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. 
If specified, the objectSelector is evaluated again", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13512"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:\n - operations: [\"UPDATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*/status\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. If specified, the objectSelector is evaluated again", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. 
If specified, the objectSelector is evaluated again", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13513"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n objectSelector:\n matchLabels:", "completion": " foo: bar\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13514"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n objectSelector:\n matchLabels:\n foo: bar\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | See labels concept for more examples of label selectors.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13515"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: runlevel", "completion": " operator: NotIn\n values: [\"0\",\"1\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "language": 
"yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13516"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: runlevel\n operator: NotIn\n values: [\"0\",\"1\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13517"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: environment", "completion": " operator: In\n values: [\"prod\",\"staging\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | See labels concept for 
more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13518"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: In\n values: [\"prod\",\"staging\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | See labels concept for more examples of label selectors.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13519"} {"prompt": "Explain this text code:\n\napiGroups:[\"extensions\"], apiVersions:[\"v1beta1\"], resources:[\"deployments\"]", "completion": "This text code Dynamic Admission Control | This example shows a validating webhook that intercepts modifications to deployments (no matter the API group or version), and is always sent an apps/v1 Deployment object:", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | This example shows a validating webhook that intercepts modifications to deployments (no matter the API group or version), and is always sent an apps/v1 Deployment object:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13520"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n matchPolicy: Equivalent", "completion": " rules:\n - 
operations: [\"CREATE\",\"UPDATE\",\"DELETE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n resources: [\"deployments\"]\n scope: \"Namespaced\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The matchPolicy for an admission webhook defaults to Equivalent.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13521"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: [\"CREATE\",\"UPDATE\",\"DELETE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n resources: [\"deployments\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | The matchPolicy for an admission webhook defaults to Equivalent.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The matchPolicy for an admission webhook defaults to Equivalent.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13522"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['*']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Ignore' # Fail-open (optional)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n 
- name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.", "completion": " expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n \n # This example illustrates the use of the 'authorizer'. The authorization check is more expensive\n # than a simple expression, so in this example it is scoped to only RBAC requests by using a second\n # webhook. Both webhooks can be served by the same endpoint.\n - name: rbac.my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['rbac.authorization.k8s.io']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Fail' # Fail-closed (the default)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'breakglass'\n # Skip requests made by users authorized to 'breakglass' on this webhook.\n # The 'breakglass' API verb does not need to exist outside this check.\n expression: '!authorizer.group(\"admissionregistration.k8s.io\").resource(\"validatingwebhookconfigurations\").name(\"my-webhook.example.com\").check(\"breakglass\").allowed()'", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match conditions have access to the following CEL variables:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13523"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['*']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Ignore' # Fail-open 
(optional)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n \n # This example illustrates the use of the 'authorizer'. The authorization check is more expensive\n # than a simple expression, so in this example it is scoped to only RBAC requests by using a second\n # webhook. Both webhooks can be served by the same endpoint.\n - name: rbac.my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['rbac.authorization.k8s.io']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Fail' # Fail-closed (the default)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'breakglass'\n # Skip requests made by users authorized to 'breakglass' on this webhook.\n # The 'breakglass' API verb does not need to exist outside this check.\n expression: '!authorizer.group(\"admissionregistration.k8s.io\").resource(\"validatingwebhookconfigurations\").name(\"my-webhook.example.com\").check(\"breakglass\").allowed()'", "completion": "This yaml code Dynamic Admission Control | Match conditions have access to the following CEL variables:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match 
conditions have access to the following CEL variables:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13524"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:", "completion": "- name: my-webhook.example.com\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13525"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"", "completion": "This yaml code Dynamic Admission Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. 
The service namespace a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13526"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:", "completion": " caBundle: <CA_BUNDLE>\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. Some webhooks, however, make out-of-band changes as part of processing admission requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13527"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:\n caBundle: <CA_BUNDLE>\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234", "completion": "This yaml code Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. Some webhooks, however, make out-of-band changes as part of processing admission requests.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. 
Some webhooks, however, make out-of-band changes as part of processing admission requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13528"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n - name: my-webhook.example.com\n sideEffects: NoneOnDryRun", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13529"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n sideEffects: NoneOnDryRun", "completion": "This yaml code Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. 
timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13530"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n - name: my-webhook.example.com\n timeoutSeconds: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13531"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n timeoutSeconds: 2", "completion": "This yaml code Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13532"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n reinvocationPolicy: IfNeeded", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. 
This is true for all mutating admission webhooks, since any change the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13533"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n reinvocationPolicy: IfNeeded", "completion": "This yaml code Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission webhooks, since any change the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission webhooks, since any change the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13534"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n failurePolicy: Fail", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The default failurePolicy for an admission webhook is Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13535"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n failurePolicy: Fail", "completion": "This yaml code Dynamic Admission Control | The default failurePolicy for an admission webhook is Fail.", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "Dynamic Admission Control | The default failurePolicy for an admission webhook is Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13536"} {"prompt": "Explain this text code:\n\nmutation.webhook.admission.k8s.io/round_{round idx}_index_{order idx}", "completion": "This text code Dynamic Admission Control | For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is ordered the third in the mutating webhook chain, and didn't mutate the request object during the invo", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is ordered the third in the mutating webhook chain, and didn't mutate the request object during the invo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13537"} {"prompt": "Complete this yaml code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_1_index_2\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook.example.com\\\",\\\"mutated\\\": false}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. 
The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13538"} {"prompt": "Explain this yaml code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_1_index_2\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook.example.com\\\",\\\"mutated\\\": false}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This yaml code Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13539"} {"prompt": "Complete this yaml code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",", "completion": " \"webhook\": \"my-webhook.example.com\",\n \"mutated\": false\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. 
The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13540"} {"prompt": "Explain this yaml code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook.example.com\",\n \"mutated\": false\n}", "completion": "This yaml code Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13541"} {"prompt": "Complete this yaml code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_0_index_0\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"mutated\\\": true}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13542"} {"prompt": "Explain this yaml code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_0_index_0\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"mutated\\\": true}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This yaml code Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13543"} {"prompt": "Complete this yaml code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",", "completion": " \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"mutated\": true\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13544"} {"prompt": "Explain this yaml 
code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"mutated\": true\n}", "completion": "This yaml code Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13545"} {"prompt": "Explain this text code:\n\npatch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}", "completion": "This text code Dynamic Admission Control | For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is ordered the fourth in the mutating webhook chain, and responded with a JSON patch which got applied to", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | For example, the following annotation gets recorded for a webhook being reinvoked. 
The webhook is ordered the fourth in the mutating webhook chain, and responded with a JSON patch which got applied to", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13546"} {"prompt": "Complete this yaml code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"patch.webhook.admission.k8s.io/round_1_index_3\": \"{\\\"configuration\\\":\\\"my-other-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"patch\\\":[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/data/mutation-stage\\\",\\\"value\\\":\\\"yes\\\"}],\\\"patchType\\\":\\\"JSONPatch\\\"}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13547"} {"prompt": "Explain this yaml code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"patch.webhook.admission.k8s.io/round_1_index_3\": \"{\\\"configuration\\\":\\\"my-other-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"patch\\\":[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/data/mutation-stage\\\",\\\"value\\\":\\\"yes\\\"}],\\\"patchType\\\":\\\"JSONPatch\\\"}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This yaml code Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13548"} {"prompt": "Complete this yaml code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-other-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"patchType\": \"JSONPatch\",\n \"patch\": [", "completion": " {\n \"op\": \"add\",\n \"path\": \"/data/mutation-stage\",\n \"value\": \"yes\"\n }\n ]\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13549"} {"prompt": "Explain this yaml code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-other-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"patchType\": \"JSONPatch\",\n \"patch\": [\n {\n \"op\": \"add\",\n \"path\": \"/data/mutation-stage\",\n \"value\": \"yes\"\n }\n ]\n}", "completion": "This yaml code Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13550"} {"prompt": "Complete this text code:\n\n# HELP apiserver_admission_webhook_rejection_count [ALPHA] Admission webhook rejection count, identified by name and broken out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.\n# TYPE apiserver_admission_webhook_rejection_count counter", "completion": "apiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"always-timeout-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"invalid-admission-response-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"no_error\",name=\"deny-unwanted-configmap-data.example.com\",operation=\"CREATE\",rejection_code=\"400\",type=\"validating\"} 13", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | For recommendations and considerations when writing mutating admission webhooks, see Admission Webhooks Good Practices.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13551"} {"prompt": "Explain this text code:\n\n# HELP apiserver_admission_webhook_rejection_count [ALPHA] Admission webhook rejection count, identified by name and broken 
out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.\n# TYPE apiserver_admission_webhook_rejection_count counter\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"always-timeout-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"invalid-admission-response-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"no_error\",name=\"deny-unwanted-configmap-data.example.com\",operation=\"CREATE\",rejection_code=\"400\",type=\"validating\"} 13", "completion": "This text code Dynamic Admission Control | For recommendations and considerations when writing mutating admission webhooks, see Admission Webhooks Good Practices.", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | For recommendations and considerations when writing mutating admission webhooks, see Admission Webhooks Good Practices.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13552"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n name: \"pod-policy.example.com\"\nwebhooks:\n- name: \"pod-policy.example.com\"\n rules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]", "completion": " resources: [\"pods\"]\n scope: \"Namespaced\"\n 
clientConfig:\n service:\n namespace: \"example-namespace\"\n name: \"example-service\"\n caBundle: <CA_BUNDLE>\n admissionReviewVersions: [\"v1\"]\n sideEffects: None\n timeoutSeconds: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. \"*\" means that there are no scope restrictions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13553"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n name: \"pod-policy.example.com\"\nwebhooks:\n- name: \"pod-policy.example.com\"\n rules:\n - apiGroups: [\"\"]\n apiVersions: [\"v1\"]\n operations: [\"CREATE\"]\n resources: [\"pods\"]\n scope: \"Namespaced\"\n clientConfig:\n service:\n namespace: \"example-namespace\"\n name: \"example-service\"\n caBundle: <CA_BUNDLE>\n admissionReviewVersions: [\"v1\"]\n sideEffects: None\n timeoutSeconds: 5", "completion": "This yaml code Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. \"*\" means that there are no scope restrictions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The scope field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped resources (\"Namespaced\") will match this rule. 
\"∗\" means that there are no scope restrictions.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13554"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13555"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: WebhookAdmissionConfiguration\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "completion": "This yaml code Dynamic Admission Control | # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | # Deprecated in v1.17 in favor of 
apiserver.config.k8s.io/v1 apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: ValidatingAdmissionWebhook configuration: # Depre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13556"} {"prompt": "Complete this yaml code:\n\n# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1\napiVersion: apiserver.k8s.io/v1alpha1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1", "completion": " kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. 
See the webhook configuration section for details about each config field.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13557"} {"prompt": "Explain this yaml code:\n\n# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1\napiVersion: apiserver.k8s.io/v1alpha1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n configuration:\n # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, kind=WebhookAdmissionConfiguration\n apiVersion: apiserver.config.k8s.io/v1alpha1\n kind: WebhookAdmission\n kubeConfigFile: \"<path-to-kubeconfig-file>\"", "completion": "This yaml code Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. See the webhook configuration section for details about each config field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | For more information about AdmissionConfiguration, see the AdmissionConfiguration (v1) reference. 
See the webhook configuration section for details about each config field.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13558"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.\n# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.\n#\n# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:\n# - name: webhook1.ns1.svc:8443\n# user: ...\n# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. 
Examples:\n# A webhook with `url: https://www.example.com`:\n# - name: www.example.com\n# user: ...", "completion": "#\n# A webhook with `url: https://www.example.com:443`:\n# - name: www.example.com:443\n# user: ...\n#\n# A webhook with `url: https://www.example.com:8443`:\n# - name: www.example.com:8443\n# user: ...\n#\n- name: 'webhook1.ns1.svc'\n user:\n client-certificate-data: \"<pem encoded certificate>\"\n client-key-data: \"<pem encoded key>\"\n# The `name` supports using * to wildcard-match prefixing segments.\n- name: '*.webhook-company.org'\n user:\n password: \"<password>\"\n username: \"<name>\"\n# '*' is the default match.\n- name: '*'\n user:\n token: \"<token>\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13559"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.\n# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.\n#\n# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:\n# - name: webhook1.ns1.svc:8443\n# user: ...\n# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:\n# - name: webhook1.ns1.svc\n# user: ...\n#\n# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. 
Examples:\n# A webhook with `url: https://www.example.com`:\n# - name: www.example.com\n# user: ...\n#\n# A webhook with `url: https://www.example.com:443`:\n# - name: www.example.com:443\n# user: ...\n#\n# A webhook with `url: https://www.example.com:8443`:\n# - name: www.example.com:8443\n# user: ...\n#\n- name: 'webhook1.ns1.svc'\n user:\n client-certificate-data: \"<pem encoded certificate>\"\n client-key-data: \"<pem encoded key>\"\n# The `name` supports using * to wildcard-match prefixing segments.\n- name: '*.webhook-company.org'\n user:\n password: \"<password>\"\n username: \"<name>\"\n# '*' is the default match.\n- name: '*'\n user:\n token: \"<token>\"", "completion": "This yaml code Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Of course you need to set up the webhook server to handle these authentication requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13560"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n admissionReviewVersions: [\"v1\", \"v1beta1\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. 
Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13561"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n admissionReviewVersions: [\"v1\", \"v1beta1\"]", "completion": "This yaml code Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | admissionReviewVersions is a required field when creating webhook configurations. Webhooks are required to support at least one AdmissionReview version understood by the current and previous API serve", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13562"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"request\": {\n # Random uid uniquely identifying this admission call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n\n # Fully-qualified group/version/kind of the incoming object\n \"kind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified\n \"resource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n \"subResource\": \"scale\",\n\n # Fully-qualified group/version/kind of the incoming object in the original request to the API server\n # This only differs from `kind` if the 
webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestKind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified in the original request to the API server\n # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestResource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestSubResource\": \"scale\",\n\n # Name of the resource being modified\n \"name\": \"my-deployment\",\n\n # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)", "completion": " \"namespace\": \"my-namespace\",\n\n # operation can be CREATE, UPDATE, DELETE, or CONNECT\n \"operation\": \"UPDATE\",\n\n \"userInfo\": {\n # Username of the authenticated user making the request to the API server\n \"username\": \"admin\",\n\n # UID of the authenticated user making the request to the API server\n \"uid\": \"014fbff9a07c\",\n\n # Group memberships of the authenticated user making the request to the API server\n \"groups\": [\n \"system:authenticated\",\n \"my-admin-group\"\n ],\n\n # Arbitrary extra info associated with the user making the request to the API server\n # This is populated by the API server authentication layer\n \"extra\": {\n \"some-key\": [\n \"some-value1\",\n \"some-value2\"\n ]\n }\n },\n\n # object is the new object being admitted. 
It is null for DELETE operations\n \"object\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # oldObject is the existing object. It is null for CREATE and CONNECT operations\n \"oldObject\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # options contain the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions,\n # UpdateOptions, or DeleteOptions. It is null for CONNECT operations\n \"options\": {\n \"apiVersion\": \"meta.k8s.io/v1\",\n \"kind\": \"UpdateOptions\"\n },\n\n # dryRun indicates the API request is running in dry run mode and will not be persisted\n # Webhooks with side effects should avoid actuating those side effects when dryRun is true\n \"dryRun\": false\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13563"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"request\": {\n # Random uid uniquely identifying this admission call\n \"uid\": \"705ab4f5-6393-11e8-b7cc-42010a800002\",\n\n # Fully-qualified group/version/kind of the incoming object\n \"kind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified\n \"resource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n \"subResource\": \"scale\",\n\n # Fully-qualified group/version/kind of the incoming object in the original request to the API server\n # This only differs from `kind` if the 
webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestKind\": {\n \"group\": \"autoscaling\",\n \"version\": \"v1\",\n \"kind\": \"Scale\"\n },\n\n # Fully-qualified group/version/kind of the resource being modified in the original request to the API server\n # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestResource\": {\n \"group\": \"apps\",\n \"version\": \"v1\",\n \"resource\": \"deployments\"\n },\n\n # Subresource, if the request is to a subresource\n # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the original\n # request to the API server was converted to a version the webhook registered for\n \"requestSubResource\": \"scale\",\n\n # Name of the resource being modified\n \"name\": \"my-deployment\",\n\n # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)\n \"namespace\": \"my-namespace\",\n\n # operation can be CREATE, UPDATE, DELETE, or CONNECT\n \"operation\": \"UPDATE\",\n\n \"userInfo\": {\n # Username of the authenticated user making the request to the API server\n \"username\": \"admin\",\n\n # UID of the authenticated user making the request to the API server\n \"uid\": \"014fbff9a07c\",\n\n # Group memberships of the authenticated user making the request to the API server\n \"groups\": [\n \"system:authenticated\",\n \"my-admin-group\"\n ],\n\n # Arbitrary extra info associated with the user making the request to the API server\n # This is populated by the API server authentication layer\n \"extra\": {\n \"some-key\": [\n \"some-value1\",\n \"some-value2\"\n ]\n }\n },\n\n # object is the new object being admitted. 
It is null for DELETE operations\n \"object\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # oldObject is the existing object. It is null for CREATE and CONNECT operations\n \"oldObject\": {\n \"apiVersion\": \"autoscaling/v1\",\n \"kind\": \"Scale\"\n },\n\n # options contain the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions,\n # UpdateOptions, or DeleteOptions. It is null for CONNECT operations\n \"options\": {\n \"apiVersion\": \"meta.k8s.io/v1\",\n \"kind\": \"UpdateOptions\"\n },\n\n # dryRun indicates the API request is running in dry run mode and will not be persisted\n # Webhooks with side effects should avoid actuating those side effects when dryRun is true\n \"dryRun\": false\n }\n}", "completion": "This json code Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated,", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13564"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {", "completion": " \"uid\": \"<value from request.uid>\",\n \"allowed\": true\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Example of a minimal response from a webhook to forbid a request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13565"} {"prompt": "Explain 
this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true\n }\n}", "completion": "This json code Dynamic Admission Control | Example of a minimal response from a webhook to forbid a request:", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Example of a minimal response from a webhook to forbid a request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13566"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {", "completion": " \"uid\": \"<value from request.uid>\",\n \"allowed\": false\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. The specified status object is returned to the user. See the API documentatio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13567"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false\n }\n}", "completion": "This json code Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. The specified status object is returned to the user. See the API documentatio", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | When rejecting a request, the webhook can customize the http code and message returned to the user using the status field. 
The specified status object is returned to the user. See the API documentatio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13568"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false,", "completion": " \"status\": {\n \"code\": 403,\n \"message\": \"You cannot do this because it is Tuesday and your name starts with A\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. The only currently supported", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13569"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": false,\n \"status\": {\n \"code\": 403,\n \"message\": \"You cannot do this because it is Tuesday and your name starts with A\"\n }\n }\n}", "completion": "This json code Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. The only currently supported", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. 
The only currently supported", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13570"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",", "completion": " \"allowed\": true,\n \"patchType\": \"JSONPatch\",\n \"patch\": \"W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=\"\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. Warnings can be sent with allowed or rejected a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13571"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,\n \"patchType\": \"JSONPatch\",\n \"patch\": \"W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=\"\n }\n}", "completion": "This json code Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. Warnings can be sent with allowed or rejected a", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP Warning headers with a warning code of 299. 
Warnings can be sent with allowed or rejected a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13572"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,", "completion": " \"warnings\": [\n \"duplicate envvar entries specified with name MY_ENV\",\n \"memory request less than 4MB specified for container mycontainer, which will not start successfully\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13573"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"admission.k8s.io/v1\",\n \"kind\": \"AdmissionReview\",\n \"response\": {\n \"uid\": \"<value from request.uid>\",\n \"allowed\": true,\n \"warnings\": [\n \"duplicate envvar entries specified with name MY_ENV\",\n \"memory request less than 4MB specified for container mycontainer, which will not start successfully\"\n ]\n }\n}", "completion": "This json code Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "language": "json", "type": "api-response_explanation", "context": "Dynamic Admission Control | To register admission webhooks, create MutatingWebhookConfiguration or ValidatingWebhookConfiguration API objects. 
The name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13574"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\n...\nwebhooks:\n- name: my-webhook.example.com\n rules:", "completion": " - operations: [\"CREATE\", \"UPDATE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\", \"v1beta1\"]\n resources: [\"deployments\", \"replicasets\"]\n scope: \"Namespaced\"\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13575"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\n...\nwebhooks:\n- name: my-webhook.example.com\n rules:\n - operations: [\"CREATE\", \"UPDATE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\", \"v1beta1\"]\n resources: [\"deployments\", \"replicasets\"]\n scope: \"Namespaced\"\n ...", "completion": "This yaml code Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match create requests for all resources (but not subresources) in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13576"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:", "completion": " - 
operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match update requests for all status subresources in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13577"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | Match update requests for all status subresources in all API groups and versions:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match update requests for all status subresources in all API groups and versions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13578"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:", "completion": " - operations: [\"UPDATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*/status\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. 
If specified, the objectSelector is evaluated again", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13579"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n rules:\n - operations: [\"UPDATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*/status\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. If specified, the objectSelector is evaluated again", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an objectSelector. 
If specified, the objectSelector is evaluated again", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13580"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n objectSelector:\n matchLabels:", "completion": " foo: bar\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13581"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n objectSelector:\n matchLabels:\n foo: bar\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"*\"", "completion": "This yaml code Dynamic Admission Control | See labels concept for more examples of label selectors.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13582"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: runlevel", "completion": " operator: NotIn\n values: [\"0\",\"1\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "language": 
"yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13583"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: runlevel\n operator: NotIn\n values: [\"0\",\"1\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | This example shows a validating webhook that matches a CREATE of any namespaced resource inside a namespace that is associated with the \"environment\" of \"prod\" or \"staging\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13584"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: environment", "completion": " operator: In\n values: [\"prod\",\"staging\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | See labels concept for 
more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13585"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n namespaceSelector:\n matchExpressions:\n - key: environment\n operator: In\n values: [\"prod\",\"staging\"]\n rules:\n - operations: [\"CREATE\"]\n apiGroups: [\"*\"]\n apiVersions: [\"*\"]\n resources: [\"*\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | See labels concept for more examples of label selectors.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | See labels concept for more examples of label selectors.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13586"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n matchPolicy: Equivalent", "completion": " rules:\n - operations: [\"CREATE\",\"UPDATE\",\"DELETE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n resources: [\"deployments\"]\n scope: \"Namespaced\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The matchPolicy for an admission webhooks defaults to Equivalent.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13587"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: [\"CREATE\",\"UPDATE\",\"DELETE\"]\n apiGroups: [\"apps\"]\n apiVersions: [\"v1\"]\n resources: 
[\"deployments\"]\n scope: \"Namespaced\"", "completion": "This yaml code Dynamic Admission Control | The matchPolicy for an admission webhooks defaults to Equivalent.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The matchPolicy for an admission webhooks defaults to Equivalent.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13588"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['*']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Ignore' # Fail-open (optional)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.", "completion": " expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n \n # This example illustrates the use of the 'authorizer'. The authorization check is more expensive\n # than a simple expression, so in this example it is scoped to only RBAC requests by using a second\n # webhook. 
Both webhooks can be served by the same endpoint.\n - name: rbac.my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['rbac.authorization.k8s.io']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Fail' # Fail-closed (the default)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'breakglass'\n # Skip requests made by users authorized to 'breakglass' on this webhook.\n # The 'breakglass' API verb does not need to exist outside this check.\n expression: '!authorizer.group(\"admissionregistration.k8s.io\").resource(\"validatingwebhookconfigurations\").name(\"my-webhook.example.com\").check(\"breakglass\").allowed()'", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Match conditions have access to the following CEL variables:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13589"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['*']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Ignore' # Fail-open (optional)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'exclude-leases' # Each match condition must have a unique name\n expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n - name: 'exclude-kubelet-requests'\n expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests 
made by non-node users.\n - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.\n expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n \n # This example illustrates the use of the 'authorizer'. The authorization check is more expensive\n # than a simple expression, so in this example it is scoped to only RBAC requests by using a second\n # webhook. Both webhooks can be served by the same endpoint.\n - name: rbac.my-webhook.example.com\n matchPolicy: Equivalent\n rules:\n - operations: ['CREATE','UPDATE']\n apiGroups: ['rbac.authorization.k8s.io']\n apiVersions: ['*']\n resources: ['*']\n failurePolicy: 'Fail' # Fail-closed (the default)\n sideEffects: None\n clientConfig:\n service:\n namespace: my-namespace\n name: my-webhook\n caBundle: '<omitted>'\n # You can have up to 64 matchConditions per webhook\n matchConditions:\n - name: 'breakglass'\n # Skip requests made by users authorized to 'breakglass' on this webhook.\n # The 'breakglass' API verb does not need to exist outside this check.\n expression: '!authorizer.group(\"admissionregistration.k8s.io\").resource(\"validatingwebhookconfigurations\").name(\"my-webhook.example.com\").check(\"breakglass\").allowed()'", "completion": "This yaml code Dynamic Admission Control | Match conditions have access to the following CEL variables:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Match conditions have access to the following CEL variables:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13590"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:", "completion": "- name: my-webhook.example.com\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission 
Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13591"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:\n url: \"https://my-webhook.example.com:9443/my-webhook-path\"", "completion": "This yaml code Dynamic Admission Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The service stanza inside clientConfig is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use service instead of url. The service namespace a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13592"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:", "completion": " caBundle: <CA_BUNDLE>\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. 
Some webhooks, however, make out-of-band changes as part of processing admission requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13593"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n clientConfig:\n caBundle: <CA_BUNDLE>\n service:\n namespace: my-service-namespace\n name: my-service-name\n path: /my-path\n port: 1234", "completion": "This yaml code Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. Some webhooks, however, make out-of-band changes as part of processing admission requests.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Webhooks typically operate only on the content of the AdmissionReview sent to them. Some webhooks, however, make out-of-band changes as part of processing admission requests.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13594"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n - name: my-webhook.example.com\n sideEffects: NoneOnDryRun", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. 
timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13595"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n sideEffects: NoneOnDryRun", "completion": "This yaml code Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Because webhooks add to API request latency, they should evaluate as quickly as possible. timeoutSeconds allows configuring how long the API server should wait for a webhook to respond before treating", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13596"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration", "completion": "webhooks:\n - name: my-webhook.example.com\n timeoutSeconds: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13597"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n - name: my-webhook.example.com\n timeoutSeconds: 2", "completion": "This yaml code Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The timeout for an admission webhook defaults to 10 seconds.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13598"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n reinvocationPolicy: IfNeeded", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission webhooks, since any change the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13599"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n reinvocationPolicy: IfNeeded", "completion": "This yaml code Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission webhooks, since any change the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | Mutating webhooks must be idempotent, able to successfully process an object they have already admitted and potentially modified. 
This is true for all mutating admission webhooks, since any change the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13600"} {"prompt": "Complete this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration", "completion": "webhooks:\n- name: my-webhook.example.com\n failurePolicy: Fail", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Admission Control | The default failurePolicy for an admission webhooks is Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13601"} {"prompt": "Explain this yaml code:\n\napiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n failurePolicy: Fail", "completion": "This yaml code Dynamic Admission Control | The default failurePolicy for an admission webhooks is Fail.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Admission Control | The default failurePolicy for an admission webhooks is Fail.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13602"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_1_index_2\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook.example.com\\\",\\\"mutated\\\": false}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. 
The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13603"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_1_index_2\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook.example.com\\\",\\\"mutated\\\": false}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13604"} {"prompt": "Complete this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",", "completion": " \"webhook\": \"my-webhook.example.com\",\n \"mutated\": false\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. 
The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13605"} {"prompt": "Explain this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook.example.com\",\n \"mutated\": false\n}", "completion": "This text code Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in the mutating webhook chain, and mutated the request object during the invocat", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13606"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_0_index_0\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"mutated\\\": true}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13607"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"mutation.webhook.admission.k8s.io/round_0_index_0\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"mutated\\\": true}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13608"} {"prompt": "Complete this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",", "completion": " \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"mutated\": true\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13609"} {"prompt": "Explain this text code:\n\n# the 
annotation value deserialized\n{\n \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"mutated\": true\n}", "completion": "This text code Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | At Request audit level or higher, an annotation with key patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx} gets logged with JSON payload indicating a webhook gets invoked for given re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13610"} {"prompt": "Complete this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"patch.webhook.admission.k8s.io/round_1_index_3\": \"{\\\"configuration\\\":\\\"my-other-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"patch\\\":[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/data/mutation-stage\\\",\\\"value\\\":\\\"yes\\\"}],\\\"patchType\\\":\\\"JSONPatch\\\"}\"", "completion": " # other annotations\n ...\n }\n # other fields\n ...\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13611"} {"prompt": "Explain this text code:\n\n# the audit event recorded\n{\n \"kind\": \"Event\",\n \"apiVersion\": \"audit.k8s.io/v1\",\n \"annotations\": {\n \"patch.webhook.admission.k8s.io/round_1_index_3\": \"{\\\"configuration\\\":\\\"my-other-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"patch\\\":[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/data/mutation-stage\\\",\\\"value\\\":\\\"yes\\\"}],\\\"patchType\\\":\\\"JSONPatch\\\"}\"\n # other annotations\n ...\n }\n # other fields\n ...\n}", "completion": "This text code Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13612"} {"prompt": "Complete this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-other-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"patchType\": \"JSONPatch\",\n \"patch\": [", "completion": " {\n \"op\": \"add\",\n \"path\": \"/data/mutation-stage\",\n \"value\": \"yes\"\n }\n ]\n}", "language": "text", "type": "code-example", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_completion_13613"} {"prompt": "Explain this text code:\n\n# the annotation value deserialized\n{\n \"configuration\": \"my-other-mutating-webhook-configuration.example.com\",\n \"webhook\": \"my-webhook-always-mutate.example.com\",\n \"patchType\": \"JSONPatch\",\n \"patch\": [\n {\n \"op\": \"add\",\n \"path\": \"/data/mutation-stage\",\n \"value\": \"yes\"\n }\n ]\n}", "completion": "This text code Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhoo", "language": "text", "type": "code-example_explanation", "context": "Dynamic Admission Control | The API server exposes Prometheus metrics from the /metrics endpoint, which can be used for monitoring and diagnosing API server status. 
The following metrics record status related to admission webhoo", "source": "https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/", "id": "k8s_00526_code_explain_13614"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\n ...\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true", "completion": " controller: true\n kind: ReplicaSet\n name: nginx-deployment-6b474476c4\n uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13615"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\n ...\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-6b474476c4\n uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7\n ...", "completion": "This yaml code Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. 
You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13616"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment nginx-deployment --cascade=foreground", "completion": "This bash code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13617"} {"prompt": "Explain this text code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This text code Use Cascading Deletion in a Cluster | The output contains a foregroundDeletion finalizer like this:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output contains a foregroundDeletion finalizer like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13618"} {"prompt": "Complete this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"metadata\": {\n \"name\": \"nginx-deployment\",\n \"namespace\": \"default\",\n \"uid\": \"d1ce1b02-cae8-4288-8a53-30e84d8fa505\",", "completion": " \"resourceVersion\": \"1363097\",\n \"creationTimestamp\": \"2021-07-08T20:24:37Z\",\n \"deletionTimestamp\": \"2021-07-08T20:27:39Z\",\n \"finalizers\": [\n \"foregroundDeletion\"\n ]\n ...", "language": "text", "type": "code-example", "context": "Use Cascading Deletion in a Cluster | To check the version, enter kubectl version.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13619"} {"prompt": "Explain this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"metadata\": {\n \"name\": \"nginx-deployment\",\n \"namespace\": \"default\",\n \"uid\": \"d1ce1b02-cae8-4288-8a53-30e84d8fa505\",\n \"resourceVersion\": \"1363097\",\n \"creationTimestamp\": \"2021-07-08T20:24:37Z\",\n \"deletionTimestamp\": \"2021-07-08T20:27:39Z\",\n \"finalizers\": [\n \"foregroundDeletion\"\n ]\n ...", "completion": "This text code Use Cascading Deletion in a Cluster | To check the version, enter kubectl version.", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | To check the version, enter kubectl version.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13620"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment nginx-deployment --cascade=background", "completion": "This bash code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13621"} {"prompt": "Explain this text code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Background\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This text code Use Cascading Deletion in a Cluster | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output is similar to this:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13622"} {"prompt": "Complete this text code:\n\n\"kind\": \"Status\",\n\"apiVersion\": \"v1\",\n...\n\"status\": \"Success\",\n\"details\": {", "completion": " \"name\": \"nginx-deployment\",\n \"group\": \"apps\",\n \"kind\": \"deployments\",\n \"uid\": \"cc9eefb9-2d49-4445-b1c1-d261c9396456\"\n}", "language": "text", "type": "code-example", "context": "Use Cascading Deletion in a Cluster | By default, when you tell Kubernetes to delete an object, the controller also deletes dependent objects. You can make Kubernetes orphan these dependents using kubectl or the Kubernetes API, depending", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13623"} {"prompt": "Explain this text code:\n\n\"kind\": \"Status\",\n\"apiVersion\": \"v1\",\n...\n\"status\": \"Success\",\n\"details\": {\n \"name\": \"nginx-deployment\",\n \"group\": \"apps\",\n \"kind\": \"deployments\",\n \"uid\": \"cc9eefb9-2d49-4445-b1c1-d261c9396456\"\n}", "completion": "This text code Use Cascading Deletion in a Cluster | By default, when you tell Kubernetes to delete an object, the controller also deletes dependent objects. You can make Kubernetes orphan these dependents using kubectl or the Kubernetes API, depending", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | By default, when you tell Kubernetes to delete an object, the controller also deletes dependent objects. 
You can make Kubernetes orphan these dependents using kubectl or the Kubernetes API, depending", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13624"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment nginx-deployment --cascade=orphan", "completion": "This bash code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13625"} {"prompt": "Explain this text code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This text code Use Cascading Deletion in a Cluster | The output contains orphan in the finalizers field, similar to this:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output contains orphan in the finalizers field, similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13626"} {"prompt": "Complete this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"namespace\": \"default\",\n\"uid\": \"6f577034-42a0-479d-be21-78018c466f1f\",\n\"creationTimestamp\": \"2021-07-09T16:46:37Z\",", "completion": "\"deletionTimestamp\": \"2021-07-09T16:47:08Z\",\n\"deletionGracePeriodSeconds\": 0,\n\"finalizers\": [\n \"orphan\"\n],\n...", "language": "text", "type": "code-example", "context": "Use Cascading Deletion in a Cluster | You can check that the Pods managed by the Deployment are still running:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13627"} {"prompt": "Explain this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"namespace\": \"default\",\n\"uid\": \"6f577034-42a0-479d-be21-78018c466f1f\",\n\"creationTimestamp\": \"2021-07-09T16:46:37Z\",\n\"deletionTimestamp\": \"2021-07-09T16:47:08Z\",\n\"deletionGracePeriodSeconds\": 0,\n\"finalizers\": [\n \"orphan\"\n],\n...", "completion": "This text code Use Cascading Deletion in a Cluster | You can check that the Pods managed by the Deployment are still running:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | You can check that the Pods managed by the Deployment are still running:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13628"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\n ...\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true", "completion": " controller: true\n kind: ReplicaSet\n name: nginx-deployment-6b474476c4\n uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. 
You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13629"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\n ...\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-6b474476c4\n uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7\n ...", "completion": "This yaml code Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13630"} {"prompt": "Explain this shell code:\n\nkubectl delete deployment nginx-deployment --cascade=foreground", "completion": "This shell code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "shell", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13631"} {"prompt": "Explain this shell code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code Use Cascading 
Deletion in a Cluster | The output contains a foregroundDeletion finalizer like this:", "language": "shell", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output contains a foregroundDeletion finalizer like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13632"} {"prompt": "Complete this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"metadata\": {\n \"name\": \"nginx-deployment\",\n \"namespace\": \"default\",\n \"uid\": \"d1ce1b02-cae8-4288-8a53-30e84d8fa505\",", "completion": " \"resourceVersion\": \"1363097\",\n \"creationTimestamp\": \"2021-07-08T20:24:37Z\",\n \"deletionTimestamp\": \"2021-07-08T20:27:39Z\",\n \"finalizers\": [\n \"foregroundDeletion\"\n ]\n ...", "language": "text", "type": "code-example", "context": "Use Cascading Deletion in a Cluster | To check the version, enter kubectl version.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13633"} {"prompt": "Explain this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"metadata\": {\n \"name\": \"nginx-deployment\",\n \"namespace\": \"default\",\n \"uid\": \"d1ce1b02-cae8-4288-8a53-30e84d8fa505\",\n \"resourceVersion\": \"1363097\",\n \"creationTimestamp\": \"2021-07-08T20:24:37Z\",\n \"deletionTimestamp\": \"2021-07-08T20:27:39Z\",\n \"finalizers\": [\n \"foregroundDeletion\"\n ]\n ...", "completion": "This text code Use Cascading Deletion in a Cluster | To check the version, enter kubectl version.", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | To check the version, enter kubectl version.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13634"} {"prompt": "Explain this shell code:\n\nkubectl delete deployment nginx-deployment 
--cascade=background", "completion": "This shell code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "shell", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13635"} {"prompt": "Explain this shell code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Background\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code Use Cascading Deletion in a Cluster | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13636"} {"prompt": "Complete this text code:\n\n\"kind\": \"Status\",\n\"apiVersion\": \"v1\",\n...\n\"status\": \"Success\",\n\"details\": {", "completion": " \"name\": \"nginx-deployment\",\n \"group\": \"apps\",\n \"kind\": \"deployments\",\n \"uid\": \"cc9eefb9-2d49-4445-b1c1-d261c9396456\"\n}", "language": "text", "type": "code-example", "context": "Use Cascading Deletion in a Cluster | By default, when you tell Kubernetes to delete an object, the controller also deletes dependent objects. 
You can make Kubernetes orphan these dependents using kubectl or the Kubernetes API, depending", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13637"} {"prompt": "Explain this text code:\n\n\"kind\": \"Status\",\n\"apiVersion\": \"v1\",\n...\n\"status\": \"Success\",\n\"details\": {\n \"name\": \"nginx-deployment\",\n \"group\": \"apps\",\n \"kind\": \"deployments\",\n \"uid\": \"cc9eefb9-2d49-4445-b1c1-d261c9396456\"\n}", "completion": "This text code Use Cascading Deletion in a Cluster | By default, when you tell Kubernetes to delete an object, the controller also deletes dependent objects. You can make Kubernetes orphan these dependents using kubectl or the Kubernetes API, depending", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | By default, when you tell Kubernetes to delete an object, the controller also deletes dependent objects. You can make Kubernetes orphan these dependents using kubectl or the Kubernetes API, depending", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13638"} {"prompt": "Explain this shell code:\n\nkubectl delete deployment nginx-deployment --cascade=orphan", "completion": "This shell code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "shell", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13639"} {"prompt": "Explain this shell code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code Use Cascading Deletion in a 
Cluster | The output contains orphan in the finalizers field, similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output contains orphan in the finalizers field, similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13640"} {"prompt": "Complete this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"namespace\": \"default\",\n\"uid\": \"6f577034-42a0-479d-be21-78018c466f1f\",\n\"creationTimestamp\": \"2021-07-09T16:46:37Z\",", "completion": "\"deletionTimestamp\": \"2021-07-09T16:47:08Z\",\n\"deletionGracePeriodSeconds\": 0,\n\"finalizers\": [\n \"orphan\"\n],\n...", "language": "text", "type": "code-example", "context": "Use Cascading Deletion in a Cluster | You can check that the Pods managed by the Deployment are still running:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13641"} {"prompt": "Explain this text code:\n\n\"kind\": \"Deployment\",\n\"apiVersion\": \"apps/v1\",\n\"namespace\": \"default\",\n\"uid\": \"6f577034-42a0-479d-be21-78018c466f1f\",\n\"creationTimestamp\": \"2021-07-09T16:46:37Z\",\n\"deletionTimestamp\": \"2021-07-09T16:47:08Z\",\n\"deletionGracePeriodSeconds\": 0,\n\"finalizers\": [\n \"orphan\"\n],\n...", "completion": "This text code Use Cascading Deletion in a Cluster | You can check that the Pods managed by the Deployment are still running:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | You can check that the Pods managed by the Deployment are still running:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13642"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\n ...\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true", 
"completion": " controller: true\n kind: ReplicaSet\n name: nginx-deployment-6b474476c4\n uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_completion_13643"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\n ...\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: nginx-deployment-6b474476c4\n uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7\n ...", "completion": "This yaml code Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Use Cascading Deletion in a Cluster | By default, Kubernetes uses background cascading deletion to delete dependents of an object. 
You can switch to foreground cascading deletion using either kubectl or the Kubernetes API, depending on th", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13644"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment nginx-deployment --cascade=foreground", "completion": "This bash code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13645"} {"prompt": "Explain this text code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This text code Use Cascading Deletion in a Cluster | The output contains a foregroundDeletion finalizer like this:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output contains a foregroundDeletion finalizer like this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13646"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment nginx-deployment --cascade=background", "completion": "This bash code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13647"} {"prompt": "Explain this text code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n 
-d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Background\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This text code Use Cascading Deletion in a Cluster | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13648"} {"prompt": "Explain this bash code:\n\nkubectl delete deployment nginx-deployment --cascade=orphan", "completion": "This bash code Use Cascading Deletion in a Cluster | Using the Kubernetes API", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Cascading Deletion in a Cluster | Using the Kubernetes API", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13649"} {"prompt": "Explain this text code:\n\ncurl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This text code Use Cascading Deletion in a Cluster | The output contains orphan in the finalizers field, similar to this:", "language": "text", "type": "code-example_explanation", "context": "Use Cascading Deletion in a Cluster | The output contains orphan in the finalizers field, similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/", "id": "k8s_00528_code_explain_13650"} {"prompt": "Explain this bash code:\n\nkubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{\"rules\": [ {\"apiGroups\":[ \"coordination.k8s.io\"], \"resources\": [\"leases\"], \"resourceNames\": [\"cloud-provider-extraction-migration\"], \"verbs\": [\"create\", \"list\", \"get\", 
\"update\"] } ]}' --type=merge", "completion": "This bash code Migrate Replicated Control Plane To Use Cloud Controller Manager | Do the same to the system::leader-locking-cloud-controller-manager role.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Do the same to the system::leader-locking-cloud-controller-manager role.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13651"} {"prompt": "Explain this bash code:\n\nkubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{\"rules\": [ {\"apiGroups\":[ \"coordination.k8s.io\"], \"resources\": [\"leases\"], \"resourceNames\": [\"cloud-provider-extraction-migration\"], \"verbs\": [\"create\", \"list\", \"get\", \"update\"] } ]}' --type=merge", "completion": "This bash code Migrate Replicated Control Plane To Use Cloud Controller Manager | Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. At this moment, with in-tree cloud provider, kube-controller-manager runs route, serv", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. 
At this moment, with in-tree cloud provider, kube-controller-manager runs route, serv", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13652"} {"prompt": "Complete this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: kube-controller-manager\n - name: service\n component: kube-controller-manager\n - name: cloud-node-lifecycle\n component: kube-controller-manager", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13653"} {"prompt": "Explain this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: kube-controller-manager\n - name: service\n component: kube-controller-manager\n - name: cloud-node-lifecycle\n component: kube-controller-manager", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either 
controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13654"} {"prompt": "Complete this yaml code:\n\n# wildcard version\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. Als", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13655"} {"prompt": "Explain this yaml code:\n\n# wildcard version\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. 
Als", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. Als", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13656"} {"prompt": "Complete this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: cloud-controller-manager\n - name: service\n component: cloud-controller-manager\n - name: cloud-node-lifecycle\n component: cloud-controller-manager", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. The manifest of cloud-controller-manager should be updated to mount the configuration f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13657"} {"prompt": "Explain this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: cloud-controller-manager\n - name: service\n component: cloud-controller-manager\n - name: cloud-node-lifecycle\n component: cloud-controller-manager", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. 
The manifest of cloud-controller-manager should be updated to mount the configuration f", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. The manifest of cloud-controller-manager should be updated to mount the configuration f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13658"} {"prompt": "Complete this yaml code:\n\n# wildcard version, with nodeipam\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route", "completion": " component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *\n - name: nodeipam\n component: *", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13659"} {"prompt": "Explain this yaml code:\n\n# wildcard version, with nodeipam\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *\n - name: nodeipam\n component: *", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager |", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13660"} {"prompt": "Explain this shell code:\n\nkubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{\"rules\": [ {\"apiGroups\":[ \"coordination.k8s.io\"], \"resources\": [\"leases\"], \"resourceNames\": [\"cloud-provider-extraction-migration\"], \"verbs\": [\"create\", \"list\", \"get\", \"update\"] } ]}' --type=merge", "completion": "This shell code Migrate Replicated Control Plane To Use Cloud Controller Manager | Do the same to the system::leader-locking-cloud-controller-manager role.", "language": "shell", "type": "code-example_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Do the same to the system::leader-locking-cloud-controller-manager role.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13661"} {"prompt": "Explain this shell code:\n\nkubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{\"rules\": [ {\"apiGroups\":[ \"coordination.k8s.io\"], \"resources\": [\"leases\"], \"resourceNames\": [\"cloud-provider-extraction-migration\"], \"verbs\": [\"create\", \"list\", \"get\", \"update\"] } ]}' --type=merge", "completion": "This shell code Migrate Replicated Control Plane To Use Cloud Controller Manager | Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. At this moment, with in-tree cloud provider, kube-controller-manager runs route, serv", "language": "shell", "type": "code-example_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. 
At this moment, with in-tree cloud provider, kube-controller-manager runs route, serv", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13662"} {"prompt": "Complete this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: kube-controller-manager\n - name: service\n component: kube-controller-manager\n - name: cloud-node-lifecycle\n component: kube-controller-manager", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13663"} {"prompt": "Explain this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: kube-controller-manager\n - name: service\n component: kube-controller-manager\n - name: cloud-node-lifecycle\n component: kube-controller-manager", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either 
controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13664"} {"prompt": "Complete this yaml code:\n\n# wildcard version\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. Als", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13665"} {"prompt": "Explain this yaml code:\n\n# wildcard version\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. 
Als", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. Als", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13666"} {"prompt": "Explain this text code:\n\n--leader-migration-config=/etc/leadermigration.conf", "completion": "This text code Migrate Replicated Control Plane To Use Cloud Controller Manager | Restart kube-controller-manager on each node. At this moment, kube-controller-manager has leader migration enabled and is ready for the migration.", "language": "text", "type": "code-example_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Restart kube-controller-manager on each node. At this moment, kube-controller-manager has leader migration enabled and is ready for the migration.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13667"} {"prompt": "Complete this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: cloud-controller-manager\n - name: service\n component: cloud-controller-manager\n - name: cloud-node-lifecycle\n component: cloud-controller-manager", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. 
The manifest of cloud-controller-manager should be updated to mount the configuration f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13668"} {"prompt": "Explain this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: cloud-controller-manager\n - name: service\n component: cloud-controller-manager\n - name: cloud-node-lifecycle\n component: cloud-controller-manager", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. The manifest of cloud-controller-manager should be updated to mount the configuration f", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. The manifest of cloud-controller-manager should be updated to mount the configuration f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13669"} {"prompt": "Explain this text code:\n\n--leader-migration-config=/etc/leadermigration.conf", "completion": "This text code Migrate Replicated Control Plane To Use Cloud Controller Manager | Create a new control plane node of version N + 1 with the updated cloud-controller-manager manifest, and with the --cloud-provider flag set to external for kube-controller-manager. 
kube-controller-man", "language": "text", "type": "code-example_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Create a new control plane node of version N + 1 with the updated cloud-controller-manager manifest, and with the --cloud-provider flag set to external for kube-controller-manager. kube-controller-man", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13670"} {"prompt": "Complete this yaml code:\n\n# wildcard version, with nodeipam\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route", "completion": " component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *\n - name: nodeipam\n- component: *", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13671"} {"prompt": "Explain this yaml code:\n\n# wildcard version, with nodeipam\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *\n - name: nodeipam\n- component: *", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13672"} {"prompt": "Explain this bash 
code:\n\nkubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{\"rules\": [ {\"apiGroups\":[ \"coordination.k8s.io\"], \"resources\": [\"leases\"], \"resourceNames\": [\"cloud-provider-extraction-migration\"], \"verbs\": [\"create\", \"list\", \"get\", \"update\"] } ]}' --type=merge", "completion": "This bash code Migrate Replicated Control Plane To Use Cloud Controller Manager | Do the same to the system::leader-locking-cloud-controller-manager role.", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Do the same to the system::leader-locking-cloud-controller-manager role.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13673"} {"prompt": "Explain this bash code:\n\nkubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{\"rules\": [ {\"apiGroups\":[ \"coordination.k8s.io\"], \"resources\": [\"leases\"], \"resourceNames\": [\"cloud-provider-extraction-migration\"], \"verbs\": [\"create\", \"list\", \"get\", \"update\"] } ]}' --type=merge", "completion": "This bash code Migrate Replicated Control Plane To Use Cloud Controller Manager | Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. At this moment, with in-tree cloud provider, kube-controller-manager runs route, serv", "language": "bash", "type": "kubectl-command_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. 
At this moment, with in-tree cloud provider, kube-controller-manager runs route, serv", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13674"} {"prompt": "Complete this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: kube-controller-manager\n - name: service\n component: kube-controller-manager\n - name: cloud-node-lifecycle\n component: kube-controller-manager", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13675"} {"prompt": "Explain this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: kube-controller-manager\n - name: service\n component: kube-controller-manager\n - name: cloud-node-lifecycle\n component: kube-controller-manager", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | Alternatively, because the controllers can run under either 
controller managers, setting component to * for both sides makes the configuration file consistent between both parties of the migration.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13676"} {"prompt": "Complete this yaml code:\n\n# wildcard version\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. Als", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13677"} {"prompt": "Explain this yaml code:\n\n# wildcard version\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. 
Als", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | On each control plane node, save the content to /etc/leadermigration.conf, and update the manifest of kube-controller-manager so that the file is mounted inside the container at the same location. Als", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13678"} {"prompt": "Complete this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:", "completion": " - name: route\n component: cloud-controller-manager\n - name: service\n component: cloud-controller-manager\n - name: cloud-node-lifecycle\n component: cloud-controller-manager", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. The manifest of cloud-controller-manager should be updated to mount the configuration f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13679"} {"prompt": "Explain this yaml code:\n\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: cloud-controller-manager\n - name: service\n component: cloud-controller-manager\n - name: cloud-node-lifecycle\n component: cloud-controller-manager", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. 
The manifest of cloud-controller-manager should be updated to mount the configuration f", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager | When creating control plane nodes of version N + 1, the content should be deployed to /etc/leadermigration.conf. The manifest of cloud-controller-manager should be updated to mount the configuration f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13680"} {"prompt": "Complete this yaml code:\n\n# wildcard version, with nodeipam\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route", "completion": " component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *\n - name: nodeipam\n component: *", "language": "yaml", "type": "kubernetes-manifest", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_completion_13681"} {"prompt": "Explain this yaml code:\n\n# wildcard version, with nodeipam\nkind: LeaderMigrationConfiguration\napiVersion: controllermanager.config.k8s.io/v1\nleaderName: cloud-provider-extraction-migration\nresourceLock: leases\ncontrollerLeaders:\n - name: route\n component: *\n - name: service\n component: *\n - name: cloud-node-lifecycle\n component: *\n - name: nodeipam\n component: *", "completion": "This yaml code Migrate Replicated Control Plane To Use Cloud Controller Manager |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Migrate Replicated Control Plane To Use Cloud Controller Manager |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13682"} {"prompt": "Explain this bash 
"https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/", "id": "k8s_00529_code_explain_13682"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:", "completion": " limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13683"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:\n limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13684"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | Now if you create a Pod in the default-cpu-example namespace, and any container in that Pod does not specify its own values for CPU request and CPU limit, then the control plane applies default values", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Now if you create a Pod in the default-cpu-example namespace, and 
any container in that Pod does not specify its own values for CPU request and CPU limit, then the control plane applies default values", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13685"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo", "completion": "spec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13686"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo\nspec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13687"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | View the Pod's specification:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the Pod's specification:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13688"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-cpu-demo --output=yaml 
--namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | The output shows that the Pod's only container has a CPU request of 500m cpu (which you can read as “500 millicpu”), and a CPU limit of 1 cpu. These are the default values specified by the LimitRange.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | The output shows that the Pod's only container has a CPU request of 500m cpu (which you can read as “500 millicpu”), and a CPU limit of 1 cpu. These are the default values specified by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13689"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-cpu-demo-ctr", "completion": " resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13690"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-cpu-demo-ctr\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This text code Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU limit, but not a request:", "language": "text", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13691"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-2\nspec:", "completion": " containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13692"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-2\nspec:\n containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13693"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13694"} {"prompt": "Explain this bash code:\n\nkubectl get pod 
default-cpu-demo-2 --output=yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to match its CPU limit. Notice that the container was not assigned the default CPU request value of 0.5 cpu:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to match its CPU limit. Notice that the container was not assigned the default CPU request value of 0.5 cpu:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13695"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's an example manifest for a Pod that has one container. The container specifies a CPU request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13696"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"1\"", "completion": "This text code Configure Default CPU Requests and Limits for a Namespace | Here's an example manifest for a Pod that has one container. The container specifies a CPU request, but not a limit:", "language": "text", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's an example manifest for a Pod that has one container. 
The container specifies a CPU request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13697"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:", "completion": " containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13698"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:\n containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13699"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13700"} {"prompt": "Explain this bash 
code:\n\nkubectl get pod default-cpu-demo-3 --output=yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to the value you specified at the time you created the Pod (in other words: it matches the manifest). However, the same container's CPU limit is set to 1 cpu, which is the default CPU limit for that namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to the value you specified at the time you created the Pod (in other words: it matches the manifest). However, the same container's CPU limit is set to 1 cpu, which is the default CPU limit for that namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13701"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: 750m", "language": "text", "type": "code-example", "context": "Configure Default CPU Requests and Limits for a Namespace | If your namespace has a CPU resource quota configured, it is helpful to have a default value in place for CPU limit. Here are two of the restrictions that a CPU resource quota imposes on a namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13702"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 750m", "completion": "This text code Configure Default CPU Requests and Limits for a Namespace | If your namespace has a CPU resource quota configured, it is helpful to have a default value in place for CPU limit.
Here are two of the restrictions that a CPU resource quota imposes on a namespace:", "language": "text", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | If your namespace has a CPU resource quota configured, it is helpful to have a default value in place for CPU limit. Here are two of the restrictions that a CPU resource quota imposes on a namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13703"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:", "completion": " limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13704"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:\n limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13705"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example", "completion": "This shell code Configure Default CPU Requests 
and Limits for a Namespace | Now if you create a Pod in the default-cpu-example namespace, and any container in that Pod does not specify its own values for CPU request and CPU limit, then the control plane applies default values", "language": "shell", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Now if you create a Pod in the default-cpu-example namespace, and any container in that Pod does not specify its own values for CPU request and CPU limit, then the control plane applies default values", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13706"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo", "completion": "spec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13707"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo\nspec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13708"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod.yaml --namespace=default-cpu-example", "completion": "This shell code Configure Default CPU Requests and Limits for a Namespace | View the 
Pod's specification:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the Pod's specification:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13709"} {"prompt": "Explain this shell code:\n\nkubectl get pod default-cpu-demo --output=yaml --namespace=default-cpu-example", "completion": "This shell code Configure Default CPU Requests and Limits for a Namespace | The output shows that the Pod's only container has a CPU request of 500m cpu (which you can read as “500 millicpu”), and a CPU limit of 1 cpu. These are the default values specified by the LimitRange.", "language": "shell", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | The output shows that the Pod's only container has a CPU request of 500m cpu (which you can read as “500 millicpu”), and a CPU limit of 1 cpu. These are the default values specified by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13710"} {"prompt": "Complete this shell code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-cpu-demo-ctr", "completion": " resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "language": "shell", "type": "code-example", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13711"} {"prompt": "Explain this shell code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-cpu-demo-ctr\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This shell code Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU limit, but not a request:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13712"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-2\nspec:", "completion": " containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13713"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-2\nspec:\n containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13714"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example", "completion": "This shell code Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13715"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-cpu-demo-2 --output=yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to match its CPU limit. Notice that the container was not assigned the default CPU request value of 0.5 cpu:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to match its CPU limit. Notice that the container was not assigned the default CPU request value of 0.5 cpu:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13716"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's an example manifest for a Pod that has one container. 
The container specifies a CPU request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13717"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"1\"", "completion": "This text code Configure Default CPU Requests and Limits for a Namespace | Here's an example manifest for a Pod that has one container. The container specifies a CPU request, but not a limit:", "language": "text", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's an example manifest for a Pod that has one container. The container specifies a CPU request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13718"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:", "completion": " containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13719"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:\n containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": 
"k8s_00532_code_explain_13720"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example", "completion": "This shell code Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13721"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-cpu-demo-3 --output=yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to the value you specified at the time you created the Pod (in other words: it matches the manifest). However, the same container's CPU limit is set to 1 cpu, which is the default CPU limit for that namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | The output shows that the container's CPU request is set to the value you specified at the time you created the Pod (in other words: it matches the manifest). However, the same container's CPU limit is set to 1 cpu, which is the default CPU limit for that namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13722"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: 750m", "language": "text", "type": "code-example", "context": "Configure Default CPU Requests and Limits for a Namespace | If your namespace has a CPU resource quota configured, it is helpful to have a default value in place for CPU limit.
Here are two of the restrictions that a CPU resource quota imposes on a namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13723"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 750m", "completion": "This text code Configure Default CPU Requests and Limits for a Namespace | If your namespace has a CPU resource quota configured, it is helpful to have a default value in place for CPU limit. Here are two of the restrictions that a CPU resource quota imposes on a namespace:", "language": "text", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | If your namespace has a CPU resource quota configured, it is helpful to have a default value in place for CPU limit. Here are two of the restrictions that a CPU resource quota imposes on a namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13724"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-defaults.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:", "completion": " limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13725"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-defaults.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:\n limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create 
the LimitRange in the default-cpu-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13726"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:", "completion": " limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13727"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-limit-range\nspec:\n limits:\n - default:\n cpu: 1\n defaultRequest:\n cpu: 0.5\n type: Container", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the LimitRange in the default-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13728"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | Now if you create a Pod in the default-cpu-example namespace, and any container in that Pod does not specify its own values for CPU 
request and CPU limit, then the control plane applies default values", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Now if you create a Pod in the default-cpu-example namespace, and any container in that Pod does not specify its own values for CPU request and CPU limit, then the control plane applies default values", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13729"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-defaults-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: default-cpu-demo\nspec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13730"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-defaults-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo\nspec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13731"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo", "completion": "spec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13732"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo\nspec:\n containers:\n - name: default-cpu-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13733"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | View the Pod's specification:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the Pod's specification:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13734"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-cpu-demo --output=yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | The output shows that the Pod's only container has a CPU request of 500m cpu (which you can read as “500 millicpu”), and a CPU limit of 1 cpu. These are the default values specified by the LimitRange.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | The output shows that the Pod's only container has a CPU request of 500m cpu (which you can read as “500 millicpu”), and a CPU limit of 1 cpu. 
These are the default values specified by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13735"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-cpu-demo-ctr", "completion": " resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13736"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-cpu-demo-ctr\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This text code Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a CPU limit, but not a request:", "language": "text", "type": "code-example_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a CPU limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13737"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-defaults-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-2\nspec:", "completion": " containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13738"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-defaults-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-2\nspec:\n containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13739"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-2\nspec:", "completion": " containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13740"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: 
default-cpu-demo-2\nspec:\n containers:\n - name: default-cpu-demo-2-ctr\n image: nginx\n resources:\n limits:\n cpu: \"1\"", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13741"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13742"} {"prompt": "Complete this yaml code:\n\nadmin/resource/cpu-defaults-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:", "completion": " containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13743"} {"prompt": "Explain this yaml code:\n\nadmin/resource/cpu-defaults-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:\n containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "completion": "This yaml code Configure 
Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13744"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:", "completion": " containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_completion_13745"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-cpu-demo-3\nspec:\n containers:\n - name: default-cpu-demo-3-ctr\n image: nginx\n resources:\n requests:\n cpu: \"0.75\"", "completion": "This yaml code Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13746"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example", "completion": "This bash code Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default CPU Requests and Limits for a Namespace | View the specification of the Pod that you created:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/", "id": "k8s_00532_code_explain_13747"} {"prompt": "Explain this bash code:\n\nkubectl get pods -n kube-system | grep kube-scheduler", "completion": "This bash code Scheduler Performance Tuning | to verify that the kube-scheduler component is healthy.", "language": "bash", "type": "kubectl-command_explanation", "context": "Scheduler Performance Tuning | to verify that the kube-scheduler component is healthy.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/", "id": "k8s_00533_code_explain_13748"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider", "completion": "\n...\n\npercentageOfNodesToScore: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/", "id": "k8s_00533_code_completion_13749"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider\n\n...\n\npercentageOfNodesToScore: 50", "completion": "This yaml code Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. 
There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/", "id": "k8s_00533_code_explain_13750"} {"prompt": "Explain this text code:\n\nZone 1: Node 1, Node 2, Node 3, Node 4\nZone 2: Node 5, Node 6", "completion": "This text code Scheduler Performance Tuning | The Scheduler evaluates feasibility of the nodes in this order:", "language": "text", "type": "code-example_explanation", "context": "Scheduler Performance Tuning | The Scheduler evaluates feasibility of the nodes in this order:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/", "id": "k8s_00533_code_explain_13751"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: example-daemonset\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: example\n spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause\n initContainers:", "completion": " - name: log-machine-id\n image: busybox:1.37\n command: ['sh', '-c', 'cat /etc/machine-id > /var/log/machine-id.log']\n volumeMounts:\n - name: machine-id\n mountPath: /etc/machine-id\n readOnly: true\n - name: log-dir\n mountPath: /var/log\n volumes:\n - name: machine-id\n hostPath:\n path: /etc/machine-id\n type: File\n - name: log-dir\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "Building a Basic DaemonSet | Create a DaemonSet based on the (YAML) manifest:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_completion_13759"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: example-daemonset\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Building a Basic DaemonSet | Create a DaemonSet based on the (YAML) manifest:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13760"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/basic-daemonset.yaml", "completion": "This bash code Building a Basic DaemonSet | Once applied, you can verify that the DaemonSet is running a Pod on every node in the cluster:", "language": "bash", "type": "kubectl-command_explanation", "context": "Building a Basic DaemonSet | Once applied, you can verify that the DaemonSet is running a Pod on every node in the cluster:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13761"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nexample-daemonset-xxxxx 1/1 Running 0 5m x.x.x.x node-1\nexample-daemonset-yyyyy 1/1 Running 0 5m x.x.x.x node-2", "completion": "This text code Building a Basic DaemonSet | You can inspect the contents of the logged /etc/machine-id file by checking the log directory mounted from the host:", "language": "text", "type": "code-example_explanation", "context": "Building a Basic DaemonSet | You can inspect the contents of the logged /etc/machine-id file by checking the log directory mounted from the host:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13762"} {"prompt": "Explain this bash code:\n\nkubectl exec <pod-name> -- cat /var/log/machine-id.log", "completion": "This bash code Building a Basic DaemonSet | Where <pod-name> is the name of one of your Pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Building a Basic DaemonSet | Where <pod-name> is the name of one of your Pods.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", 
"id": "k8s_00534_code_explain_13763"} {"prompt": "Explain this bash code:\n\nkubectl delete --cascade=foreground --ignore-not-found --now daemonsets/example-daemonset", "completion": "This bash code Building a Basic DaemonSet | This simple DaemonSet example introduces key components like init containers and host path volumes, which can be expanded upon for more advanced use cases. For more details refer to DaemonSet.", "language": "bash", "type": "kubectl-command_explanation", "context": "Building a Basic DaemonSet | This simple DaemonSet example introduces key components like init containers and host path volumes, which can be expanded upon for more advanced use cases. For more details refer to DaemonSet.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13764"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/basic-daemonset.yaml", "completion": "This shell code Building a Basic DaemonSet | Once applied, you can verify that the DaemonSet is running a Pod on every node in the cluster:", "language": "shell", "type": "code-example_explanation", "context": "Building a Basic DaemonSet | Once applied, you can verify that the DaemonSet is running a Pod on every node in the cluster:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13767"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nexample-daemonset-xxxxx 1/1 Running 0 5m x.x.x.x node-1\nexample-daemonset-yyyyy 1/1 Running 0 5m x.x.x.x node-2", "completion": "This text code Building a Basic DaemonSet | You can inspect the contents of the logged /etc/machine-id file by checking the log directory mounted from the host:", "language": "text", "type": "code-example_explanation", "context": "Building a Basic DaemonSet | You can inspect the 
contents of the logged /etc/machine-id file by checking the log directory mounted from the host:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13768"} {"prompt": "Explain this shell code:\n\nkubectl exec <pod-name> -- cat /var/log/machine-id.log", "completion": "This shell code Building a Basic DaemonSet | Where <pod-name> is the name of one of your Pods.", "language": "shell", "type": "code-example_explanation", "context": "Building a Basic DaemonSet | Where <pod-name> is the name of one of your Pods.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13769"} {"prompt": "Explain this shell code:\n\nkubectl delete --cascade=foreground --ignore-not-found --now daemonsets/example-daemonset", "completion": "This shell code Building a Basic DaemonSet | This simple DaemonSet example introduces key components like init containers and host path volumes, which can be expanded upon for more advanced use cases. For more details refer to DaemonSet.", "language": "shell", "type": "code-example_explanation", "context": "Building a Basic DaemonSet | This simple DaemonSet example introduces key components like init containers and host path volumes, which can be expanded upon for more advanced use cases. 
For more details refer to DaemonSet.", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13770"} {"prompt": "Complete this yaml code:\n\napplication/basic-daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: example-daemonset\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: example\n spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause", "completion": " initContainers:\n - name: log-machine-id\n image: busybox:1.37\n command: ['sh', '-c', 'cat /etc/machine-id > /var/log/machine-id.log']\n volumeMounts:\n - name: machine-id\n mountPath: /etc/machine-id\n readOnly: true\n - name: log-dir\n mountPath: /var/log\n volumes:\n - name: machine-id\n hostPath:\n path: /etc/machine-id\n type: File\n - name: log-dir\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "Building a Basic DaemonSet | Create a DaemonSet based on the (YAML) manifest:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_completion_13771"} {"prompt": "Explain this yaml code:\n\napplication/basic-daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: example-daemonset\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: example\n template:\n metadata:\n labels:\n app.kubernetes.io/name: example\n spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause\n initContainers:\n - name: log-machine-id\n image: busybox:1.37\n command: ['sh', '-c', 'cat /etc/machine-id > /var/log/machine-id.log']\n volumeMounts:\n - name: machine-id\n mountPath: /etc/machine-id\n readOnly: true\n - name: log-dir\n mountPath: /var/log\n volumes:\n - name: machine-id\n hostPath:\n path: /etc/machine-id\n type: File\n - name: log-dir\n hostPath:\n path: /var/log", "completion": "This yaml code Building a Basic DaemonSet | Create a DaemonSet 
based on the (YAML) manifest:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Building a Basic DaemonSet | Create a DaemonSet based on the (YAML) manifest:", "source": "https://kubernetes.io/docs/tasks/manage-daemon/create-daemon-set/", "id": "k8s_00534_code_explain_13772"} {"prompt": "Complete this text code:\n\n# reopen an issue\n/reopen\n\n# transfer issues that don't fit in k/website to another repository\n/transfer[-issue]\n\n# change the state of rotten issues\n/remove-lifecycle rotten\n\n# change the state of stale issues\n/remove-lifecycle stale\n\n# assign sig to an issue\n/sig <sig_name>\n\n# add specific area", "completion": "/area <area_name>\n\n# for beginner friendly issues\n/good-first-issue\n\n# issues that needs help\n/help wanted\n\n# tagging issue as support specific\n/kind support\n\n# to accept triaging for an issue\n/triage accepted\n\n# closing an issue we won't be working on and haven't fixed yet\n/close not-planned", "language": "text", "type": "code-example", "context": "Issue Wranglers | To find more Prow commands, refer to the Command Help documentation.", "source": "https://kubernetes.io/docs/contribute/participate/issue-wrangler/", "id": "k8s_00535_code_completion_13778"} {"prompt": "Explain this text code:\n\n# reopen an issue\n/reopen\n\n# transfer issues that don't fit in k/website to another repository\n/transfer[-issue]\n\n# change the state of rotten issues\n/remove-lifecycle rotten\n\n# change the state of stale issues\n/remove-lifecycle stale\n\n# assign sig to an issue\n/sig <sig_name>\n\n# add specific area\n/area <area_name>\n\n# for beginner friendly issues\n/good-first-issue\n\n# issues that needs help\n/help wanted\n\n# tagging issue as support specific\n/kind support\n\n# to accept triaging for an issue\n/triage accepted\n\n# closing an issue we 
won't be working on and haven't fixed yet\n/close not-planned", "completion": "This text code Issue Wranglers | To find more Prow commands, refer to the Command Help documentation.", "language": "text", "type": "code-example_explanation", "context": "Issue Wranglers | To find more Prow commands, refer to the Command Help documentation.", "source": "https://kubernetes.io/docs/contribute/participate/issue-wrangler/", "id": "k8s_00535_code_explain_13779"} {"prompt": "Complete this bash code:\n\n# reopen an issue\n/reopen\n\n# transfer issues that don't fit in k/website to another repository\n/transfer[-issue]\n\n# change the state of rotten issues\n/remove-lifecycle rotten\n\n# change the state of stale issues\n/remove-lifecycle stale\n\n# assign sig to an issue\n/sig <sig_name>\n\n# add specific area", "completion": "/area <area_name>\n\n# for beginner friendly issues\n/good-first-issue\n\n# issues that needs help\n/help wanted\n\n# tagging issue as support specific\n/kind support\n\n# to accept triaging for an issue\n/triage accepted\n\n# closing an issue we won't be working on and haven't fixed yet\n/close not-planned", "language": "bash", "type": "shell-command", "context": "Issue Wranglers | To find more Prow commands, refer to the Command Help documentation.", "source": "https://kubernetes.io/docs/contribute/participate/issue-wrangler/", "id": "k8s_00535_code_completion_13780"} {"prompt": "Explain this bash code:\n\n# reopen an issue\n/reopen\n\n# transfer issues that don't fit in k/website to another repository\n/transfer[-issue]\n\n# change the state of rotten issues\n/remove-lifecycle rotten\n\n# change the state of stale issues\n/remove-lifecycle stale\n\n# assign sig to an issue\n/sig <sig_name>\n\n# add specific area\n/area <area_name>\n\n# for beginner friendly issues\n/good-first-issue\n\n# issues that needs help\n/help wanted\n\n# tagging issue as support specific\n/kind support\n\n# to accept triaging for an issue\n/triage accepted\n\n# closing an 
issue we won't be working on and haven't fixed yet\n/close not-planned", "completion": "This bash code Issue Wranglers | To find more Prow commands, refer to the Command Help documentation.", "language": "bash", "type": "shell-command_explanation", "context": "Issue Wranglers | To find more Prow commands, refer to the Command Help documentation.", "source": "https://kubernetes.io/docs/contribute/participate/issue-wrangler/", "id": "k8s_00535_code_explain_13781"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.readOnlyRootFilesystem", "completion": "This text code Windows containers in Kubernetes | In the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. If any of these fields is spe", "language": "text", "type": "code-example_explanation", "context": "Windows containers in Kubernetes | In the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. If any of these fields is spe", "source": "https://kubernetes.io/docs/concepts/windows/intro/", "id": "k8s_00537_code_explain_13784"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code Windows containers in Kubernetes | In the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. If any of these fields is spe", "language": "text", "type": "code-example_explanation", "context": "Windows containers in Kubernetes | In the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. 
If any of these fields is spe", "source": "https://kubernetes.io/docs/concepts/windows/intro/", "id": "k8s_00537_code_explain_13785"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container", "completion": " image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "language": "yaml", "type": "pod-definition", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_completion_13786"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container\n image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "completion": "This yaml code Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. This is helpful if the", "language": "yaml", "type": "pod-definition_explanation", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. 
The preStop command shuts down nginx gracefully. This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_explain_13787"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/lifecycle-events.yaml", "completion": "This bash code Attach Handlers to Container Lifecycle Events | Verify that the Container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Attach Handlers to Container Lifecycle Events | Verify that the Container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_explain_13788"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container", "completion": " image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "language": "yaml", "type": "pod-definition", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. 
This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_completion_13789"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container\n image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "completion": "This yaml code Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. This is helpful if the", "language": "yaml", "type": "pod-definition_explanation", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. 
This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_explain_13790"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/lifecycle-events.yaml", "completion": "This bash code Attach Handlers to Container Lifecycle Events | Verify that the Container in the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Attach Handlers to Container Lifecycle Events | Verify that the Container in the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_explain_13791"} {"prompt": "Complete this yaml code:\n\npods/lifecycle-events.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container", "completion": " image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "language": "yaml", "type": "pod-definition", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. 
This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_completion_13792"} {"prompt": "Explain this yaml code:\n\npods/lifecycle-events.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container\n image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "completion": "This yaml code Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. This is helpful if the", "language": "yaml", "type": "pod-definition_explanation", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. 
This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_explain_13793"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container", "completion": " image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "language": "yaml", "type": "pod-definition", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_completion_13794"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: lifecycle-demo\nspec:\n containers:\n - name: lifecycle-demo-container\n image: nginx\n lifecycle:\n postStart:\n exec:\n command: [\"/bin/sh\", \"-c\", \"echo Hello from the postStart handler > /usr/share/message\"]\n preStop:\n exec:\n command: [\"/bin/sh\",\"-c\",\"nginx -s quit; while killall -0 nginx; do sleep 1; done\"]", "completion": "This yaml code Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. The preStop command shuts down nginx gracefully. This is helpful if the", "language": "yaml", "type": "pod-definition_explanation", "context": "Attach Handlers to Container Lifecycle Events | In the configuration file, you can see that the postStart command writes a message file to the Container's /usr/share directory. 
The preStop command shuts down nginx gracefully. This is helpful if the", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/", "id": "k8s_00538_code_explain_13795"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=docker.io/jocatalin/kubernetes-bootcamp:v2", "completion": "This bash code Performing a Rolling Update | The command notified the Deployment to use a different image for your app and initiated a rolling update. Check the status of the new Pods, and view the old one terminating with the get pods subcomman", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | The command notified the Deployment to use a different image for your app and initiated a rolling update. Check the status of the new Pods, and view the old one terminating with the get pods subcomman", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13796"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"NodePort\" --port 8080", "completion": "This bash code Performing a Rolling Update | Create an environment variable called NODE_PORT that has the value of the Node port assigned:", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | Create an environment variable called NODE_PORT that has the value of the Node port assigned:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13797"} {"prompt": "Explain this text code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho \"NODE_PORT=$NODE_PORT\"", "completion": "This text code Performing a Rolling Update | Next, do a curl to the exposed IP and port:", "language": "text", "type": 
"code-example_explanation", "context": "Performing a Rolling Update | Next, do a curl to the exposed IP and port:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13798"} {"prompt": "Explain this bash code:\n\nkubectl rollout status deployments/kubernetes-bootcamp", "completion": "This bash code Performing a Rolling Update | To view the current image version of the app, run the describe pods subcommand:", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | To view the current image version of the app, run the describe pods subcommand:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13799"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/google-samples/kubernetes-bootcamp:v10", "completion": "This bash code Performing a Rolling Update | Use get deployments to see the status of the deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | Use get deployments to see the status of the deployment:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13800"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployments/kubernetes-bootcamp", "completion": "This bash code Performing a Rolling Update | The rollout undo command reverts the deployment to the previous known state (v2 of the image). Updates are versioned and you can revert to any previously known state of a Deployment.", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | The rollout undo command reverts the deployment to the previous known state (v2 of the image). 
Updates are versioned and you can revert to any previously known state of a Deployment.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13801"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/kubernetes-bootcamp services/kubernetes-bootcamp", "completion": "This bash code Performing a Rolling Update |", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update |", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13802"} {"prompt": "Explain this shell code:\n\nkubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=docker.io/jocatalin/kubernetes-bootcamp:v2", "completion": "This shell code Performing a Rolling Update | The command notified the Deployment to use a different image for your app and initiated a rolling update. Check the status of the new Pods, and view the old one terminating with the get pods subcomman", "language": "shell", "type": "code-example_explanation", "context": "Performing a Rolling Update | The command notified the Deployment to use a different image for your app and initiated a rolling update. 
Check the status of the new Pods, and view the old one terminating with the get pods subcomman", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13803"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"NodePort\" --port 8080", "completion": "This shell code Performing a Rolling Update | Create an environment variable called NODE_PORT that has the value of the Node port assigned:", "language": "shell", "type": "code-example_explanation", "context": "Performing a Rolling Update | Create an environment variable called NODE_PORT that has the value of the Node port assigned:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13804"} {"prompt": "Explain this shell code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho \"NODE_PORT=$NODE_PORT\"", "completion": "This shell code Performing a Rolling Update | Next, do a curl to the exposed IP and port:", "language": "shell", "type": "code-example_explanation", "context": "Performing a Rolling Update | Next, do a curl to the exposed IP and port:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13805"} {"prompt": "Explain this shell code:\n\nkubectl rollout status deployments/kubernetes-bootcamp", "completion": "This shell code Performing a Rolling Update | To view the current image version of the app, run the describe pods subcommand:", "language": "shell", "type": "code-example_explanation", "context": "Performing a Rolling Update | To view the current image version of the app, run the describe pods subcommand:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13806"} {"prompt": "Explain this shell code:\n\nkubectl set image 
deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/google-samples/kubernetes-bootcamp:v10", "completion": "This shell code Performing a Rolling Update | Use get deployments to see the status of the deployment:", "language": "shell", "type": "code-example_explanation", "context": "Performing a Rolling Update | Use get deployments to see the status of the deployment:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13807"} {"prompt": "Explain this shell code:\n\nkubectl rollout undo deployments/kubernetes-bootcamp", "completion": "This shell code Performing a Rolling Update | The rollout undo command reverts the deployment to the previous known state (v2 of the image). Updates are versioned and you can revert to any previously known state of a Deployment.", "language": "shell", "type": "code-example_explanation", "context": "Performing a Rolling Update | The rollout undo command reverts the deployment to the previous known state (v2 of the image). Updates are versioned and you can revert to any previously known state of a Deployment.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13808"} {"prompt": "Explain this shell code:\n\nkubectl delete deployments/kubernetes-bootcamp services/kubernetes-bootcamp", "completion": "This shell code Performing a Rolling Update |", "language": "shell", "type": "code-example_explanation", "context": "Performing a Rolling Update |", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13809"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=docker.io/jocatalin/kubernetes-bootcamp:v2", "completion": "This bash code Performing a Rolling Update | The command notified the Deployment to use a different image for your app and initiated a rolling update. 
Check the status of the new Pods, and view the old one terminating with the get pods subcomman", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | The command notified the Deployment to use a different image for your app and initiated a rolling update. Check the status of the new Pods, and view the old one terminating with the get pods subcomman", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13810"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment/kubernetes-bootcamp --type=\"NodePort\" --port 8080", "completion": "This bash code Performing a Rolling Update | Create an environment variable called NODE_PORT that has the value of the Node port assigned:", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | Create an environment variable called NODE_PORT that has the value of the Node port assigned:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13811"} {"prompt": "Explain this text code:\n\nexport NODE_PORT=\"$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')\"\necho \"NODE_PORT=$NODE_PORT\"", "completion": "This text code Performing a Rolling Update | Next, do a curl to the exposed IP and port:", "language": "text", "type": "code-example_explanation", "context": "Performing a Rolling Update | Next, do a curl to the exposed IP and port:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13812"} {"prompt": "Explain this bash code:\n\nkubectl rollout status deployments/kubernetes-bootcamp", "completion": "This bash code Performing a Rolling Update | To view the current image version of the app, run the describe pods subcommand:", "language": "bash", "type": "kubectl-command_explanation", 
"context": "Performing a Rolling Update | To view the current image version of the app, run the describe pods subcommand:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13813"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/google-samples/kubernetes-bootcamp:v10", "completion": "This bash code Performing a Rolling Update | Use get deployments to see the status of the deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | Use get deployments to see the status of the deployment:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13814"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployments/kubernetes-bootcamp", "completion": "This bash code Performing a Rolling Update | The rollout undo command reverts the deployment to the previous known state (v2 of the image). Updates are versioned and you can revert to any previously known state of a Deployment.", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update | The rollout undo command reverts the deployment to the previous known state (v2 of the image). 
Updates are versioned and you can revert to any previously known state of a Deployment.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13815"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/kubernetes-bootcamp services/kubernetes-bootcamp", "completion": "This bash code Performing a Rolling Update |", "language": "bash", "type": "kubectl-command_explanation", "context": "Performing a Rolling Update |", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/", "id": "k8s_00539_code_explain_13816"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: cpu-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47", "completion": " args:\n - \"stress\"\n - \"--cpus\"\n - \"1\"\n resources:\n limits:\n cpu: \"500m\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13817"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: cpu-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n args:\n - \"stress\"\n - \"--cpus\"\n - \"1\"\n resources:\n limits:\n cpu: \"500m\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "source": 
"https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13818"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"cpu-pressure-pod\"))'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | You will see the some PSI metrics for CPU increase in the summary API output. The avg10 value for some pressure should rise above zero, indicating that tasks are spending time stalled on the CPU.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | You will see the some PSI metrics for CPU increase in the summary API output. The avg10 value for some pressure should rise above zero, indicating that tasks are spending time stalled on the CPU.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13819"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_cpu_waiting_seconds_total{container=\"cpu-stress\"'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | The output should show an increasing value, indicating that the container is spending time stalled waiting for CPU resources.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | The output should show an increasing value, indicating that the container is spending time stalled waiting for CPU resources.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13820"} {"prompt": "Complete this yaml 
code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: memory-stress", "completion": " image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"i=0; while true; do dd if=/dev/zero of=testfile.$i bs=1M count=50 &>/dev/null; i=$(((i+1)%5)); sleep 0.1; done\"\n resources:\n limits:\n memory: \"200M\"\n requests:\n memory: \"200M\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13821"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: memory-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"i=0; while true; do dd if=/dev/zero of=testfile.$i bs=1M count=50 &>/dev/null; i=$(((i+1)%5)); sleep 0.1; done\"\n resources:\n limits:\n memory: \"200M\"\n requests:\n memory: \"200M\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13822"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"memory-pressure-pod\"))'", "completion": "This text code Understand Pressure Stall 
Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13823"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_memory_waiting_seconds_total{container=\"memory-stress\"'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | In the output, you will observe an increasing value for the metric, indicating that the system is under significant memory pressure.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | In the output, you will observe an increasing value for the metric, indicating that the system is under significant memory pressure.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13824"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: io-pressure-pod\nspec:\n restartPolicy: Never", "completion": " containers:\n - name: io-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"while true; do dd if=/dev/zero of=testfile bs=1M count=128 &>/dev/null; sync; rm testfile &>/dev/null; done\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13825"} {"prompt": "Explain this yaml code:\n\napiVersion: 
v1\nkind: Pod\nmetadata:\n name: io-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: io-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"while true; do dd if=/dev/zero of=testfile bs=1M count=128 &>/dev/null; sync; rm testfile &>/dev/null; done\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13826"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"io-pressure-pod\"))'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13827"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_io_waiting_seconds_total{container=\"io-stress\"'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | You will see the metric's value increase as the Pod continuously writes to disk.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information 
(PSI) Metrics | You will see the metric's value increase as the Pod continuously writes to disk.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13828"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: cpu-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47", "completion": " args:\n - \"stress\"\n - \"--cpus\"\n - \"1\"\n resources:\n limits:\n cpu: \"500m\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13829"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: cpu-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n args:\n - \"stress\"\n - \"--cpus\"\n - \"1\"\n resources:\n limits:\n cpu: \"500m\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13830"} {"prompt": "Explain this shell code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"cpu-pressure-pod\"))'", "completion": "This shell code Understand Pressure Stall 
Information (PSI) Metrics | You will see some PSI metrics for CPU increase in the summary API output. The avg10 value for some pressure should rise above zero, indicating that tasks are spending time stalled on the CPU.", "language": "shell", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | You will see some PSI metrics for CPU increase in the summary API output. The avg10 value for some pressure should rise above zero, indicating that tasks are spending time stalled on the CPU.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13831"} {"prompt": "Explain this shell code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_cpu_waiting_seconds_total{container=\"cpu-stress\"'", "completion": "This shell code Understand Pressure Stall Information (PSI) Metrics | The output should show an increasing value, indicating that the container is spending time stalled waiting for CPU resources.", "language": "shell", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | The output should show an increasing value, indicating that the container is spending time stalled waiting for CPU resources.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13832"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: memory-stress", "completion": " image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"i=0; while true; do dd if=/dev/zero of=testfile.$i bs=1M count=50 &>/dev/null; i=$(((i+1)%5)); sleep 0.1; done\"\n resources:\n limits:\n memory: \"200M\"\n requests:\n 
memory: \"200M\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13833"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: memory-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"i=0; while true; do dd if=/dev/zero of=testfile.$i bs=1M count=50 &>/dev/null; i=$(((i+1)%5)); sleep 0.1; done\"\n resources:\n limits:\n memory: \"200M\"\n requests:\n memory: \"200M\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13834"} {"prompt": "Explain this shell code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"memory-pressure-pod\"))'", "completion": "This shell code Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "language": "shell", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13835"} {"prompt": "Explain this shell code:\n\n# Replace <node-name> 
with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_memory_waiting_seconds_total{container=\"memory-stress\"'", "completion": "This shell code Understand Pressure Stall Information (PSI) Metrics | In the output, you will observe an increasing value for the metric, indicating that the system is under significant memory pressure.", "language": "shell", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | In the output, you will observe an increasing value for the metric, indicating that the system is under significant memory pressure.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13836"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: io-pressure-pod\nspec:\n restartPolicy: Never", "completion": " containers:\n - name: io-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"while true; do dd if=/dev/zero of=testfile bs=1M count=128 &>/dev/null; sync; rm testfile &>/dev/null; done\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13837"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: io-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: io-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"while true; do dd if=/dev/zero of=testfile bs=1M count=128 &>/dev/null; sync; rm testfile &>/dev/null; done\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply this 
to your cluster: kubectl apply -f io-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13838"} {"prompt": "Explain this shell code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"io-pressure-pod\"))'", "completion": "This shell code Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "language": "shell", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13839"} {"prompt": "Explain this shell code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_io_waiting_seconds_total{container=\"io-stress\"'", "completion": "This shell code Understand Pressure Stall Information (PSI) Metrics | You will see the metric's value increase as the Pod continuously writes to disk.", "language": "shell", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | You will see the metric's value increase as the Pod continuously writes to disk.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13840"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: cpu-stress\n image: 
registry.k8s.io/e2e-test-images/agnhost:2.47", "completion": " args:\n - \"stress\"\n - \"--cpus\"\n - \"1\"\n resources:\n limits:\n cpu: \"500m\"\n requests:\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13841"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: cpu-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n args:\n - \"stress\"\n - \"--cpus\"\n - \"1\"\n resources:\n limits:\n cpu: \"500m\"\n requests:\n cpu: \"500m\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f cpu-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13842"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"cpu-pressure-pod\"))'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | You will see some PSI metrics for CPU increase in the summary API output. 
The avg10 value for some pressure should rise above zero, indicating that tasks are spending time stalled on the CPU.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | You will see some PSI metrics for CPU increase in the summary API output. The avg10 value for some pressure should rise above zero, indicating that tasks are spending time stalled on the CPU.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13843"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_cpu_waiting_seconds_total{container=\"cpu-stress\"'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | The output should show an increasing value, indicating that the container is spending time stalled waiting for CPU resources.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | The output should show an increasing value, indicating that the container is spending time stalled waiting for CPU resources.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13844"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: memory-stress", "completion": " image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"i=0; while true; do dd if=/dev/zero of=testfile.$i bs=1M count=50 &>/dev/null; i=$(((i+1)%5)); sleep 0.1; done\"\n resources:\n limits:\n memory: \"200M\"\n requests:\n memory: \"200M\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall 
Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13845"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: memory-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"i=0; while true; do dd if=/dev/zero of=testfile.$i bs=1M count=50 &>/dev/null; i=$(((i+1)%5)); sleep 0.1; done\"\n resources:\n limits:\n memory: \"200M\"\n requests:\n memory: \"200M\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "language": "yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply it to your cluster: kubectl apply -f memory-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13846"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"memory-pressure-pod\"))'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13847"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw 
\"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_memory_waiting_seconds_total{container=\"memory-stress\"'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | In the output, you will observe an increasing value for the metric, indicating that the system is under significant memory pressure.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | In the output, you will observe an increasing value for the metric, indicating that the system is under significant memory pressure.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13848"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: io-pressure-pod\nspec:\n restartPolicy: Never", "completion": " containers:\n - name: io-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"while true; do dd if=/dev/zero of=testfile bs=1M count=128 &>/dev/null; sync; rm testfile &>/dev/null; done\"", "language": "yaml", "type": "pod-definition", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_completion_13849"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: io-pressure-pod\nspec:\n restartPolicy: Never\n containers:\n - name: io-stress\n image: registry.k8s.io/e2e-test-images/agnhost:2.47\n command: [\"/bin/sh\", \"-c\"]\n args:\n - \"while true; do dd if=/dev/zero of=testfile bs=1M count=128 &>/dev/null; sync; rm testfile &>/dev/null; done\"", "completion": "This yaml code Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "language": 
"yaml", "type": "pod-definition_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Apply this to your cluster: kubectl apply -f io-pressure-pod.yaml", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13850"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of a node in your cluster\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/stats/summary\" | jq '.pods[] | select(.podRef.name | contains(\"io-pressure-pod\"))'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | Using the Prometheus metrics endpoint:", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13851"} {"prompt": "Explain this text code:\n\n# Replace <node-name> with the name of the node where the pod is running\nkubectl get --raw \"/api/v1/nodes/<node-name>/proxy/metrics/cadvisor\" | \\\n grep 'container_pressure_io_waiting_seconds_total{container=\"io-stress\"'", "completion": "This text code Understand Pressure Stall Information (PSI) Metrics | You will see the metric's value increase as the Pod continuously writes to disk.", "language": "text", "type": "code-example_explanation", "context": "Understand Pressure Stall Information (PSI) Metrics | You will see the metric's value increase as the Pod continuously writes to disk.", "source": "https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/", "id": "k8s_00541_code_explain_13852"} {"prompt": "Explain this bash code:\n\nkubectl create role NAME --verb=verb --resource=resource.group/subresource [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This bash code kubectl create role | If true, ignore any errors in 
templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/", "id": "k8s_00543_code_explain_13853"} {"prompt": "Complete this text code:\n\n# Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods\n \n # Create a role named \"pod-reader\" with ResourceName specified\n kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": " \n # Create a role named \"foo\" with API Group specified\n kubectl create role foo --verb=get,list,watch --resource=rs.apps\n \n # Create a role named \"foo\" with SubResource specified\n kubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "language": "text", "type": "code-example", "context": "kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/", "id": "k8s_00543_code_completion_13854"} {"prompt": "Explain this text code:\n\n# Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods\n \n # Create a role named \"pod-reader\" with ResourceName specified\n kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n \n # Create a role named \"foo\" with API Group specified\n kubectl create role foo --verb=get,list,watch --resource=rs.apps\n \n # Create a role named \"foo\" with SubResource specified\n kubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This text code kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/", "id": "k8s_00543_code_explain_13855"} {"prompt": "Explain this bash code:\n\nkubectl create role NAME --verb=verb --resource=resource.group/subresource [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This bash code kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/", "id": "k8s_00543_code_explain_13856"} {"prompt": "Complete this text code:\n\n# Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods\n \n # Create a role named \"pod-reader\" with ResourceName specified\n kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": " \n # Create a role named \"foo\" with API Group specified\n kubectl create role foo --verb=get,list,watch --resource=rs.apps\n \n # Create a role named \"foo\" with SubResource specified\n kubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "language": "text", "type": "code-example", "context": "kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/", "id": "k8s_00543_code_completion_13857"} {"prompt": "Explain this text code:\n\n# Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods\n \n # Create a role named \"pod-reader\" with ResourceName specified\n kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n \n # Create a role named \"foo\" with API Group specified\n kubectl create role foo --verb=get,list,watch --resource=rs.apps\n \n # Create a role named \"foo\" with SubResource specified\n kubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This text code kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create role | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_role/", "id": "k8s_00543_code_explain_13858"} {"prompt": "Complete this text code:\n\n# Replace a pod using the data in pod.json\n kubectl replace -f ./pod.json\n \n # Replace a pod based on the JSON passed into stdin\n cat pod.json | kubectl replace -f -", "completion": " \n # Update a single-container pod's image version (tag) to v4\n kubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -\n \n # Force replace, delete and then re-create the resource\n kubectl replace --force -f ./pod.json", "language": "text", "type": "code-example", "context": "kubectl replace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_replace/", "id": "k8s_00544_code_completion_13859"} {"prompt": "Explain this text code:\n\n# Replace a pod using the data in pod.json\n kubectl replace -f ./pod.json\n \n # Replace a pod based on the JSON passed into stdin\n cat pod.json | kubectl replace -f -\n \n # Update a single-container pod's image version (tag) to v4\n kubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -\n \n # Force replace, delete and then re-create the resource\n kubectl replace --force -f ./pod.json", "completion": "This text code kubectl replace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl replace | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_replace/", "id": "k8s_00544_code_explain_13860"} {"prompt": "Complete this text code:\n\n# Replace a pod using the data in pod.json\n kubectl replace -f ./pod.json\n \n # Replace a pod based on the JSON passed into stdin\n cat pod.json | kubectl replace -f -", "completion": " \n # Update a single-container pod's image version (tag) to v4\n kubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -\n \n # Force replace, delete and then re-create the resource\n kubectl replace --force -f ./pod.json", "language": "text", "type": "code-example", "context": "kubectl replace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_replace/", "id": "k8s_00544_code_completion_13861"} {"prompt": "Explain this text code:\n\n# Replace a pod using the data in pod.json\n kubectl replace -f ./pod.json\n \n # Replace a pod based on the JSON passed into stdin\n cat pod.json | kubectl replace -f -\n \n # Update a single-container pod's image version (tag) to v4\n kubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -\n \n # Force replace, delete and then re-create the resource\n kubectl replace --force -f ./pod.json", "completion": "This text code kubectl replace | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl replace | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_replace/", "id": "k8s_00544_code_explain_13862"} {"prompt": "Explain this text code:\n\nsudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This text code Adding Linux worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13863"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token list", "completion": "This text code Adding Linux worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13864"} {"prompt": "Complete this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:", "completion": " signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "language": "text", "type": "code-example", "context": "Adding Linux worker nodes | By default, node join tokens expire after 24 hours. 
If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_completion_13865"} {"prompt": "Explain this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:\n signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "completion": "This text code Adding Linux worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13866"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token create", "completion": "This text code Adding Linux worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13867"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \\\n openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This text code Adding Linux worker nodes | 
The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13868"} {"prompt": "Explain this text code:\n\n8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78", "completion": "This text code Adding Linux worker nodes | The output of the kubeadm join command should look something like:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output of the kubeadm join command should look something like:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13869"} {"prompt": "Complete this json code:\n\n[preflight] Running pre-flight checks\n\n... (log output of join workflow) ...\n\nNode join complete:", "completion": "* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "language": "json", "type": "api-response", "context": "Adding Linux worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_completion_13870"} {"prompt": "Explain this json code:\n\n[preflight] Running pre-flight checks\n\n... 
(log output of join workflow) ...\n\nNode join complete:\n* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "completion": "This json code Adding Linux worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "language": "json", "type": "api-response_explanation", "context": "Adding Linux worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13871"} {"prompt": "Explain this bash code:\n\nsudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This bash code Adding Linux worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Linux worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13872"} {"prompt": "Explain this bash code:\n\n# Run this on a control plane node\nsudo kubeadm token list", "completion": "This bash code Adding Linux worker nodes | The output is similar to this:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Linux worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13873"} {"prompt": "Complete this console code:\n\nTOKEN TTL 
EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:", "completion": " signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "language": "console", "type": "code-example", "context": "Adding Linux worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_completion_13874"} {"prompt": "Explain this console code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:\n signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "completion": "This console code Adding Linux worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "language": "console", "type": "code-example_explanation", "context": "Adding Linux worker nodes | By default, node join tokens expire after 24 hours. 
If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13875"} {"prompt": "Explain this bash code:\n\n# Run this on a control plane node\nsudo kubeadm token create", "completion": "This bash code Adding Linux worker nodes | The output is similar to this:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Linux worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13876"} {"prompt": "Explain this bash code:\n\n# Run this on a control plane node\nsudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \\\n openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This bash code Adding Linux worker nodes | The output is similar to:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Linux worker nodes | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13877"} {"prompt": "Explain this console code:\n\n8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78", "completion": "This console code Adding Linux worker nodes | The output of the kubeadm join command should look something like:", "language": "console", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output of the kubeadm join command should look something like:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13878"} {"prompt": "Complete this json code:\n\n[preflight] Running pre-flight checks\n\n... 
(log output of join workflow) ...\n\nNode join complete:", "completion": "* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "language": "json", "type": "api-response", "context": "Adding Linux worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_completion_13879"} {"prompt": "Explain this json code:\n\n[preflight] Running pre-flight checks\n\n... (log output of join workflow) ...\n\nNode join complete:\n* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "completion": "This json code Adding Linux worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "language": "json", "type": "api-response_explanation", "context": "Adding Linux worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. 
(for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13880"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system rollout restart deployment coredns", "completion": "This bash code Adding Linux worker nodes |", "language": "bash", "type": "kubectl-command_explanation", "context": "Adding Linux worker nodes |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13881"} {"prompt": "Explain this text code:\n\nsudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This text code Adding Linux worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13882"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token list", "completion": "This text code Adding Linux worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13883"} {"prompt": "Complete this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:", "completion": " signing token generated by bootstrappers:\n 'kubeadm init'. 
kubeadm:\n default-node-token", "language": "text", "type": "code-example", "context": "Adding Linux worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_completion_13884"} {"prompt": "Explain this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:\n signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "completion": "This text code Adding Linux worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | By default, node join tokens expire after 24 hours. 
If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13885"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token create", "completion": "This text code Adding Linux worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13886"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \\\n openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This text code Adding Linux worker nodes | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13887"} {"prompt": "Explain this text code:\n\n8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78", "completion": "This text code Adding Linux worker nodes | The output of the kubeadm join command should look something like:", "language": "text", "type": "code-example_explanation", "context": "Adding Linux worker nodes | The output of the kubeadm join command should look something like:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/", "id": "k8s_00545_code_explain_13888"} {"prompt": "Explain this text code:\n\nkubeadm certs renew apiserver-kubelet-client [flags]", "completion": "This text code kubeadm certs | The path where to 
save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13889"} {"prompt": "Explain this text code:\n\nkubeadm certs renew controller-manager.conf [flags]", "completion": "This text code kubeadm certs | The path where to save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13890"} {"prompt": "Explain this text code:\n\nkubeadm certs renew etcd-healthcheck-client [flags]", "completion": "This text code kubeadm certs | The path where to save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13891"} {"prompt": "Explain this text code:\n\n# The following command will generate keys and CSRs for all control-plane certificates and kubeconfig files:\n kubeadm certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s/pki", "completion": "This text code kubeadm certs | The path where to save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13892"} {"prompt": "Explain this text code:\n\nkubeadm certs renew apiserver-kubelet-client [flags]", "completion": "This text code kubeadm certs | The path where to save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the 
certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13893"} {"prompt": "Explain this text code:\n\nkubeadm certs renew controller-manager.conf [flags]", "completion": "This text code kubeadm certs | The path where to save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13894"} {"prompt": "Explain this text code:\n\nkubeadm certs renew etcd-healthcheck-client [flags]", "completion": "This text code kubeadm certs | The path where to save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13895"} {"prompt": "Explain this text code:\n\n# The following command will generate keys and CSRs for all control-plane certificates and kubeconfig files:\n kubeadm certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s/pki", "completion": "This text code kubeadm certs | The path where to save the certificates", "language": "text", "type": "code-example_explanation", "context": "kubeadm certs | The path where to save the certificates", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-certs/", "id": "k8s_00546_code_explain_13896"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:eachKey=+k8s:minimum=1", "completion": " MyMap map[int]string `json:\"myMap\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. 
This means that all keys in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13897"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:eachKey=+k8s:minimum=1\n MyMap map[int]string `json:\"myMap\"`\n}", "completion": "This text code Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13898"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:eachVal=+k8s:minimum=1", "completion": " MyMap map[string]int `json:\"myMap\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each element in MyList. This means that all fields in MyStruct must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13899"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:eachVal=+k8s:minimum=1\n MyMap map[string]int `json:\"myMap\"`\n}", "completion": "This text code Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each element in MyList. 
This means that all fields in MyStruct must be >= 1.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each element in MyList. This means that all fields in MyStruct must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13900"} {"prompt": "Complete this text code:\n\n// +k8s:enum\ntype MyEnum string\n", "completion": "const (\n MyEnumA MyEnum = \"A\"\n MyEnumB MyEnum = \"B\"\n)", "language": "text", "type": "code-example", "context": "Declarative API Validation | Then, use this type in another struct:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13901"} {"prompt": "Explain this text code:\n\n// +k8s:enum\ntype MyEnum string\n\nconst (\n MyEnumA MyEnum = \"A\"\n MyEnumB MyEnum = \"B\"\n)", "completion": "This text code Declarative API Validation | Then, use this type in another struct:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | Then, use this type in another struct:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13902"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n MyField MyEnum `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | The validation logic will ensure that MyField is one of the defined enum values (\"A\" or \"B\").", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | The validation logic will ensure that MyField is one of the defined enum values (\"A\" or \"B\").", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13903"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // 
+k8s:forbidden", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13904"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:forbidden\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13905"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:format=k8s-ip\n IPAddress string `json:\"ipAddress\"`\n\n // +k8s:format=k8s-long-name", "completion": " Subdomain string `json:\"subdomain\"`\n\n // +k8s:format=k8s-short-name\n Label string `json:\"label\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13906"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:format=k8s-ip\n IPAddress string `json:\"ipAddress\"`\n\n // +k8s:format=k8s-long-name\n Subdomain string `json:\"subdomain\"`\n\n // +k8s:format=k8s-short-name\n Label string `json:\"label\"`\n}", "completion": "This text code Declarative API Validation | In this example:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": 
"https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13907"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:ifDisabled(\"my-feature\")=+k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13908"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:ifDisabled(\"my-feature\")=+k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13909"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:ifEnabled(\"my-feature\")=+k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13910"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:ifEnabled(\"my-feature\")=+k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "language": "text", "type": 
"code-example_explanation", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13911"} {"prompt": "Complete this go code:\n\n// This enables the validation dispatcher to handle requests for \"/scale\".\n// +k8s:supportsSubresource=\"/scale\"", "completion": "package v1\n\n// ... includes the definition for the Deployment type", "language": "go", "type": "code-example", "context": "Declarative API Validation | 2. Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13912"} {"prompt": "Explain this go code:\n\n// This enables the validation dispatcher to handle requests for \"/scale\".\n// +k8s:supportsSubresource=\"/scale\"\npackage v1\n\n// ... includes the definition for the Deployment type", "completion": "This go code Declarative API Validation | 2. Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | 2. 
Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13913"} {"prompt": "Complete this go code:\n\n// This ensures the rules in this package ONLY run for the \"/scale\" subresource.\n// +k8s:isSubresource=\"/scale\"\npackage scale\n", "completion": "import \"k8s.io/api/apps/v1\"\n\n// Validation code in this package would reference types from package v1 (e.g., v1.Scale).\n// The generated validation function will only be invoked for requests to the \"/scale\"\n// subresource of a type defined in a package that supports it.", "language": "go", "type": "code-example", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13914"} {"prompt": "Explain this go code:\n\n// This ensures the rules in this package ONLY run for the \"/scale\" subresource.\n// +k8s:isSubresource=\"/scale\"\npackage scale\n\nimport \"k8s.io/api/apps/v1\"\n\n// Validation code in this package would reference types from package v1 (e.g., v1.Scale).\n// The generated validation function will only be invoked for requests to the \"/scale\"\n// subresource of a type defined in a package that supports it.", "completion": "This go code Declarative API Validation | Description:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13915"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\t// +k8s:listType=map\n\t// +k8s:listMapKey=type\n\t// +k8s:item(type: \"Approved\")=+k8s:zeroOrOneOfMember\n\t// +k8s:item(type: \"Denied\")=+k8s:zeroOrOneOfMember\n\tMyConditions 
[]MyCondition `json:\"conditions\"`", "completion": "}\n\ntype MyCondition struct {\n Type string `json:\"type\"`\n Status string `json:\"status\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13916"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\t// +k8s:listType=map\n\t// +k8s:listMapKey=type\n\t// +k8s:item(type: \"Approved\")=+k8s:zeroOrOneOfMember\n\t// +k8s:item(type: \"Denied\")=+k8s:zeroOrOneOfMember\n\tMyConditions []MyCondition `json:\"conditions\"`\n}\n\ntype MyCondition struct {\n Type string `json:\"type\"`\n Status string `json:\"status\"`\n}", "completion": "This text code Declarative API Validation | In this example:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13917"} {"prompt": "Complete this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyFieldOne\n// +k8s:listMapKey=keyFieldTwo\ntype MyList []MyStruct\n", "completion": "type MyStruct struct {\n keyFieldOne string `json:\"keyFieldOne\"`\n keyFieldTwo string `json:\"keyFieldTwo\"`\n valueField string `json:\"valueField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, listMapKey is used to specify that the keyField of MyStruct should be used as the key for the list-map.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13918"} {"prompt": "Explain this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyFieldOne\n// +k8s:listMapKey=keyFieldTwo\ntype MyList []MyStruct\n\ntype MyStruct struct {\n keyFieldOne string `json:\"keyFieldOne\"`\n keyFieldTwo string 
`json:\"keyFieldTwo\"`\n valueField string `json:\"valueField\"`\n}", "completion": "This text code Declarative API Validation | In this example, listMapKey is used to specify that the keyField of MyStruct should be used as the key for the list-map.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, listMapKey is used to specify that the keyField of MyStruct should be used as the key for the list-map.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13919"} {"prompt": "Complete this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyField\ntype MyList []MyStruct\n", "completion": "type MyStruct struct {\n keyField string `json:\"keyField\"`\n valueField string `json:\"valueField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. This means that the validation logic will ensure that each element in the list has a unique keyField.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13920"} {"prompt": "Explain this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyField\ntype MyList []MyStruct\n\ntype MyStruct struct {\n keyField string `json:\"keyField\"`\n valueField string `json:\"valueField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. This means that the validation logic will ensure that each element in the list has a unique keyField.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. 
This means that the validation logic will ensure that each element in the list has a unique keyField.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13921"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:maxItems=5", "completion": " MyList []string `json:\"myList\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13922"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:maxItems=5\n MyList []string `json:\"myList\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13923"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:maxLength=10", "completion": " MyString string `json:\"myString\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13924"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:maxLength=10\n MyString string `json:\"myString\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", 
"source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13925"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:minimum=0", "completion": " MyInt int `json:\"myInt\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13926"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:minimum=0\n MyInt int `json:\"myInt\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13927"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:neq=\"disallowed\"\n MyString string `json:\"myString\"`\n\n // +k8s:neq=0", "completion": " MyInt int `json:\"myInt\"`\n\n // +k8s:neq=true\n MyBool bool `json:\"myBool\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13928"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:neq=\"disallowed\"\n MyString string `json:\"myString\"`\n\n // +k8s:neq=0\n MyInt int `json:\"myInt\"`\n\n // +k8s:neq=true\n MyBool bool `json:\"myBool\"`\n}", "completion": "This text code Declarative API Validation | In this example:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": 
"https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13929"} {"prompt": "Complete this go code:\n\nimport \"some/external/package\"\n\ntype MyStruct struct {", "completion": " // +k8s:opaqueType\n ExternalField package.ExternalType `json:\"externalField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13930"} {"prompt": "Explain this go code:\n\nimport \"some/external/package\"\n\ntype MyStruct struct {\n // +k8s:opaqueType\n ExternalField package.ExternalType `json:\"externalField\"`\n}", "completion": "This go code Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13931"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:optional", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13932"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:optional\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "language": "text", "type": 
"code-example_explanation", "context": "Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13933"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13934"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13935"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:subfield(\"mySubfield\")=+k8s:required\n MyStruct MyStruct `json:\"MyStruct\"`\n}", "completion": "\ntype MyStruct struct {\n MySubfield string `json:\"mySubfield\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MySubfield within MyStruct is required.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13936"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:subfield(\"mySubfield\")=+k8s:required\n MyStruct MyStruct `json:\"MyStruct\"`\n}\n\ntype MyStruct struct {\n MySubfield string 
`json:\"mySubfield\"`\n}", "completion": "This text code Declarative API Validation | In this example, MySubfield within MyStruct is required.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MySubfield within MyStruct is required.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13937"} {"prompt": "Explain this go code:\n\n// +k8s:supportsSubresource=\"/status\"\n// +k8s:supportsSubresource=\"/scale\"\npackage v1", "completion": "This go code Declarative API Validation | Description:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13938"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\tTypeMeta int\n\n\t// +k8s:unionDiscriminator\n\tD D `json:\"d\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional", "completion": "\tM2 *M2 `json:\"m2\"`\n}\n\ntype D string\n\nconst (\n\tDM1 D = \"M1\"\n\tDM2 D = \"M2\"\n)\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, the D field is the discriminator for the union. 
The value of D will determine which of the union members (M1 or M2) is expected to be present.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13939"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\tTypeMeta int\n\n\t// +k8s:unionDiscriminator\n\tD D `json:\"d\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype D string\n\nconst (\n\tDM1 D = \"M1\"\n\tDM2 D = \"M2\"\n)\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This text code Declarative API Validation | In this example, the D field is the discriminator for the union. The value of D will determine which of the union members (M1 or M2) is expected to be present.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, the D field is the discriminator for the union. The value of D will determine which of the union members (M1 or M2) is expected to be present.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13940"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\t// +k8s:unionMember(union: \"union1\")\n\t// +k8s:optional\n\tM1 *M1 `json:\"u1m1\"`\n\n\t// +k8s:unionMember(union: \"union1\")", "completion": "\t// +k8s:optional\n\tM2 *M2 `json:\"u1m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13941"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\t// +k8s:unionMember(union: \"union1\")\n\t// +k8s:optional\n\tM1 *M1 `json:\"u1m1\"`\n\n\t// +k8s:unionMember(union: \"union1\")\n\t// 
+k8s:optional\n\tM2 *M2 `json:\"u1m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This text code Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13942"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:zeroOrOneOfMember", "completion": "\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, at most one of M1 or M2 can be set. It is also valid for neither to be set.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13943"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This text code Declarative API Validation | In this example, at most one of M1 or M2 can be set. It is also valid for neither to be set.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, at most one of M1 or M2 can be set. 
It is also valid for neither to be set.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13944"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:eachKey=+k8s:minimum=1", "completion": " MyMap map[int]string `json:\"myMap\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13945"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:eachKey=+k8s:minimum=1\n MyMap map[int]string `json:\"myMap\"`\n}", "completion": "This go code Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13946"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:eachVal=+k8s:minimum=1", "completion": " MyMap map[string]int `json:\"myMap\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each value in MyMap. 
This means that all values in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13947"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:eachVal=+k8s:minimum=1\n MyMap map[string]int `json:\"myMap\"`\n}", "completion": "This go code Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each value in MyMap. This means that all values in the map must be >= 1.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each value in MyMap. This means that all values in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13948"} {"prompt": "Complete this go code:\n\n// +k8s:enum\ntype MyEnum string\n", "completion": "const (\n MyEnumA MyEnum = \"A\"\n MyEnumB MyEnum = \"B\"\n)", "language": "go", "type": "code-example", "context": "Declarative API Validation | Then, use this type in another struct:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13949"} {"prompt": "Explain this go code:\n\n// +k8s:enum\ntype MyEnum string\n\nconst (\n MyEnumA MyEnum = \"A\"\n MyEnumB MyEnum = \"B\"\n)", "completion": "This go code Declarative API Validation | Then, use this type in another struct:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | Then, use this type in another struct:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13950"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n MyField MyEnum `json:\"myField\"`\n}", "completion": "This go code Declarative API Validation | The validation 
logic will ensure that MyField is one of the defined enum values (\"A\" or \"B\").", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | The validation logic will ensure that MyField is one of the defined enum values (\"A\" or \"B\").", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13951"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:forbidden", "completion": " MyField string `json:\"myField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13952"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:forbidden\n MyField string `json:\"myField\"`\n}", "completion": "This go code Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13953"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:format=k8s-ip\n IPAddress string `json:\"ipAddress\"`\n\n // +k8s:format=k8s-long-name", "completion": " Subdomain string `json:\"subdomain\"`\n\n // +k8s:format=k8s-short-name\n Label string `json:\"label\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13954"} {"prompt": "Explain this go code:\n\ntype 
MyStruct struct {\n // +k8s:format=k8s-ip\n IPAddress string `json:\"ipAddress\"`\n\n // +k8s:format=k8s-long-name\n Subdomain string `json:\"subdomain\"`\n\n // +k8s:format=k8s-short-name\n Label string `json:\"label\"`\n}", "completion": "This go code Declarative API Validation | In this example:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13955"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:ifDisabled(\"my-feature\")=+k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13956"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:ifDisabled(\"my-feature\")=+k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This go code Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13957"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:ifEnabled(\"my-feature\")=+k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "source": 
"https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13958"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:ifEnabled(\"my-feature\")=+k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This go code Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13959"} {"prompt": "Complete this go code:\n\n// This enables the validation dispatcher to handle requests for \"/scale\".\n// +k8s:supportsSubresource=\"/scale\"", "completion": "package v1\n\n// ... includes the definition for the Deployment type", "language": "go", "type": "code-example", "context": "Declarative API Validation | 2. Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13960"} {"prompt": "Explain this go code:\n\n// This enables the validation dispatcher to handle requests for \"/scale\".\n// +k8s:supportsSubresource=\"/scale\"\npackage v1\n\n// ... includes the definition for the Deployment type", "completion": "This go code Declarative API Validation | 2. Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | 2. 
Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13961"} {"prompt": "Explain this text code:\n\nstaging/src/k8s.io/api/apps/v1/validations/scale/doc.go", "completion": "This text code Declarative API Validation | Description:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13962"} {"prompt": "Complete this go code:\n\n// This ensures the rules in this package ONLY run for the \"/scale\" subresource.\n// +k8s:isSubresource=\"/scale\"\npackage scale\n", "completion": "import \"k8s.io/api/apps/v1\"\n\n// Validation code in this package would reference types from package v1 (e.g., v1.Scale).\n// The generated validation function will only be invoked for requests to the \"/scale\"\n// subresource of a type defined in a package that supports it.", "language": "go", "type": "code-example", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13963"} {"prompt": "Explain this go code:\n\n// This ensures the rules in this package ONLY run for the \"/scale\" subresource.\n// +k8s:isSubresource=\"/scale\"\npackage scale\n\nimport \"k8s.io/api/apps/v1\"\n\n// Validation code in this package would reference types from package v1 (e.g., v1.Scale).\n// The generated validation function will only be invoked for requests to the \"/scale\"\n// subresource of a type defined in a package that supports it.", "completion": "This go code Declarative API Validation | Description:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | 
Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13964"} {"prompt": "Explain this text code:\n\n+k8s:item(<listMapKey-JSON-field-name>: <value>,...)=<validation-tag>", "completion": "This text code Declarative API Validation | +k8s:item(stringKey: \"value\", intKey: 42, boolKey: true)=<validation-tag>", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | +k8s:item(stringKey: \"value\", intKey: 42, boolKey: true)=<validation-tag>", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13965"} {"prompt": "Explain this text code:\n\n+k8s:item(stringKey: \"value\", intKey: 42, boolKey: true)=<validation-tag>", "completion": "This text code Declarative API Validation | Arguments must be named with the JSON names of the list-map key fields. Values can be strings, integers, or booleans.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | Arguments must be named with the JSON names of the list-map key fields. 
Values can be strings, integers, or booleans.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13966"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n\t// +k8s:listType=map\n\t// +k8s:listMapKey=type\n\t// +k8s:item(type: \"Approved\")=+k8s:zeroOrOneOfMember\n\t// +k8s:item(type: \"Denied\")=+k8s:zeroOrOneOfMember\n\tMyConditions []MyCondition `json:\"conditions\"`", "completion": "}\n\ntype MyCondition struct {\n Type string `json:\"type\"`\n Status string `json:\"status\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13967"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n\t// +k8s:listType=map\n\t// +k8s:listMapKey=type\n\t// +k8s:item(type: \"Approved\")=+k8s:zeroOrOneOfMember\n\t// +k8s:item(type: \"Denied\")=+k8s:zeroOrOneOfMember\n\tMyConditions []MyCondition `json:\"conditions\"`\n}\n\ntype MyCondition struct {\n Type string `json:\"type\"`\n Status string `json:\"status\"`\n}", "completion": "This go code Declarative API Validation | In this example:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13968"} {"prompt": "Complete this go code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyFieldOne\n// +k8s:listMapKey=keyFieldTwo\ntype MyList []MyStruct\n", "completion": "type MyStruct struct {\n keyFieldOne string `json:\"keyFieldOne\"`\n keyFieldTwo string `json:\"keyFieldTwo\"`\n valueField string `json:\"valueField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, listMapKey is used to specify that the keyFieldOne and keyFieldTwo fields of MyStruct should be used as the keys for 
the list-map.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13969"} {"prompt": "Explain this go code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyFieldOne\n// +k8s:listMapKey=keyFieldTwo\ntype MyList []MyStruct\n\ntype MyStruct struct {\n keyFieldOne string `json:\"keyFieldOne\"`\n keyFieldTwo string `json:\"keyFieldTwo\"`\n valueField string `json:\"valueField\"`\n}", "completion": "This go code Declarative API Validation | In this example, listMapKey is used to specify that the keyFieldOne and keyFieldTwo fields of MyStruct should be used as the keys for the list-map.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, listMapKey is used to specify that the keyFieldOne and keyFieldTwo fields of MyStruct should be used as the keys for the list-map.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13970"} {"prompt": "Complete this go code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyField\ntype MyList []MyStruct\n", "completion": "type MyStruct struct {\n keyField string `json:\"keyField\"`\n valueField string `json:\"valueField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. This means that the validation logic will ensure that each element in the list has a unique keyField.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13971"} {"prompt": "Explain this go code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyField\ntype MyList []MyStruct\n\ntype MyStruct struct {\n keyField string `json:\"keyField\"`\n valueField string `json:\"valueField\"`\n}", "completion": "This go code Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. 
This means that the validation logic will ensure that each element in the list has a unique keyField.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. This means that the validation logic will ensure that each element in the list has a unique keyField.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13972"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:maxItems=5", "completion": " MyList []string `json:\"myList\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13973"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:maxItems=5\n MyList []string `json:\"myList\"`\n}", "completion": "This go code Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13974"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:maxLength=10", "completion": " MyString string `json:\"myString\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13975"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:maxLength=10\n MyString string `json:\"myString\"`\n}", "completion": "This go 
code Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13976"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:minimum=0", "completion": " MyInt int `json:\"myInt\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13977"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:minimum=0\n MyInt int `json:\"myInt\"`\n}", "completion": "This go code Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13978"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:neq=\"disallowed\"\n MyString string `json:\"myString\"`\n\n // +k8s:neq=0", "completion": " MyInt int `json:\"myInt\"`\n\n // +k8s:neq=true\n MyBool bool `json:\"myBool\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13979"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:neq=\"disallowed\"\n MyString string `json:\"myString\"`\n\n // +k8s:neq=0\n MyInt int `json:\"myInt\"`\n\n // +k8s:neq=true\n MyBool bool `json:\"myBool\"`\n}", 
"completion": "This go code Declarative API Validation | In this example:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13980"} {"prompt": "Complete this go code:\n\nimport \"some/external/package\"\n\ntype MyStruct struct {", "completion": " // +k8s:opaqueType\n ExternalField package.ExternalType `json:\"externalField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13981"} {"prompt": "Explain this go code:\n\nimport \"some/external/package\"\n\ntype MyStruct struct {\n // +k8s:opaqueType\n ExternalField package.ExternalType `json:\"externalField\"`\n}", "completion": "This go code Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13982"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:optional", "completion": " MyField string `json:\"myField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13983"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:optional\n MyField string `json:\"myField\"`\n}", "completion": "This 
go code Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13984"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13985"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // +k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This go code Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13986"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n // +k8s:subfield(\"mySubfield\")=+k8s:required\n MyStruct MyStruct `json:\"MyStruct\"`\n}", "completion": "\ntype MyStruct struct {\n MySubfield string `json:\"mySubfield\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, MySubfield within MyStruct is required.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13987"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n // 
+k8s:subfield(\"mySubfield\")=+k8s:required\n MyStruct MyStruct `json:\"MyStruct\"`\n}\n\ntype MyStruct struct {\n MySubfield string `json:\"mySubfield\"`\n}", "completion": "This go code Declarative API Validation | In this example, MySubfield within MyStruct is required.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MySubfield within MyStruct is required.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13988"} {"prompt": "Explain this go code:\n\n// +k8s:supportsSubresource=\"/status\"\n// +k8s:supportsSubresource=\"/scale\"\npackage v1", "completion": "This go code Declarative API Validation | Description:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13989"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n\tTypeMeta int\n\n\t// +k8s:unionDiscriminator\n\tD D `json:\"d\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional", "completion": "\tM2 *M2 `json:\"m2\"`\n}\n\ntype D string\n\nconst (\n\tDM1 D = \"M1\"\n\tDM2 D = \"M2\"\n)\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, the Type field is the discriminator for the union. 
The value of Type will determine which of the union members (M1 or M2) is expected to be present.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13990"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n\tTypeMeta int\n\n\t// +k8s:unionDiscriminator\n\tD D `json:\"d\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype D string\n\nconst (\n\tDM1 D = \"M1\"\n\tDM2 D = \"M2\"\n)\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This go code Declarative API Validation | In this example, the Type field is the discriminator for the union. The value of Type will determine which of the union members (M1 or M2) is expected to be present.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, the Type field is the discriminator for the union. The value of Type will determine which of the union members (M1 or M2) is expected to be present.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13991"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n\t// +k8s:unionMember(union: \"union1\")\n\t// +k8s:optional\n\tM1 *M1 `json:\"u1m1\"`\n\n\t// +k8s:unionMember(union: \"union1\")", "completion": "\t// +k8s:optional\n\tM2 *M2 `json:\"u1m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13992"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n\t// +k8s:unionMember(union: \"union1\")\n\t// +k8s:optional\n\tM1 *M1 `json:\"u1m1\"`\n\n\t// +k8s:unionMember(union: \"union1\")\n\t// 
+k8s:optional\n\tM2 *M2 `json:\"u1m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This go code Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13993"} {"prompt": "Complete this go code:\n\ntype MyStruct struct {\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:zeroOrOneOfMember", "completion": "\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, at most one of M1 or M2 can be set. It is also valid for neither to be set.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13994"} {"prompt": "Explain this go code:\n\ntype MyStruct struct {\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This go code Declarative API Validation | In this example, at most one of M1 or M2 can be set. It is also valid for neither to be set.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, at most one of M1 or M2 can be set. 
It is also valid for neither to be set.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13995"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:eachKey=+k8s:minimum=1", "completion": " MyMap map[int]string `json:\"myMap\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13996"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:eachKey=+k8s:minimum=1\n MyMap map[int]string `json:\"myMap\"`\n}", "completion": "This text code Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, eachKey is used to specify that the +k8s:minimum tag should be applied to each int key in MyMap. This means that all keys in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13997"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:eachVal=+k8s:minimum=1", "completion": " MyMap map[string]int `json:\"myMap\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each value in MyMap. 
This means that all values in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_13998"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:eachVal=+k8s:minimum=1\n MyMap map[string]int `json:\"myMap\"`\n}", "completion": "This text code Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each value in MyMap. This means that all values in the map must be >= 1.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, eachVal is used to specify that the +k8s:minimum tag should be applied to each value in MyMap. This means that all values in the map must be >= 1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_13999"} {"prompt": "Complete this text code:\n\n// +k8s:enum\ntype MyEnum string\n", "completion": "const (\n MyEnumA MyEnum = \"A\"\n MyEnumB MyEnum = \"B\"\n)", "language": "text", "type": "code-example", "context": "Declarative API Validation | Then, use this type in another struct:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14000"} {"prompt": "Explain this text code:\n\n// +k8s:enum\ntype MyEnum string\n\nconst (\n MyEnumA MyEnum = \"A\"\n MyEnumB MyEnum = \"B\"\n)", "completion": "This text code Declarative API Validation | Then, use this type in another struct:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | Then, use this type in another struct:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14001"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n MyField MyEnum `json:\"myField\"`\n}", "completion": "This text code Declarative API 
Validation | The validation logic will ensure that MyField is one of the defined enum values (\"A\" or \"B\").", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | The validation logic will ensure that MyField is one of the defined enum values (\"A\" or \"B\").", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14002"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:forbidden", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14003"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:forbidden\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField cannot be provided (it is forbidden) when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14004"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:format=k8s-ip\n IPAddress string `json:\"ipAddress\"`\n\n // +k8s:format=k8s-long-name", "completion": " Subdomain string `json:\"subdomain\"`\n\n // +k8s:format=k8s-short-name\n Label string `json:\"label\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14005"} 
{"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:format=k8s-ip\n IPAddress string `json:\"ipAddress\"`\n\n // +k8s:format=k8s-long-name\n Subdomain string `json:\"subdomain\"`\n\n // +k8s:format=k8s-short-name\n Label string `json:\"label\"`\n}", "completion": "This text code Declarative API Validation | In this example:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14006"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:ifDisabled(\"my-feature\")=+k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14007"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:ifDisabled(\"my-feature\")=+k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is disabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14008"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:ifEnabled(\"my-feature\")=+k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", 
"source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14009"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:ifEnabled(\"my-feature\")=+k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField is required only if the \"my-feature\" option is enabled.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14010"} {"prompt": "Complete this go code:\n\n// This enables the validation dispatcher to handle requests for \"/scale\".\n// +k8s:supportsSubresource=\"/scale\"", "completion": "package v1\n\n// ... includes the definition for the Deployment type", "language": "go", "type": "code-example", "context": "Declarative API Validation | 2. Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14011"} {"prompt": "Explain this go code:\n\n// This enables the validation dispatcher to handle requests for \"/scale\".\n// +k8s:supportsSubresource=\"/scale\"\npackage v1\n\n// ... includes the definition for the Deployment type", "completion": "This go code Declarative API Validation | 2. Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | 2. 
Scope Validation Logic in a Separate Package: Next, create a separate package for the validation rules that are specific only to the /scale subresource.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14012"} {"prompt": "Complete this go code:\n\n// This ensures the rules in this package ONLY run for the \"/scale\" subresource.\n// +k8s:isSubresource=\"/scale\"\npackage scale\n", "completion": "import \"k8s.io/api/apps/v1\"\n\n// Validation code in this package would reference types from package v1 (e.g., v1.Scale).\n// The generated validation function will only be invoked for requests to the \"/scale\"\n// subresource of a type defined in a package that supports it.", "language": "go", "type": "code-example", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14013"} {"prompt": "Explain this go code:\n\n// This ensures the rules in this package ONLY run for the \"/scale\" subresource.\n// +k8s:isSubresource=\"/scale\"\npackage scale\n\nimport \"k8s.io/api/apps/v1\"\n\n// Validation code in this package would reference types from package v1 (e.g., v1.Scale).\n// The generated validation function will only be invoked for requests to the \"/scale\"\n// subresource of a type defined in a package that supports it.", "completion": "This go code Declarative API Validation | Description:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14014"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\t// +k8s:listType=map\n\t// +k8s:listMapKey=type\n\t// +k8s:item(type: \"Approved\")=+k8s:zeroOrOneOfMember\n\t// +k8s:item(type: \"Denied\")=+k8s:zeroOrOneOfMember\n\tMyConditions 
[]MyCondition `json:\"conditions\"`", "completion": "}\n\ntype MyCondition struct {\n Type string `json:\"type\"`\n Status string `json:\"status\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14015"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\t// +k8s:listType=map\n\t// +k8s:listMapKey=type\n\t// +k8s:item(type: \"Approved\")=+k8s:zeroOrOneOfMember\n\t// +k8s:item(type: \"Denied\")=+k8s:zeroOrOneOfMember\n\tMyConditions []MyCondition `json:\"conditions\"`\n}\n\ntype MyCondition struct {\n Type string `json:\"type\"`\n Status string `json:\"status\"`\n}", "completion": "This text code Declarative API Validation | In this example:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14016"} {"prompt": "Complete this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyFieldOne\n// +k8s:listMapKey=keyFieldTwo\ntype MyList []MyStruct\n", "completion": "type MyStruct struct {\n keyFieldOne string `json:\"keyFieldOne\"`\n keyFieldTwo string `json:\"keyFieldTwo\"`\n valueField string `json:\"valueField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, listMapKey is used to specify that the keyFieldOne and keyFieldTwo fields of MyStruct should be used as the keys for the list-map.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14017"} {"prompt": "Explain this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyFieldOne\n// +k8s:listMapKey=keyFieldTwo\ntype MyList []MyStruct\n\ntype MyStruct struct {\n keyFieldOne string `json:\"keyFieldOne\"`\n keyFieldTwo string 
`json:\"keyFieldTwo\"`\n valueField string `json:\"valueField\"`\n}", "completion": "This text code Declarative API Validation | In this example, listMapKey is used to specify that the keyFieldOne and keyFieldTwo fields of MyStruct should be used as the keys for the list-map.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, listMapKey is used to specify that the keyFieldOne and keyFieldTwo fields of MyStruct should be used as the keys for the list-map.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14018"} {"prompt": "Complete this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyField\ntype MyList []MyStruct\n", "completion": "type MyStruct struct {\n keyField string `json:\"keyField\"`\n valueField string `json:\"valueField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. This means that the validation logic will ensure that each element in the list has a unique keyField.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14019"} {"prompt": "Explain this text code:\n\n// +k8s:listType=map\n// +k8s:listMapKey=keyField\ntype MyList []MyStruct\n\ntype MyStruct struct {\n keyField string `json:\"keyField\"`\n valueField string `json:\"valueField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyList is declared as a list of type map, with keyField as the key. 
This means that the validation logic will ensure that each element in the list has a unique keyField.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14020"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:maxItems=5", "completion": " MyList []string `json:\"myList\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14021"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:maxItems=5\n MyList []string `json:\"myList\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyList cannot contain more than 5 items.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14022"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:maxLength=10", "completion": " MyString string `json:\"myString\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14023"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:maxLength=10\n MyString string `json:\"myString\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyString cannot be longer than 10 characters.", 
"source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14024"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:minimum=0", "completion": " MyInt int `json:\"myInt\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14025"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:minimum=0\n MyInt int `json:\"myInt\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyInt must be greater than or equal to 0.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14026"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:neq=\"disallowed\"\n MyString string `json:\"myString\"`\n\n // +k8s:neq=0", "completion": " MyInt int `json:\"myInt\"`\n\n // +k8s:neq=true\n MyBool bool `json:\"myBool\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14027"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:neq=\"disallowed\"\n MyString string `json:\"myString\"`\n\n // +k8s:neq=0\n MyInt int `json:\"myInt\"`\n\n // +k8s:neq=true\n MyBool bool `json:\"myBool\"`\n}", "completion": "This text code Declarative API Validation | In this example:", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example:", "source": 
"https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14028"} {"prompt": "Complete this go code:\n\nimport \"some/external/package\"\n\ntype MyStruct struct {", "completion": " // +k8s:opaqueType\n ExternalField package.ExternalType `json:\"externalField\"`\n}", "language": "go", "type": "code-example", "context": "Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14029"} {"prompt": "Explain this go code:\n\nimport \"some/external/package\"\n\ntype MyStruct struct {\n // +k8s:opaqueType\n ExternalField package.ExternalType `json:\"externalField\"`\n}", "completion": "This go code Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, any validation tags on package.ExternalType will be ignored.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14030"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:optional", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14031"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:optional\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "language": "text", "type": 
"code-example_explanation", "context": "Declarative API Validation | In this example, MyField is not required to be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14032"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:required", "completion": " MyField string `json:\"myField\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14033"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:required\n MyField string `json:\"myField\"`\n}", "completion": "This text code Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MyField must be provided when creating or updating MyStruct.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14034"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n // +k8s:subfield(\"mySubfield\")=+k8s:required\n MyStruct MyStruct `json:\"MyStruct\"`\n}", "completion": "\ntype MyStruct struct {\n MySubfield string `json:\"mySubfield\"`\n}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, MySubfield within MyStruct is required.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14035"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n // +k8s:subfield(\"mySubfield\")=+k8s:required\n MyStruct MyStruct `json:\"MyStruct\"`\n}\n\ntype MyStruct struct {\n MySubfield string 
`json:\"mySubfield\"`\n}", "completion": "This text code Declarative API Validation | In this example, MySubfield within MyStruct is required.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, MySubfield within MyStruct is required.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14036"} {"prompt": "Explain this go code:\n\n// +k8s:supportsSubresource=\"/status\"\n// +k8s:supportsSubresource=\"/scale\"\npackage v1", "completion": "This go code Declarative API Validation | Description:", "language": "go", "type": "code-example_explanation", "context": "Declarative API Validation | Description:", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14037"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\tTypeMeta int\n\n\t// +k8s:unionDiscriminator\n\tD D `json:\"d\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional", "completion": "\tM2 *M2 `json:\"m2\"`\n}\n\ntype D string\n\nconst (\n\tDM1 D = \"M1\"\n\tDM2 D = \"M2\"\n)\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, the Type field is the discriminator for the union. 
The value of Type will determine which of the union members (M1 or M2) is expected to be present.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14038"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\tTypeMeta int\n\n\t// +k8s:unionDiscriminator\n\tD D `json:\"d\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:unionMember\n\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype D string\n\nconst (\n\tDM1 D = \"M1\"\n\tDM2 D = \"M2\"\n)\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This text code Declarative API Validation | In this example, the Type field is the discriminator for the union. The value of Type will determine which of the union members (M1 or M2) is expected to be present.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, the Type field is the discriminator for the union. The value of Type will determine which of the union members (M1 or M2) is expected to be present.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14039"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\t// +k8s:unionMember(union: \"union1\")\n\t// +k8s:optional\n\tM1 *M1 `json:\"u1m1\"`\n\n\t// +k8s:unionMember(union: \"union1\")", "completion": "\t// +k8s:optional\n\tM2 *M2 `json:\"u1m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14040"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\t// +k8s:unionMember(union: \"union1\")\n\t// +k8s:optional\n\tM1 *M1 `json:\"u1m1\"`\n\n\t// +k8s:unionMember(union: \"union1\")\n\t// 
+k8s:optional\n\tM2 *M2 `json:\"u1m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This text code Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, M1 and M2 are members of the named union union1.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14041"} {"prompt": "Complete this text code:\n\ntype MyStruct struct {\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:zeroOrOneOfMember", "completion": "\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "language": "text", "type": "code-example", "context": "Declarative API Validation | In this example, at most one of M1 or M2 can be set. It is also valid for neither to be set.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_completion_14042"} {"prompt": "Explain this text code:\n\ntype MyStruct struct {\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM1 *M1 `json:\"m1\"`\n\n\t// +k8s:zeroOrOneOfMember\n\t// +k8s:optional\n\tM2 *M2 `json:\"m2\"`\n}\n\ntype M1 struct{}\n\ntype M2 struct{}", "completion": "This text code Declarative API Validation | In this example, at most one of M1 or M2 can be set. It is also valid for neither to be set.", "language": "text", "type": "code-example_explanation", "context": "Declarative API Validation | In this example, at most one of M1 or M2 can be set.
It is also valid for neither to be set.", "source": "https://kubernetes.io/docs/reference/using-api/declarative-validation/", "id": "k8s_00548_code_explain_14043"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14044"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14045"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14046"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14047"} {"prompt": "Complete this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:", "completion": " Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: 
k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "language": "text", "type": "code-example", "context": "Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14048"} {"prompt": "Explain this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:\n Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "completion": "This text code Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "language": "text", "type": "code-example_explanation", "context": "Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. 
There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14049"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14050"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14051"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14052"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14053"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". 
This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14054"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource.
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14055"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14056"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14057"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14058"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14059"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14060"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14061"} {"prompt": "Explain this text code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\ntest-ingress external-lb * 203.0.113.123 80 59s", "completion": "This text code Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "language": "text", "type": "code-example_explanation", "context": "Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14062"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n 
name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14063"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14064"} {"prompt": "Complete this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14065"} {"prompt": "Explain this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "completion": "This text code Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. When it has done so, you can see the address", "language": "text", "type": "code-example_explanation", "context": "Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14066"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14067"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": 
"https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14068"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14069"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14070"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14071"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14072"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14073"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14074"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14075"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "completion": "This text code Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "language": "text", "type": "code-example_explanation", "context": "Ingress | This pops up an editor with the existing configuration in YAML format. 
Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14076"} {"prompt": "Complete this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "text", "type": "code-example", "context": "Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14077"} {"prompt": "Explain this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This text code Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "text", "type": "code-example_explanation", "context": "Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14078"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com", "completion": " /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 
(10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14079"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "completion": "This text code Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "language": "text", "type": "code-example_explanation", "context": "Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14080"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14081"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14082"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14083"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n 
name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14084"} {"prompt": "Complete this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:", "completion": " Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "language": "text", "type": "code-example", "context": "Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14085"} {"prompt": "Explain this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:\n Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "completion": "This text code Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. 
There are three supported path types:", "language": "text", "type": "code-example_explanation", "context": "Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14086"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14087"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14088"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14089"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14090"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: 
networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14091"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14092"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14093"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14094"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14095"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14096"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14097"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14098"} {"prompt": "Explain this text code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\ntest-ingress external-lb * 203.0.113.123 80 59s", "completion": "This text code Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "language": "text", "type": "code-example_explanation", "context": "Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14099"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n 
name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14100"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14101"} {"prompt": "Complete this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14102"} {"prompt": "Explain this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "completion": "This text code Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. When it has done so, you can see the address", "language": "text", "type": "code-example_explanation", "context": "Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14103"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14104"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": 
"https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14105"} {"prompt": "Explain this text code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml", "completion": "This text code Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "text", "type": "code-example_explanation", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14106"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14107"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14108"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14109"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14110"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14111"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14112"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "Ingress | This pops up an editor with the existing configuration in YAML format. 
Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14113"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "completion": "This text code Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "language": "text", "type": "code-example_explanation", "context": "Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14114"} {"prompt": "Complete this yaml code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14115"} {"prompt": "Explain this yaml code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n 
port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This yaml code Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14116"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com", "completion": " /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14117"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "completion": "This text code Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress 
YAML file.", "language": "text", "type": "code-example_explanation", "context": "Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14118"} {"prompt": "Complete this yaml code:\n\nservice/networking/minimal-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example", "completion": " rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14119"} {"prompt": "Explain this yaml code:\n\nservice/networking/minimal-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14120"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14121"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14122"} {"prompt": "Complete this yaml code:\n\nservice/networking/ingress-resource-backend.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket", "completion": " name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14123"} {"prompt": "Explain this yaml code:\n\nservice/networking/ingress-resource-backend.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14124"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: 
k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14125"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14126"} {"prompt": "Complete this yaml code:\n\nservice/networking/ingress-wildcard-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:", "completion": " service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14127"} {"prompt": "Explain this yaml code:\n\nservice/networking/ingress-wildcard-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14128"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14129"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14130"} {"prompt": "Complete this yaml code:\n\nservice/networking/external-lb.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb", "completion": "spec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14131"} {"prompt": "Explain this yaml code:\n\nservice/networking/external-lb.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14132"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", 
"language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14133"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14134"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14135"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14136"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14137"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14138"} {"prompt": "Complete this yaml code:\n\nservice/networking/default-ingressclass.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:", "completion": " app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14139"} {"prompt": "Explain this yaml code:\n\nservice/networking/default-ingressclass.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14140"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14141"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14142"} {"prompt": "Complete this yaml code:\n\nservice/networking/test-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress", "completion": "spec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14143"} {"prompt": "Explain this yaml code:\n\nservice/networking/test-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14144"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14145"} {"prompt": "Explain this yaml code:\n\napiVersion: 
networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14146"} {"prompt": "Complete this yaml code:\n\nservice/networking/simple-fanout-example.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14147"} {"prompt": "Explain this yaml code:\n\nservice/networking/simple-fanout-example.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": 
"https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14148"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14149"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14150"} {"prompt": "Complete this yaml code:\n\nservice/networking/name-virtual-host-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:", "completion": " service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14151"} {"prompt": "Explain this yaml code:\n\nservice/networking/name-virtual-host-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14152"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | If you 
create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14153"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14154"} {"prompt": "Complete this yaml code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n 
number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14155"} {"prompt": "Explain this yaml code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14156"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14157"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14158"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14159"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14160"} {"prompt": "Complete this yaml code:\n\nservice/networking/tls-example-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls", "completion": " rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14161"} {"prompt": "Explain this yaml code:\n\nservice/networking/tls-example-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14162"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14163"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14164"} {"prompt": "Complete this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "text", "type": "code-example", "context": "Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_completion_14165"} {"prompt": "Explain this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This text code Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "text", "type": "code-example_explanation", "context": "Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/services-networking/ingress/", "id": "k8s_00549_code_explain_14166"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // External package:", 
"source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14167"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // External package:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // External package:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14168"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14169"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // On the wire, the JSON will look something like this:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14170"} {"prompt": "Complete this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {", "completion": 
"\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "language": "json", "type": "api-response", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14171"} {"prompt": "Explain this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {\n\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "completion": "This json code ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "language": "json", "type": "api-response_explanation", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14172"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. 
you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14173"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14174"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. 
The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14175"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14176"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14177"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. 
The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14178"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // External package:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14179"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // External package:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // External package:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14180"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": 
"k8s_00552_code_completion_14181"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // On the wire, the JSON will look something like this:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14182"} {"prompt": "Complete this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {", "completion": "\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "language": "json", "type": "api-response", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14183"} {"prompt": "Explain this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {\n\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "completion": "This json code ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "language": "json", "type": "api-response_explanation", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. 
The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14184"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // External package:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14185"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // External package:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // External package:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14186"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14187"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // On the wire, the 
JSON will look something like this:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14188"} {"prompt": "Complete this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {", "completion": "\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "language": "json", "type": "api-response", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14189"} {"prompt": "Explain this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {\n\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "completion": "This json code ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "language": "json", "type": "api-response_explanation", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. 
The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14190"} {"prompt": "Explain this text code:\n\ndevice.capacity[\\<domain>].\\<name>.compareTo(quantity(\\<request quantity>)) >= 0", "completion": "This text code ResourceClaim | When a requestPolicy is defined, the requested amount is adjusted upward to the nearest valid value based on the policy. If the requested amount cannot be adjusted to a valid value—because it exceeds", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | When a requestPolicy is defined, the requested amount is adjusted upward to the nearest valid value based on the policy. If the requested amount cannot be adjusted to a valid value—because it exceeds", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14191"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14192"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. 
you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14193"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14194"} {"prompt": "Explain this text code:\n\ndevice.capacity[\\<domain>].\\<name>.compareTo(quantity(\\<request quantity>)) >= 0", "completion": "This text code ResourceClaim | When a requestPolicy is defined, the requested amount is adjusted upward to the nearest valid value based on the policy. If the requested amount cannot be adjusted to a valid value—because it exceeds", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | When a requestPolicy is defined, the requested amount is adjusted upward to the nearest valid value based on the policy. 
If the requested amount cannot be adjusted to a valid value—because it exceeds", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14195"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14196"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14197"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. 
The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14198"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // External package:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14199"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.Object `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // External package:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // External package:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14200"} {"prompt": "Complete this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}", "completion": "\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "language": "text", "type": "code-example", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": 
"k8s_00552_code_completion_14201"} {"prompt": "Explain this text code:\n\ntype MyAPIObject struct {\n\truntime.TypeMeta `json:\",inline\"`\n\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n}\n\ntype PluginA struct {\n\tAOption string `json:\"aOption\"`\n}", "completion": "This text code ResourceClaim | // On the wire, the JSON will look something like this:", "language": "text", "type": "code-example_explanation", "context": "ResourceClaim | // On the wire, the JSON will look something like this:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14202"} {"prompt": "Complete this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {", "completion": "\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "language": "json", "type": "api-response", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_completion_14203"} {"prompt": "Explain this json code:\n\n{\n\t\"kind\":\"MyAPIObject\",\n\t\"apiVersion\":\"v1\",\n\t\"myPlugin\": {\n\t\t\"kind\":\"PluginA\",\n\t\t\"aOption\":\"foo\",\n\t},\n}", "completion": "This json code ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (usin", "language": "json", "type": "api-response_explanation", "context": "ResourceClaim | So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. 
The next step is to copy (usin", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/resource-claim-v1/", "id": "k8s_00552_code_explain_14204"} {"prompt": "Complete this text code:\n\nversion = 2\n\n[plugins.\"io.containerd.grpc.v1.cri\"]\n# Disable AppArmor\n disable_apparmor = true\n# Ignore an error during setting oom_score_adj\n restrict_oom_score_adj = true\n# Disable hugetlb cgroup v2 controller (because systemd does not support delegating hugetlb controller)\n disable_hugetlb_controller = true", "completion": "\n[plugins.\"io.containerd.grpc.v1.cri\".containerd]\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n snapshotter = \"fuse-overlayfs\"\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n# We use cgroupfs that is delegated by systemd, so we do not use SystemdCgroup driver\n# (unless you run another systemd in the namespace)\n SystemdCgroup = false", "language": "text", "type": "code-example", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. 
The path can be specified with containerd -c /path/to/containerd/config.toml.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14205"} {"prompt": "Explain this text code:\n\nversion = 2\n\n[plugins.\"io.containerd.grpc.v1.cri\"]\n# Disable AppArmor\n disable_apparmor = true\n# Ignore an error during setting oom_score_adj\n restrict_oom_score_adj = true\n# Disable hugetlb cgroup v2 controller (because systemd does not support delegating hugetlb controller)\n disable_hugetlb_controller = true\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd]\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n snapshotter = \"fuse-overlayfs\"\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n# We use cgroupfs that is delegated by systemd, so we do not use SystemdCgroup driver\n# (unless you run another systemd in the namespace)\n SystemdCgroup = false", "completion": "This text code Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. The path can be specified with containerd -c /path/to/containerd/config.toml.", "language": "text", "type": "code-example_explanation", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. 
The path can be specified with containerd -c /path/to/containerd/config.toml.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14206"} {"prompt": "Complete this toml code:\n\n[crio]\n storage_driver = \"overlay\"\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n storage_option = [\"overlay.mount_program=/usr/local/bin/fuse-overlayfs\"]", "completion": "\n[crio.runtime]\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\n cgroup_manager = \"cgroupfs\"", "language": "toml", "type": "code-example", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. The path can be specified with crio --config /path/to/crio/crio.conf.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14207"} {"prompt": "Explain this toml code:\n\n[crio]\n storage_driver = \"overlay\"\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n storage_option = [\"overlay.mount_program=/usr/local/bin/fuse-overlayfs\"]\n\n[crio.runtime]\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\n cgroup_manager = \"cgroupfs\"", "completion": "This toml code Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. The path can be specified with crio --config /path/to/crio/crio.conf.", "language": "toml", "type": "code-example_explanation", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. 
The path can be specified with crio --config /path/to/crio/crio.conf.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14208"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:", "completion": " KubeletInUserNamespace: true\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\ncgroupDriver: \"cgroupfs\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14209"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:\n KubeletInUserNamespace: true\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\ncgroupDriver: \"cgroupfs\"", "completion": "This yaml code Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14210"} {"prompt": "Complete this yaml code:\n\napiVersion: 
kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: \"iptables\" # or \"userspace\"\nconntrack:\n# Skip setting sysctl value \"net.netfilter.nf_conntrack_max\"", "completion": " maxPerCore: 0\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_established\"\n tcpEstablishedTimeout: 0s\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_close\"\n tcpCloseWaitTimeout: 0s", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14211"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: \"iptables\" # or \"userspace\"\nconntrack:\n# Skip setting sysctl value \"net.netfilter.nf_conntrack_max\"\n maxPerCore: 0\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_established\"\n tcpEstablishedTimeout: 0s\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_close\"\n tcpCloseWaitTimeout: 0s", "completion": "This yaml code Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. 
Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14212"} {"prompt": "Complete this toml code:\n\nversion = 2\n\n[plugins.\"io.containerd.grpc.v1.cri\"]\n# Disable AppArmor\n disable_apparmor = true\n# Ignore an error during setting oom_score_adj\n restrict_oom_score_adj = true\n# Disable hugetlb cgroup v2 controller (because systemd does not support delegating hugetlb controller)\n disable_hugetlb_controller = true", "completion": "\n[plugins.\"io.containerd.grpc.v1.cri\".containerd]\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n snapshotter = \"fuse-overlayfs\"\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n# We use cgroupfs that is delegated by systemd, so we do not use SystemdCgroup driver\n# (unless you run another systemd in the namespace)\n SystemdCgroup = false", "language": "toml", "type": "code-example", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. 
The path can be specified with containerd -c /path/to/containerd/config.toml.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14213"} {"prompt": "Explain this toml code:\n\nversion = 2\n\n[plugins.\"io.containerd.grpc.v1.cri\"]\n# Disable AppArmor\n disable_apparmor = true\n# Ignore an error during setting oom_score_adj\n restrict_oom_score_adj = true\n# Disable hugetlb cgroup v2 controller (because systemd does not support delegating hugetlb controller)\n disable_hugetlb_controller = true\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd]\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n snapshotter = \"fuse-overlayfs\"\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n# We use cgroupfs that is delegated by systemd, so we do not use SystemdCgroup driver\n# (unless you run another systemd in the namespace)\n SystemdCgroup = false", "completion": "This toml code Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. The path can be specified with containerd -c /path/to/containerd/config.toml.", "language": "toml", "type": "code-example_explanation", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. 
The path can be specified with containerd -c /path/to/containerd/config.toml.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14214"} {"prompt": "Complete this toml code:\n\n[crio]\n storage_driver = \"overlay\"\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n storage_option = [\"overlay.mount_program=/usr/local/bin/fuse-overlayfs\"]", "completion": "\n[crio.runtime]\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\n cgroup_manager = \"cgroupfs\"", "language": "toml", "type": "code-example", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. The path can be specified with crio --config /path/to/crio/crio.conf.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14215"} {"prompt": "Explain this toml code:\n\n[crio]\n storage_driver = \"overlay\"\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n storage_option = [\"overlay.mount_program=/usr/local/bin/fuse-overlayfs\"]\n\n[crio.runtime]\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\n cgroup_manager = \"cgroupfs\"", "completion": "This toml code Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. The path can be specified with crio --config /path/to/crio/crio.conf.", "language": "toml", "type": "code-example_explanation", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. 
The path can be specified with crio --config /path/to/crio/crio.conf.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14216"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:", "completion": " KubeletInUserNamespace: true\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\ncgroupDriver: \"cgroupfs\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14217"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:\n KubeletInUserNamespace: true\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\ncgroupDriver: \"cgroupfs\"", "completion": "This yaml code Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14218"} {"prompt": "Complete this yaml code:\n\napiVersion: 
kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: \"iptables\" # or \"userspace\"\nconntrack:\n# Skip setting sysctl value \"net.netfilter.nf_conntrack_max\"", "completion": " maxPerCore: 0\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_established\"\n tcpEstablishedTimeout: 0s\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_close\"\n tcpCloseWaitTimeout: 0s", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14219"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: \"iptables\" # or \"userspace\"\nconntrack:\n# Skip setting sysctl value \"net.netfilter.nf_conntrack_max\"\n maxPerCore: 0\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_established\"\n tcpEstablishedTimeout: 0s\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_close\"\n tcpCloseWaitTimeout: 0s", "completion": "This yaml code Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. 
Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14220"} {"prompt": "Complete this text code:\n\nversion = 2\n\n[plugins.\"io.containerd.grpc.v1.cri\"]\n# Disable AppArmor\n disable_apparmor = true\n# Ignore an error during setting oom_score_adj\n restrict_oom_score_adj = true\n# Disable hugetlb cgroup v2 controller (because systemd does not support delegating hugetlb controller)\n disable_hugetlb_controller = true", "completion": "\n[plugins.\"io.containerd.grpc.v1.cri\".containerd]\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n snapshotter = \"fuse-overlayfs\"\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n# We use cgroupfs that is delegated by systemd, so we do not use SystemdCgroup driver\n# (unless you run another systemd in the namespace)\n SystemdCgroup = false", "language": "text", "type": "code-example", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. 
The path can be specified with containerd -c /path/to/containerd/config.toml.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14221"} {"prompt": "Explain this text code:\n\nversion = 2\n\n[plugins.\"io.containerd.grpc.v1.cri\"]\n# Disable AppArmor\n disable_apparmor = true\n# Ignore an error during setting oom_score_adj\n restrict_oom_score_adj = true\n# Disable hugetlb cgroup v2 controller (because systemd does not support delegating hugetlb controller)\n disable_hugetlb_controller = true\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd]\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n snapshotter = \"fuse-overlayfs\"\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n# We use cgroupfs that is delegated by systemd, so we do not use SystemdCgroup driver\n# (unless you run another systemd in the namespace)\n SystemdCgroup = false", "completion": "This text code Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. The path can be specified with containerd -c /path/to/containerd/config.toml.", "language": "text", "type": "code-example_explanation", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/containerd/config.toml. 
The path can be specified with containerd -c /path/to/containerd/config.toml.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14222"} {"prompt": "Complete this toml code:\n\n[crio]\n storage_driver = \"overlay\"\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n storage_option = [\"overlay.mount_program=/usr/local/bin/fuse-overlayfs\"]", "completion": "\n[crio.runtime]\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\n cgroup_manager = \"cgroupfs\"", "language": "toml", "type": "code-example", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. The path can be specified with crio --config /path/to/crio/crio.conf.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14223"} {"prompt": "Explain this toml code:\n\n[crio]\n storage_driver = \"overlay\"\n# Using non-fuse overlayfs is also possible for kernel >= 5.11, but requires SELinux to be disabled\n storage_option = [\"overlay.mount_program=/usr/local/bin/fuse-overlayfs\"]\n\n[crio.runtime]\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\n cgroup_manager = \"cgroupfs\"", "completion": "This toml code Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. The path can be specified with crio --config /path/to/crio/crio.conf.", "language": "toml", "type": "code-example_explanation", "context": "Running Kubernetes Node Components as a Non-root User | The default path of the configuration file is /etc/crio/crio.conf. 
The path can be specified with crio --config /path/to/crio/crio.conf.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14224"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:", "completion": " KubeletInUserNamespace: true\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\ncgroupDriver: \"cgroupfs\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14225"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nfeatureGates:\n KubeletInUserNamespace: true\n# We use cgroupfs that is delegated by systemd, so we do not use \"systemd\" driver\n# (unless you run another systemd in the namespace)\ncgroupDriver: \"cgroupfs\"", "completion": "This yaml code Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Kubernetes Node Components as a Non-root User | When the KubeletInUserNamespace feature gate is enabled, the kubelet ignores errors that may happen during setting the following sysctl values on the node.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14226"} {"prompt": "Complete this yaml code:\n\napiVersion: 
kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: \"iptables\" # or \"userspace\"\nconntrack:\n# Skip setting sysctl value \"net.netfilter.nf_conntrack_max\"", "completion": " maxPerCore: 0\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_established\"\n tcpEstablishedTimeout: 0s\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_close\"\n tcpCloseWaitTimeout: 0s", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_completion_14227"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: \"iptables\" # or \"userspace\"\nconntrack:\n# Skip setting sysctl value \"net.netfilter.nf_conntrack_max\"\n maxPerCore: 0\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_established\"\n tcpEstablishedTimeout: 0s\n# Skip setting \"net.netfilter.nf_conntrack_tcp_timeout_close\"\n tcpCloseWaitTimeout: 0s", "completion": "This yaml code Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Kubernetes Node Components as a Non-root User | Most of \"non-local\" volume drivers such as nfs and iscsi do not work. 
Local volumes like local, hostPath, emptyDir, configMap, secret, and downwardAPI are known to work.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/", "id": "k8s_00553_code_explain_14228"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"", "completion": " imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_completion_14229"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"\n imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "completion": "This yaml code Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": 
"k8s_00554_code_explain_14230"} {"prompt": "Explain this text code:\n\n--eviction-hard=memory.available<500Mi\n--system-reserved=memory=1.5Gi", "completion": "This text code Node-pressure Eviction | In this configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "language": "text", "type": "code-example_explanation", "context": "Node-pressure Eviction | In this configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_explain_14231"} {"prompt": "Explain this text code:\n\nlocal volumes + logs and a writable layer of all containers", "completion": "This text code Node-pressure Eviction | If nodefs triggers evictions, the kubelet sorts pods based on nodefs usage (local volumes + logs of all containers).", "language": "text", "type": "code-example_explanation", "context": "Node-pressure Eviction | If nodefs triggers evictions, the kubelet sorts pods based on nodefs usage (local volumes + logs of all containers).", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_explain_14232"} {"prompt": "Explain this text code:\n\nlocal volumes + logs and a writable layer of all containers", "completion": "This text code Node-pressure Eviction | If imagefs triggers evictions, the kubelet sorts pods based on the storage of images rank, which represents the disk usage of a given image.", "language": "text", "type": "code-example_explanation", "context": "Node-pressure Eviction | If imagefs triggers evictions, the kubelet sorts pods based on the storage of images rank, which represents the disk usage of a given image.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": 
"k8s_00554_code_explain_14233"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"", "completion": " imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_completion_14234"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"\n imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "completion": "This yaml code Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_explain_14235"} {"prompt": "Explain this none code:\n\n--eviction-hard=memory.available<500Mi\n--system-reserved=memory=1.5Gi", "completion": "This none code Node-pressure Eviction | In this 
configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "language": "none", "type": "code-example_explanation", "context": "Node-pressure Eviction | In this configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_explain_14236"} {"prompt": "Explain this text code:\n\n10% of the total memory + the eviction threshold amount", "completion": "This text code Node-pressure Eviction | The node can reach the eviction threshold if a pod is using more than its request, or if the system is using more than 1GiB of memory, which makes the memory.available signal fall below 500MiB and tri", "language": "text", "type": "code-example_explanation", "context": "Node-pressure Eviction | The node can reach the eviction threshold if a pod is using more than its request, or if the system is using more than 1GiB of memory, which makes the memory.available signal fall below 500MiB and tri", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_explain_14237"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"", "completion": " imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": 
"https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_completion_14238"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"\n imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "completion": "This yaml code Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/", "id": "k8s_00554_code_explain_14239"} {"prompt": "Explain this bash code:\n\nkubectl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]", "completion": "This bash code kubectl logs | Get all containers' logs in the pod(s).", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl logs | Get all containers' logs in the pod(s).", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/", "id": "k8s_00555_code_explain_14240"} {"prompt": "Complete this text code:\n\n# Return snapshot logs from pod nginx with only one container\n kubectl logs nginx\n \n # Return snapshot logs from pod nginx, prefixing each line with the source pod and container name\n kubectl logs nginx --prefix\n \n # Return snapshot logs from pod nginx, limiting output to 500 bytes\n kubectl logs nginx --limit-bytes=500\n \n # Return snapshot logs from pod nginx, waiting up 
to 20 seconds for it to start running.\n kubectl logs nginx --pod-running-timeout=20s\n \n # Return snapshot logs from pod nginx with multi containers\n kubectl logs nginx --all-containers=true\n \n # Return snapshot logs from all pods in the deployment nginx\n kubectl logs deployment/nginx --all-pods=true\n \n # Return snapshot logs from all containers in pods defined by label app=nginx\n kubectl logs -l app=nginx --all-containers=true\n \n # Return snapshot logs from all pods defined by label app=nginx, limiting concurrent log requests to 10 pods\n kubectl logs -l app=nginx --max-log-requests=10\n \n # Return snapshot of previous terminated ruby container logs from pod web-1\n kubectl logs -p -c ruby web-1", "completion": " \n # Begin streaming the logs from pod nginx, continuing even if errors occur\n kubectl logs nginx -f --ignore-errors=true\n \n # Begin streaming the logs of the ruby container in pod web-1\n kubectl logs -f -c ruby web-1\n \n # Begin streaming the logs from all containers in pods defined by label app=nginx\n kubectl logs -f -l app=nginx --all-containers=true\n \n # Display only the most recent 20 lines of output in pod nginx\n kubectl logs --tail=20 nginx\n \n # Show all logs from pod nginx written in the last hour\n kubectl logs --since=1h nginx\n \n # Show all logs with timestamps from pod nginx starting from August 30, 2024, at 06:00:00 UTC\n kubectl logs nginx --since-time=2024-08-30T06:00:00Z --timestamps=true\n \n # Show logs from a kubelet with an expired serving certificate\n kubectl logs --insecure-skip-tls-verify-backend nginx\n \n # Return snapshot logs from first container of a job named hello\n kubectl logs job/hello\n \n # Return snapshot logs from container nginx-1 of a deployment named nginx\n kubectl logs deployment/nginx -c nginx-1", "language": "text", "type": "code-example", "context": "kubectl logs | Get all containers' logs in the pod(s).", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/", 
"id": "k8s_00555_code_completion_14241"} {"prompt": "Explain this text code:\n\n# Return snapshot logs from pod nginx with only one container\n kubectl logs nginx\n \n # Return snapshot logs from pod nginx, prefixing each line with the source pod and container name\n kubectl logs nginx --prefix\n \n # Return snapshot logs from pod nginx, limiting output to 500 bytes\n kubectl logs nginx --limit-bytes=500\n \n # Return snapshot logs from pod nginx, waiting up to 20 seconds for it to start running.\n kubectl logs nginx --pod-running-timeout=20s\n \n # Return snapshot logs from pod nginx with multi containers\n kubectl logs nginx --all-containers=true\n \n # Return snapshot logs from all pods in the deployment nginx\n kubectl logs deployment/nginx --all-pods=true\n \n # Return snapshot logs from all containers in pods defined by label app=nginx\n kubectl logs -l app=nginx --all-containers=true\n \n # Return snapshot logs from all pods defined by label app=nginx, limiting concurrent log requests to 10 pods\n kubectl logs -l app=nginx --max-log-requests=10\n \n # Return snapshot of previous terminated ruby container logs from pod web-1\n kubectl logs -p -c ruby web-1\n \n # Begin streaming the logs from pod nginx, continuing even if errors occur\n kubectl logs nginx -f --ignore-errors=true\n \n # Begin streaming the logs of the ruby container in pod web-1\n kubectl logs -f -c ruby web-1\n \n # Begin streaming the logs from all containers in pods defined by label app=nginx\n kubectl logs -f -l app=nginx --all-containers=true\n \n # Display only the most recent 20 lines of output in pod nginx\n kubectl logs --tail=20 nginx\n \n # Show all logs from pod nginx written in the last hour\n kubectl logs --since=1h nginx\n \n # Show all logs with timestamps from pod nginx starting from August 30, 2024, at 06:00:00 UTC\n kubectl logs nginx --since-time=2024-08-30T06:00:00Z --timestamps=true\n \n # Show logs from a kubelet with an expired serving certificate\n kubectl logs 
--insecure-skip-tls-verify-backend nginx\n \n # Return snapshot logs from first container of a job named hello\n kubectl logs job/hello\n \n # Return snapshot logs from container nginx-1 of a deployment named nginx\n kubectl logs deployment/nginx -c nginx-1", "completion": "This text code kubectl logs | Get all containers' logs in the pod(s).", "language": "text", "type": "code-example_explanation", "context": "kubectl logs | Get all containers' logs in the pod(s).", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/", "id": "k8s_00555_code_explain_14242"} {"prompt": "Explain this bash code:\n\nkubectl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]", "completion": "This bash code kubectl logs | Get all containers' logs in the pod(s).", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl logs | Get all containers' logs in the pod(s).", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/", "id": "k8s_00555_code_explain_14243"} {"prompt": "Complete this text code:\n\n# Return snapshot logs from pod nginx with only one container\n kubectl logs nginx\n \n # Return snapshot logs from pod nginx, prefixing each line with the source pod and container name\n kubectl logs nginx --prefix\n \n # Return snapshot logs from pod nginx, limiting output to 500 bytes\n kubectl logs nginx --limit-bytes=500\n \n # Return snapshot logs from pod nginx, waiting up to 20 seconds for it to start running.\n kubectl logs nginx --pod-running-timeout=20s\n \n # Return snapshot logs from pod nginx with multi containers\n kubectl logs nginx --all-containers=true\n \n # Return snapshot logs from all pods in the deployment nginx\n kubectl logs deployment/nginx --all-pods=true\n \n # Return snapshot logs from all containers in pods defined by label app=nginx\n kubectl logs -l app=nginx --all-containers=true\n \n # Return snapshot logs from all pods defined by label app=nginx, limiting concurrent log requests to 10 
pods\n kubectl logs -l app=nginx --max-log-requests=10\n \n # Return snapshot of previous terminated ruby container logs from pod web-1\n kubectl logs -p -c ruby web-1", "completion": " \n # Begin streaming the logs from pod nginx, continuing even if errors occur\n kubectl logs nginx -f --ignore-errors=true\n \n # Begin streaming the logs of the ruby container in pod web-1\n kubectl logs -f -c ruby web-1\n \n # Begin streaming the logs from all containers in pods defined by label app=nginx\n kubectl logs -f -l app=nginx --all-containers=true\n \n # Display only the most recent 20 lines of output in pod nginx\n kubectl logs --tail=20 nginx\n \n # Show all logs from pod nginx written in the last hour\n kubectl logs --since=1h nginx\n \n # Show all logs with timestamps from pod nginx starting from August 30, 2024, at 06:00:00 UTC\n kubectl logs nginx --since-time=2024-08-30T06:00:00Z --timestamps=true\n \n # Show logs from a kubelet with an expired serving certificate\n kubectl logs --insecure-skip-tls-verify-backend nginx\n \n # Return snapshot logs from first container of a job named hello\n kubectl logs job/hello\n \n # Return snapshot logs from container nginx-1 of a deployment named nginx\n kubectl logs deployment/nginx -c nginx-1", "language": "text", "type": "code-example", "context": "kubectl logs | Get all containers' logs in the pod(s).", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/", "id": "k8s_00555_code_completion_14244"} {"prompt": "Explain this text code:\n\n# Return snapshot logs from pod nginx with only one container\n kubectl logs nginx\n \n # Return snapshot logs from pod nginx, prefixing each line with the source pod and container name\n kubectl logs nginx --prefix\n \n # Return snapshot logs from pod nginx, limiting output to 500 bytes\n kubectl logs nginx --limit-bytes=500\n \n # Return snapshot logs from pod nginx, waiting up to 20 seconds for it to start running.\n kubectl logs nginx 
--pod-running-timeout=20s\n \n # Return snapshot logs from pod nginx with multi containers\n kubectl logs nginx --all-containers=true\n \n # Return snapshot logs from all pods in the deployment nginx\n kubectl logs deployment/nginx --all-pods=true\n \n # Return snapshot logs from all containers in pods defined by label app=nginx\n kubectl logs -l app=nginx --all-containers=true\n \n # Return snapshot logs from all pods defined by label app=nginx, limiting concurrent log requests to 10 pods\n kubectl logs -l app=nginx --max-log-requests=10\n \n # Return snapshot of previous terminated ruby container logs from pod web-1\n kubectl logs -p -c ruby web-1\n \n # Begin streaming the logs from pod nginx, continuing even if errors occur\n kubectl logs nginx -f --ignore-errors=true\n \n # Begin streaming the logs of the ruby container in pod web-1\n kubectl logs -f -c ruby web-1\n \n # Begin streaming the logs from all containers in pods defined by label app=nginx\n kubectl logs -f -l app=nginx --all-containers=true\n \n # Display only the most recent 20 lines of output in pod nginx\n kubectl logs --tail=20 nginx\n \n # Show all logs from pod nginx written in the last hour\n kubectl logs --since=1h nginx\n \n # Show all logs with timestamps from pod nginx starting from August 30, 2024, at 06:00:00 UTC\n kubectl logs nginx --since-time=2024-08-30T06:00:00Z --timestamps=true\n \n # Show logs from a kubelet with an expired serving certificate\n kubectl logs --insecure-skip-tls-verify-backend nginx\n \n # Return snapshot logs from first container of a job named hello\n kubectl logs job/hello\n \n # Return snapshot logs from container nginx-1 of a deployment named nginx\n kubectl logs deployment/nginx -c nginx-1", "completion": "This text code kubectl logs | Get all containers' logs in the pod(s).", "language": "text", "type": "code-example_explanation", "context": "kubectl logs | Get all containers' logs in the pod(s).", "source": 
"https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/", "id": "k8s_00555_code_explain_14245"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14246"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | You will 
need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "language": "yaml", "type": "service-definition_explanation", "context": "StatefulSet Basics | You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14247"} {"prompt": "Explain this text code:\n\n# use this terminal to run commands that specify --watch\n# end this watch when you are asked to start a new watch\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In the second terminal, use kubectl apply to create the headless Service and StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In the second terminal, use kubectl apply to create the headless Service and StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14248"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This bash code StatefulSet Basics | The command above creates two Pods, each running an NGINX webserver. Get the nginx Service...", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | The command above creates two Pods, each running an NGINX webserver. 
Get the nginx Service...", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14249"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nnginx ClusterIP None <none> 80/TCP 12s", "completion": "This text code StatefulSet Basics | ...then get the web StatefulSet, to verify that both were created successfully:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | ...then get the web StatefulSet, to verify that both were created successfully:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14250"} {"prompt": "Explain this text code:\n\n# Do not start a new watch;\n# this should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14251"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s", "completion": "web-0 1/1 Running 0 19s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": 
"k8s_00556_code_completion_14252"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 19s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "completion": "This text code StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14253"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 1m\nweb-1 1/1 Running 0 1m", "completion": "This text code StatefulSet Basics | As mentioned in the StatefulSets concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet con", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As mentioned in the StatefulSets concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet con", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14254"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec \"web-$i\" -- sh -c 'hostname'; done", "completion": "This text code StatefulSet Basics | Use kubectl run to execute a container that provides the nslookup command from the dnsutils package. 
Using nslookup on the Pods' hostnames, you can examine their in-cluster DNS addresses:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl run to execute a container that provides the nslookup command from the dnsutils package. Using nslookup on the Pods' hostnames, you can examine their in-cluster DNS addresses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14255"} {"prompt": "Explain this bash code:\n\nkubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm", "completion": "This bash code StatefulSet Basics | which starts a new shell. In that new shell, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | which starts a new shell. In that new shell, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14256"} {"prompt": "Explain this text code:\n\n# Run this in the dns-test container shell\nnslookup web-0.nginx", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14257"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.6\n", "completion": "nslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.6", "language": "text", "type": "code-example", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14258"} 
{"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.6\n\nnslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.6", "completion": "This text code StatefulSet Basics | (and now exit the container shell: exit)", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14259"} {"prompt": "Explain this text code:\n\n# Start a new watch\n# End this watch when you've seen that the delete is finished\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In a second terminal, use kubectl delete to delete all the Pods in the StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, use kubectl delete to delete all the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14260"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. First, view the Pods' hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. 
First, view the Pods' hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14261"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s", "completion": "web-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. First, view the Pods' hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14262"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "completion": "This text code StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. First, view the Pods' hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. 
First, view the Pods' hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14263"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done", "completion": "This text code StatefulSet Basics | then, run:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | then, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14264"} {"prompt": "Explain this bash code:\n\nkubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm", "completion": "This bash code StatefulSet Basics | which starts a new shell. In that new shell, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | which starts a new shell. In that new shell, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14265"} {"prompt": "Explain this text code:\n\n# Run this in the dns-test container shell\nnslookup web-0.nginx", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14266"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.7\n", "completion": "nslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.8", "language": "text", "type": "code-example", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14267"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.7\n\nnslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.8", "completion": "This text code StatefulSet Basics | (and now exit the container shell: exit)", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14268"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\nwww-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 48s\nwww-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 48s", "completion": "This text code StatefulSet Basics | The StatefulSet controller created two PersistentVolumeClaims that are bound to two PersistentVolumes.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet controller created two PersistentVolumeClaims that are bound to two PersistentVolumes.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14269"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec \"web-$i\" -- sh -c 'echo \"$(hostname)\" > /usr/share/nginx/html/index.html'; done\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | If you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the volumeMounts (due to a bug when using hostPath volumes), by", "language": 
"text", "type": "code-example_explanation", "context": "StatefulSet Basics | If you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the volumeMounts (due to a bug when using hostPath volumes), by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14270"} {"prompt": "Explain this text code:\n\n# End this watch when you've reached the end of the section.\n# At the start of \"Scaling a StatefulSet\" you'll start a new watch.\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In a second terminal, delete all of the StatefulSet's Pods:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, delete all of the StatefulSet's Pods:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14271"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14272"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s", "completion": "web-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14273"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "completion": "This text code StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14274"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | Even though web-0 and web-1 were rescheduled, they continue to serve their hostnames because the PersistentVolumes associated with their PersistentVolumeClaims are remounted to their volumeMounts. No", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though web-0 and web-1 were rescheduled, they continue to serve their hostnames because the PersistentVolumes associated with their PersistentVolumeClaims are remounted to their volumeMounts. 
No", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14275"} {"prompt": "Complete this text code:\n\n# If you already have a watch running, you can continue using that.\n# Otherwise, start one.", "completion": "# End this watch when there are 5 healthy Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "language": "text", "type": "code-example", "context": "StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14276"} {"prompt": "Explain this text code:\n\n# If you already have a watch running, you can continue using that.\n# Otherwise, start one.\n# End this watch when there are 5 healthy Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14277"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet controller scaled the number of replicas. 
As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14278"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2h\nweb-1 1/1 Running 0 2h\nNAME READY STATUS RESTARTS AGE\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s", "completion": "web-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 ContainerCreating 0 0s\nweb-3 1/1 Running 0 18s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 ContainerCreating 0 0s\nweb-4 1/1 Running 0 19s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14279"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2h\nweb-1 1/1 Running 0 2h\nNAME READY STATUS RESTARTS AGE\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 ContainerCreating 0 0s\nweb-3 1/1 Running 0 18s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 ContainerCreating 0 0s\nweb-4 1/1 Running 0 19s", "completion": "This text code StatefulSet Basics | The StatefulSet controller scaled the number of replicas. 
As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14280"} {"prompt": "Explain this text code:\n\n# End this watch when there are only 3 Pods for the StatefulSet\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In another terminal, use kubectl patch to scale the StatefulSet back down to three replicas:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, use kubectl patch to scale the StatefulSet back down to three replicas:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14281"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14282"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 
Running 0 3h\nweb-1 1/1 Running 0 3h\nweb-2 1/1 Running 0 55s\nweb-3 1/1 Running 0 36s\nweb-4 0/1 ContainerCreating 0 18s", "completion": "NAME READY STATUS RESTARTS AGE\nweb-4 1/1 Running 0 19s\nweb-4 1/1 Terminating 0 24s\nweb-4 1/1 Terminating 0 24s\nweb-3 1/1 Terminating 0 42s\nweb-3 1/1 Terminating 0 42s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14283"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 3h\nweb-1 1/1 Running 0 3h\nweb-2 1/1 Running 0 55s\nweb-3 1/1 Running 0 36s\nweb-4 0/1 ContainerCreating 0 18s\nNAME READY STATUS RESTARTS AGE\nweb-4 1/1 Running 0 19s\nweb-4 1/1 Terminating 0 24s\nweb-4 1/1 Terminating 0 24s\nweb-3 1/1 Terminating 0 42s\nweb-3 1/1 Terminating 0 42s", "completion": "This text code StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14284"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\nwww-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 13h", "completion": "www-web-2 Bound 
pvc-e1125b27-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-3 Bound pvc-e1176df6-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO 13h", "language": "text", "type": "code-example", "context": "StatefulSet Basics | There are still five PersistentVolumeClaims and five PersistentVolumes. When exploring a Pod's stable storage, you saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted w", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14285"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\nwww-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-2 Bound pvc-e1125b27-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-3 Bound pvc-e1176df6-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO 13h", "completion": "This text code StatefulSet Basics | There are still five PersistentVolumeClaims and five PersistentVolumes. When exploring a Pod's stable storage, you saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted w", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | There are still five PersistentVolumeClaims and five PersistentVolumes. 
When exploring a Pod's stable storage, you saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted w", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14286"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\":\"registry.k8s.io/nginx-slim:0.24\"}]'", "completion": "This bash code StatefulSet Basics | In another terminal, watch the Pods in the StatefulSet:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | In another terminal, watch the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14287"} {"prompt": "Complete this text code:\n\n# End this watch when the rollout is complete\n#", "completion": "# If you're not sure, leave it running one more minute\nkubectl get pod -l app=nginx --watch", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14288"} {"prompt": "Explain this text code:\n\n# End this watch when the rollout is complete\n#\n# If you're not sure, leave it running one more minute\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14289"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 7m\nweb-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 8m\nweb-2 1/1 Terminating 0 8m\nweb-2 1/1 
Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s\nweb-1 1/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m", "completion": "web-1 0/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 6s\nweb-0 1/1 Terminating 0 7m\nweb-0 1/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 10s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod. 
No", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14290"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 7m\nweb-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 8m\nweb-2 1/1 Terminating 0 8m\nweb-2 1/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s\nweb-1 1/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 6s\nweb-0 1/1 Terminating 0 7m\nweb-0 1/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 10s", "completion": "This text code StatefulSet Basics | The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod. No", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod. 
No", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14291"} {"prompt": "Explain this text code:\n\nfor p in 0 1 2; do kubectl get pod \"web-$p\" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done", "completion": "This text code StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14292"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/nginx-slim:0.24\nregistry.k8s.io/nginx-slim:0.24\nregistry.k8s.io/nginx-slim:0.24", "completion": "This text code StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14293"} {"prompt": "Complete this text code:\n\n# The value of \"partition\" determines which ordinals a change applies to\n# Make sure to use a number bigger than the last ordinal for the", "completion": "# StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":3}}}}'", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14294"} {"prompt": "Explain this text code:\n\n# The value of 
\"partition\" determines which ordinals a change applies to\n# Make sure to use a number bigger than the last ordinal for the\n# StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":3}}}}'", "completion": "This text code StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14295"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\":\"registry.k8s.io/nginx-slim:0.21\"}]'", "completion": "This bash code StatefulSet Basics | Delete a Pod in the StatefulSet:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Delete a Pod in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14296"} {"prompt": "Explain this text code:\n\n# End the watch when you see that web-2 is healthy\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | Get the Pod's container image:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14297"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m", "completion": "web-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "language": "text", "type": "code-example", "context": 
"StatefulSet Basics | Get the Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14298"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m\nweb-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "completion": "This text code StatefulSet Basics | Get the Pod's container image:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14299"} {"prompt": "Explain this bash code:\n\nkubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This bash code StatefulSet Basics | Notice that, even though the update strategy is RollingUpdate the StatefulSet restored the Pod with the original container image. This is because the ordinal of the Pod is less than the partition spec", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Notice that, even though the update strategy is RollingUpdate the StatefulSet restored the Pod with the original container image. This is because the ordinal of the Pod is less than the partition spec", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14300"} {"prompt": "Explain this text code:\n\n# The value of \"partition\" should match the highest existing ordinal for\n# the StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":2}}}}'", "completion": "This text code StatefulSet Basics | The control plane triggers replacement for web-2 (implemented by a graceful delete followed by creating a new Pod once the deletion is complete). 
Wait for the new web-2 Pod to be Running and Ready.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane triggers replacement for web-2 (implemented by a graceful delete followed by creating a new Pod once the deletion is complete). Wait for the new web-2 Pod to be Running and Ready.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14301"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | Get the Pod's container:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14302"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m", "completion": "web-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Get the Pod's container:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14303"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m\nweb-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "completion": "This text code StatefulSet Basics | Get the Pod's container:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14304"} {"prompt": "Explain this bash code:\n\nkubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This bash 
code StatefulSet Basics | When you changed the partition, the StatefulSet controller automatically updated the web-2 Pod because the Pod's ordinal was greater than or equal to the partition.", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | When you changed the partition, the StatefulSet controller automatically updated the web-2 Pod because the Pod's ordinal was greater than or equal to the partition.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14305"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14306"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-2 1/1 Running 0 2m\nweb-1 0/1 Terminating 0 6m", "completion": "web-1 0/1 Terminating 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Get the web-1 Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14307"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-2 1/1 Running 0 2m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "completion": "This text code 
StatefulSet Basics | Get the web-1 Pod's container image:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the web-1 Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14308"} {"prompt": "Explain this bash code:\n\nkubectl get pod web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This bash code StatefulSet Basics | web-1 was restored to its original configuration because the Pod's ordinal was less than the partition. When a partition is specified, all Pods with an ordinal that is greater than or equal to the par", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | web-1 was restored to its original configuration because the Pod's ordinal was less than the partition. When a partition is specified, all Pods with an ordinal that is greater than or equal to the par", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14309"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":0}}}}'", "completion": "This bash code StatefulSet Basics | Wait for all of the Pods in the StatefulSet to become Running and Ready.", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Wait for all of the Pods in the StatefulSet to become Running and Ready.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14310"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics 
| The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14311"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 3m\nweb-1 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 2m\nweb-1 1/1 Running 0 18s\nweb-0 1/1 Terminating 0 3m\nweb-0 1/1 Terminating 0 3m", "completion": "web-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 3s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Get the container image details for the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14312"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 3m\nweb-1 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 2m\nweb-1 1/1 Running 0 18s\nweb-0 1/1 Terminating 0 3m\nweb-0 1/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 3s", "completion": "This text code StatefulSet Basics | Get the container image details for the Pods in the StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the container image details for the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14313"} {"prompt": "Explain this text code:\n\nfor p in 0 1 2; do kubectl get pod \"web-$p\" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done", "completion": "This text code StatefulSet Basics | By moving the 
partition to 0, you allowed the StatefulSet to continue the update process.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14314"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/nginx-slim:0.21\nregistry.k8s.io/nginx-slim:0.21\nregistry.k8s.io/nginx-slim:0.21", "completion": "This text code StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14315"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"OnDelete\", \"rollingUpdate\": null}}}'", "completion": "This bash code StatefulSet Basics | When you select this update strategy, the StatefulSet controller does not automatically update Pods when a modification is made to the StatefulSet's .spec.template field. You need to manage the rollou", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | When you select this update strategy, the StatefulSet controller does not automatically update Pods when a modification is made to the StatefulSet's .spec.template field. 
You need to manage the rollou", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14316"} {"prompt": "Explain this text code:\n\n# End this watch when there are no Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Use kubectl delete to delete the StatefulSet. Make sure to supply the --cascade=orphan parameter to the command. This parameter tells Kubernetes to only delete the StatefulSet, and to not delete any o", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl delete to delete the StatefulSet. Make sure to supply the --cascade=orphan parameter to the command. This parameter tells Kubernetes to only delete the StatefulSet, and to not delete any o", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14317"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m", "completion": "web-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 5m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Even though web has been deleted, all of the Pods are still Running and Ready. Delete web-0:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14318"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m\nweb-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 5m", "completion": "This text code StatefulSet Basics | Even though web has been deleted, all of the Pods are still Running and Ready. Delete web-0:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though web has been deleted, all of the Pods are still Running and Ready. 
Delete web-0:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14319"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-1 1/1 Running 0 10m\nweb-2 1/1 Running 0 7m", "completion": "This text code StatefulSet Basics | As the web StatefulSet has been deleted, web-0 has not been relaunched.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As the web StatefulSet has been deleted, web-0 has not been relaunched.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14320"} {"prompt": "Explain this text code:\n\n# Leave this watch running until the next time you start a watch\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In a second terminal, recreate the StatefulSet. Note that, unless you deleted the nginx Service (which you should not have), you will see an error indicating that the Service already exists.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, recreate the StatefulSet. Note that, unless you deleted the nginx Service (which you should not have), you will see an error indicating that the Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14321"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This bash code StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Ignore the error. 
It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14322"} {"prompt": "Explain this text code:\n\nstatefulset.apps/web created\nservice/nginx unchanged", "completion": "This text code StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14323"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. Since you recreated the", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. 
Since you recreated the", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14324"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-1 1/1 Running 0 16m\nweb-2 1/1 Running 0 2m\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s", "completion": "web-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 18s\nweb-2 1/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. Since you recreated the", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14325"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-1 1/1 Running 0 16m\nweb-2 1/1 Running 0 2m\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 18s\nweb-2 1/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m", "completion": "This text code StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. Since you recreated the", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. 
Since you recreated the", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14326"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | Even though you deleted both the StatefulSet and the web-0 Pod, it still serves the hostname originally entered into its index.html file. This is because the StatefulSet never deletes the PersistentVo", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though you deleted both the StatefulSet and the web-0 Pod, it still serves the hostname originally entered into its index.html file. This is because the StatefulSet never deletes the PersistentVo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14327"} {"prompt": "Explain this text code:\n\n# Leave this running until the next page section\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In another terminal, delete the StatefulSet again. This time, omit the --cascade=orphan parameter.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, delete the StatefulSet again. This time, omit the --cascade=orphan parameter.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14328"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. 
Before terminating a Pod, the StatefulSet controller waits fo", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14329"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 11m\nweb-1 1/1 Running 0 27m\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Terminating 0 12m\nweb-1 1/1 Terminating 0 29m", "completion": "web-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14330"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 11m\nweb-1 1/1 Running 0 27m\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Terminating 0 12m\nweb-1 1/1 Terminating 0 29m\nweb-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m", "completion": "This text code StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. 
Before terminating a Pod, the StatefulSet controller waits fo", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14331"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This bash code StatefulSet Basics | When all of the StatefulSet's Pods transition to Running and Ready, retrieve the contents of their index.html files:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | When all of the StatefulSet's Pods transition to Running and Ready, retrieve the contents of their index.html files:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14332"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and web-0 and web-1 continue to serve their hostnames.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and web-0 and web-1 continue to serve their hostnames.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14333"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n 
ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14334"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "language": "yaml", "type": 
"service-definition_explanation", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14335"} {"prompt": "Explain this text code:\n\n# Leave this watch running until the end of the section\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | In another terminal, reconfigure the StatefulSet for Parallel Pod management:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, reconfigure the StatefulSet for Parallel Pod management:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14336"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web-parallel.yaml", "completion": "This bash code StatefulSet Basics | Keep the terminal open where you're running the watch. In another terminal window, scale the StatefulSet:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Keep the terminal open where you're running the watch. 
In another terminal window, scale the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14337"} {"prompt": "Complete this text code:\n\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 7s\nweb-3 0/1 ContainerCreating 0 7s\nweb-2 0/1 Pending 0 0s", "completion": "web-4 0/1 Pending 0 0s\nweb-2 1/1 Running 0 8s\nweb-4 0/1 ContainerCreating 0 4s\nweb-3 1/1 Running 0 26s\nweb-4 1/1 Running 0 2s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The StatefulSet launched three new Pods, and it did not wait for the first to become Running and Ready prior to launching the second and third Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14338"} {"prompt": "Explain this text code:\n\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 7s\nweb-3 0/1 ContainerCreating 0 7s\nweb-2 0/1 Pending 0 0s\nweb-4 0/1 Pending 0 0s\nweb-2 1/1 Running 0 8s\nweb-4 0/1 ContainerCreating 0 4s\nweb-3 1/1 Running 0 26s\nweb-4 1/1 Running 0 2s", "completion": "This text code StatefulSet Basics | The StatefulSet launched three new Pods, and it did not wait for the first to become Running and Ready prior to launching the second and third Pods.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet launched three new Pods, and it did not wait for the first to become Running and Ready prior to launching the second and third Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14339"} {"prompt": "Explain this bash code:\n\nkubectl delete sts web\n# sts is an abbreviation for statefulset", "completion": "This bash code StatefulSet Basics | You can watch kubectl get to see those Pods being deleted.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "StatefulSet Basics | You can watch kubectl get to see those Pods being deleted.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14340"} {"prompt": "Explain this text code:\n\n# end the watch when you've seen what you need to\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14341"} {"prompt": "Complete this text code:\n\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-1 1/1 Terminating 0 44m\nweb-0 1/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m", "completion": "web-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": 
"k8s_00556_code_completion_14342"} {"prompt": "Explain this text code:\n\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-1 1/1 Terminating 0 44m\nweb-0 1/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m", "completion": "This text code StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14343"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\nwww-web-0 Bound pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO standard 25m\nwww-web-1 Bound pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO standard 24m", "completion": "www-web-2 Bound pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO standard 15m\nwww-web-3 Bound pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO standard 15m\nwww-web-4 Bound pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO standard 14m", "language": "text", "type": "code-example", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": 
"k8s_00556_code_completion_14344"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\nwww-web-0 Bound pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO standard 25m\nwww-web-1 Bound pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO standard 24m\nwww-web-2 Bound pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO standard 15m\nwww-web-3 Bound pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO standard 15m\nwww-web-4 Bound pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO standard 14m", "completion": "This text code StatefulSet Basics |", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14345"} {"prompt": "Complete this text code:\n\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO Delete Bound default/www-web-3 standard 15m\npvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO Delete Bound default/www-web-0 standard 25m", "completion": "pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO Delete Bound default/www-web-4 standard 14m\npvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO Delete Bound default/www-web-1 standard 24m\npvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO Delete Bound default/www-web-2 standard 15m", "language": "text", "type": "code-example", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14346"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO Delete Bound default/www-web-3 standard 15m\npvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO Delete Bound default/www-web-0 standard 25m\npvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO Delete Bound 
default/www-web-4 standard 14m\npvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO Delete Bound default/www-web-1 standard 24m\npvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO Delete Bound default/www-web-2 standard 15m", "completion": "This text code StatefulSet Basics |", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14347"} {"prompt": "Explain this bash code:\n\nkubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4", "completion": "This bash code StatefulSet Basics |", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14348"} {"prompt": "Complete this text code:\n\npersistentvolumeclaim \"www-web-0\" deleted\npersistentvolumeclaim \"www-web-1\" deleted", "completion": "persistentvolumeclaim \"www-web-2\" deleted\npersistentvolumeclaim \"www-web-3\" deleted\npersistentvolumeclaim \"www-web-4\" deleted", "language": "text", "type": "code-example", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14349"} {"prompt": "Explain this text code:\n\npersistentvolumeclaim \"www-web-0\" deleted\npersistentvolumeclaim \"www-web-1\" deleted\npersistentvolumeclaim \"www-web-2\" deleted\npersistentvolumeclaim \"www-web-3\" deleted\npersistentvolumeclaim \"www-web-4\" deleted", "completion": "This text code StatefulSet Basics |", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14350"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: 
nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14351"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | You will need to use at least two terminal windows. 
In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "language": "yaml", "type": "service-definition_explanation", "context": "StatefulSet Basics | You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14352"} {"prompt": "Explain this shell code:\n\n# use this terminal to run commands that specify --watch\n# end this watch when you are asked to start a new watch\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | In the second terminal, use kubectl apply to create the headless Service and StatefulSet:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In the second terminal, use kubectl apply to create the headless Service and StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14353"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This shell code StatefulSet Basics | The command above creates two Pods, each running an NGINX webserver. Get the nginx Service...", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The command above creates two Pods, each running an NGINX webserver. 
Get the nginx Service...", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14354"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nnginx ClusterIP None <none> 80/TCP 12s", "completion": "This text code StatefulSet Basics | ...then get the web StatefulSet, to verify that both were created successfully:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | ...then get the web StatefulSet, to verify that both were created successfully:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14355"} {"prompt": "Explain this shell code:\n\n# Do not start a new watch;\n# this should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14356"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s", "completion": "web-0 1/1 Running 0 19s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": 
"k8s_00556_code_completion_14357"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 19s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "completion": "This text code StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14358"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 1m\nweb-1 1/1 Running 0 1m", "completion": "This text code StatefulSet Basics | As mentioned in the StatefulSets concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet con", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As mentioned in the StatefulSets concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet con", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14359"} {"prompt": "Explain this shell code:\n\nfor i in 0 1; do kubectl exec \"web-$i\" -- sh -c 'hostname'; done", "completion": "This shell code StatefulSet Basics | Use kubectl run to execute a container that provides the nslookup command from the dnsutils package. 
Using nslookup on the Pods' hostnames, you can examine their in-cluster DNS addresses:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl run to execute a container that provides the nslookup command from the dnsutils package. Using nslookup on the Pods' hostnames, you can examine their in-cluster DNS addresses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14360"} {"prompt": "Explain this shell code:\n\nkubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm", "completion": "This shell code StatefulSet Basics | which starts a new shell. In that new shell, run:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | which starts a new shell. In that new shell, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14361"} {"prompt": "Explain this shell code:\n\n# Run this in the dns-test container shell\nnslookup web-0.nginx", "completion": "This shell code StatefulSet Basics | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14362"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.6\n", "completion": "nslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.6", "language": "text", "type": "code-example", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": 
"k8s_00556_code_completion_14363"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.6\n\nnslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.6", "completion": "This text code StatefulSet Basics | (and now exit the container shell: exit)", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14364"} {"prompt": "Explain this shell code:\n\n# Start a new watch\n# End this watch when you've seen that the delete is finished\nkubectl get pod --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | In a second terminal, use kubectl delete to delete all the Pods in the StatefulSet:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, use kubectl delete to delete all the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14365"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. First, view the Pods' hostnames:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. 
First, view the Pods' hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14366"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s", "completion": "web-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. First, view the Pods' hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14367"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "completion": "This text code StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. First, view the Pods' hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. 
First, view the Pods' hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14368"} {"prompt": "Explain this shell code:\n\nfor i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done", "completion": "This shell code StatefulSet Basics | then, run:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | then, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14369"} {"prompt": "Explain this shell code:\n\nkubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm", "completion": "This shell code StatefulSet Basics | which starts a new shell.In that new shell, run:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | which starts a new shell.In that new shell, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14370"} {"prompt": "Explain this shell code:\n\n# Run this in the dns-test container shell\nnslookup web-0.nginx", "completion": "This shell code StatefulSet Basics | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14371"} {"prompt": "Complete this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.7\n", "completion": "nslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.8", "language": "text", "type": "code-example", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14372"} {"prompt": "Explain this text code:\n\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-0.nginx\nAddress 1: 10.244.1.7\n\nnslookup web-1.nginx\nServer: 10.0.0.10\nAddress 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: web-1.nginx\nAddress 1: 10.244.2.8", "completion": "This text code StatefulSet Basics | (and now exit the container shell: exit)", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | (and now exit the container shell: exit)", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14373"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\nwww-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 48s\nwww-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 48s", "completion": "This text code StatefulSet Basics | The StatefulSet controller created two PersistentVolumeClaims that are bound to two PersistentVolumes.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet controller created two PersistentVolumeClaims that are bound to two PersistentVolumes.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14374"} {"prompt": "Explain this shell code:\n\nfor i in 0 1; do kubectl exec \"web-$i\" -- sh -c 'echo \"$(hostname)\" > /usr/share/nginx/html/index.html'; done\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This shell code StatefulSet Basics | If you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the volumeMounts (due to a bug when using hostPath volumes), by", "language": 
"shell", "type": "code-example_explanation", "context": "StatefulSet Basics | If you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the volumeMounts (due to a bug when using hostPath volumes), by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14375"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec web-$i -- chmod 755 /usr/share/nginx/html; done", "completion": "This text code StatefulSet Basics | before retrying the curl command above.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | before retrying the curl command above.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14376"} {"prompt": "Explain this shell code:\n\n# End this watch when you've reached the end of the section.\n# At the start of \"Scaling a StatefulSet\" you'll start a new watch.\nkubectl get pod --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | In a second terminal, delete all of the StatefulSet's Pods:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, delete all of the StatefulSet's Pods:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14377"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14378"} 
{"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s", "completion": "web-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14379"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 ContainerCreating 0 0s\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 34s", "completion": "This text code StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14380"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | Even though web-0 and web-1 were rescheduled, they continue to serve their hostnames because the PersistentVolumes associated with their PersistentVolumeClaims are remounted to their volumeMounts. No", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though web-0 and web-1 were rescheduled, they continue to serve their hostnames because the PersistentVolumes associated with their PersistentVolumeClaims are remounted to their volumeMounts. 
No", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14381"} {"prompt": "Complete this shell code:\n\n# If you already have a watch running, you can continue using that.\n# Otherwise, start one.", "completion": "# End this watch when there are 5 healthy Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "language": "shell", "type": "code-example", "context": "StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14382"} {"prompt": "Explain this shell code:\n\n# If you already have a watch running, you can continue using that.\n# Otherwise, start one.\n# End this watch when there are 5 healthy Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14383"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet controller scaled the number of replicas. 
As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14384"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2h\nweb-1 1/1 Running 0 2h\nNAME READY STATUS RESTARTS AGE\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s", "completion": "web-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 ContainerCreating 0 0s\nweb-3 1/1 Running 0 18s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 ContainerCreating 0 0s\nweb-4 1/1 Running 0 19s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14385"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2h\nweb-1 1/1 Running 0 2h\nNAME READY STATUS RESTARTS AGE\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 ContainerCreating 0 0s\nweb-3 1/1 Running 0 18s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 Pending 0 0s\nweb-4 0/1 ContainerCreating 0 0s\nweb-4 1/1 Running 0 19s", "completion": "This text code StatefulSet Basics | The StatefulSet controller scaled the number of replicas. 
As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14386"} {"prompt": "Explain this shell code:\n\n# End this watch when there are only 3 Pods for the StatefulSet\nkubectl get pod --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | In another terminal, use kubectl patch to scale the StatefulSet back down to three replicas:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, use kubectl patch to scale the StatefulSet back down to three replicas:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14387"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14388"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 
1/1 Running 0 3h\nweb-1 1/1 Running 0 3h\nweb-2 1/1 Running 0 55s\nweb-3 1/1 Running 0 36s\nweb-4 0/1 ContainerCreating 0 18s", "completion": "NAME READY STATUS RESTARTS AGE\nweb-4 1/1 Running 0 19s\nweb-4 1/1 Terminating 0 24s\nweb-4 1/1 Terminating 0 24s\nweb-3 1/1 Terminating 0 42s\nweb-3 1/1 Terminating 0 42s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14389"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 3h\nweb-1 1/1 Running 0 3h\nweb-2 1/1 Running 0 55s\nweb-3 1/1 Running 0 36s\nweb-4 0/1 ContainerCreating 0 18s\nNAME READY STATUS RESTARTS AGE\nweb-4 1/1 Running 0 19s\nweb-4 1/1 Terminating 0 24s\nweb-4 1/1 Terminating 0 24s\nweb-3 1/1 Terminating 0 42s\nweb-3 1/1 Terminating 0 42s", "completion": "This text code StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14390"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\nwww-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 13h", "completion": "www-web-2 Bound 
pvc-e1125b27-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-3 Bound pvc-e1176df6-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO 13h", "language": "text", "type": "code-example", "context": "StatefulSet Basics | There are still five PersistentVolumeClaims and five PersistentVolumes. When exploring a Pod's stable storage, you saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted w", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14391"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\nwww-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-2 Bound pvc-e1125b27-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-3 Bound pvc-e1176df6-b508-11e6-932f-42010a800002 1Gi RWO 13h\nwww-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO 13h", "completion": "This text code StatefulSet Basics | There are still five PersistentVolumeClaims and five PersistentVolumes. When exploring a Pod's stable storage, you saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted w", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | There are still five PersistentVolumeClaims and five PersistentVolumes. 
When exploring a Pod's stable storage, you saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted w", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14392"} {"prompt": "Explain this shell code:\n\nkubectl patch statefulset web --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\":\"registry.k8s.io/nginx-slim:0.24\"}]'", "completion": "This shell code StatefulSet Basics | In another terminal, watch the Pods in the StatefulSet:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, watch the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14393"} {"prompt": "Complete this shell code:\n\n# End this watch when the rollout is complete\n#", "completion": "# If you're not sure, leave it running one more minute\nkubectl get pod -l app=nginx --watch", "language": "shell", "type": "code-example", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14394"} {"prompt": "Explain this shell code:\n\n# End this watch when the rollout is complete\n#\n# If you're not sure, leave it running one more minute\nkubectl get pod -l app=nginx --watch", "completion": "This shell code StatefulSet Basics | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14395"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 7m\nweb-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 8m\nweb-2 1/1 Terminating 0 8m\nweb-2 1/1 
Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s\nweb-1 1/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m", "completion": "web-1 0/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 6s\nweb-0 1/1 Terminating 0 7m\nweb-0 1/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 10s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod. 
No", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14396"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 7m\nweb-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 8m\nweb-2 1/1 Terminating 0 8m\nweb-2 1/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Terminating 0 8m\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 Pending 0 0s\nweb-2 0/1 ContainerCreating 0 0s\nweb-2 1/1 Running 0 19s\nweb-1 1/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Terminating 0 8m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 6s\nweb-0 1/1 Terminating 0 7m\nweb-0 1/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Terminating 0 7m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 10s", "completion": "This text code StatefulSet Basics | The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod. No", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod. 
No", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14397"} {"prompt": "Explain this shell code:\n\nfor p in 0 1 2; do kubectl get pod \"web-$p\" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done", "completion": "This shell code StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14398"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/nginx-slim:0.24\nregistry.k8s.io/nginx-slim:0.24\nregistry.k8s.io/nginx-slim:0.24", "completion": "This text code StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14399"} {"prompt": "Complete this shell code:\n\n# The value of \"partition\" determines which ordinals a change applies to\n# Make sure to use a number bigger than the last ordinal for the", "completion": "# StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":3}}}}'", "language": "shell", "type": "code-example", "context": "StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14400"} {"prompt": "Explain this shell code:\n\n# The value 
of \"partition\" determines which ordinals a change applies to\n# Make sure to use a number bigger than the last ordinal for the\n# StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":3}}}}'", "completion": "This shell code StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14401"} {"prompt": "Explain this shell code:\n\nkubectl patch statefulset web --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\":\"registry.k8s.io/nginx-slim:0.21\"}]'", "completion": "This shell code StatefulSet Basics | Delete a Pod in the StatefulSet:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Delete a Pod in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14402"} {"prompt": "Explain this shell code:\n\n# End the watch when you see that web-2 is healthy\nkubectl get pod -l app=nginx --watch", "completion": "This shell code StatefulSet Basics | Get the Pod's container image:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14403"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m", "completion": "web-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "language": "text", "type": "code-example", 
"context": "StatefulSet Basics | Get the Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14404"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m\nweb-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "completion": "This text code StatefulSet Basics | Get the Pod's container image:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14405"} {"prompt": "Explain this shell code:\n\nkubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This shell code StatefulSet Basics | Notice that, even though the update strategy is RollingUpdate the StatefulSet restored the Pod with the original container image. This is because the ordinal of the Pod is less than the partition spec", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Notice that, even though the update strategy is RollingUpdate the StatefulSet restored the Pod with the original container image. This is because the ordinal of the Pod is less than the partition spec", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14406"} {"prompt": "Explain this shell code:\n\n# The value of \"partition\" should match the highest existing ordinal for\n# the StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":2}}}}'", "completion": "This shell code StatefulSet Basics | The control plane triggers replacement for web-2 (implemented by a graceful delete followed by creating a new Pod once the deletion is complete). 
Wait for the new web-2 Pod to be Running and Ready.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane triggers replacement for web-2 (implemented by a graceful delete followed by creating a new Pod once the deletion is complete). Wait for the new web-2 Pod to be Running and Ready.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14407"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This shell code StatefulSet Basics | Get the Pod's container:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14408"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m", "completion": "web-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Get the Pod's container:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14409"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 4m\nweb-1 1/1 Running 0 4m\nweb-2 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 18s", "completion": "This text code StatefulSet Basics | Get the Pod's container:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14410"} {"prompt": "Explain this shell code:\n\nkubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This 
shell code StatefulSet Basics | When you changed the partition, the StatefulSet controller automatically updated the web-2 Pod because the Pod's ordinal was greater than or equal to the partition.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | When you changed the partition, the StatefulSet controller automatically updated the web-2 Pod because the Pod's ordinal was greater than or equal to the partition.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14411"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This shell code StatefulSet Basics | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14412"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-2 1/1 Running 0 2m\nweb-1 0/1 Terminating 0 6m", "completion": "web-1 0/1 Terminating 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Get the web-1 Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14413"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-2 1/1 Running 0 2m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Terminating 0 6m\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 Pending 0 0s\nweb-1 0/1 ContainerCreating 0 0s\nweb-1 1/1 Running 0 18s", "completion": "This text code 
StatefulSet Basics | Get the web-1 Pod's container image:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the web-1 Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14414"} {"prompt": "Explain this shell code:\n\nkubectl get pod web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This shell code StatefulSet Basics | web-1 was restored to its original configuration because the Pod's ordinal was less than the partition. When a partition is specified, all Pods with an ordinal that is greater than or equal to the par", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | web-1 was restored to its original configuration because the Pod's ordinal was less than the partition. When a partition is specified, all Pods with an ordinal that is greater than or equal to the par", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14415"} {"prompt": "Explain this shell code:\n\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":0}}}}'", "completion": "This shell code StatefulSet Basics | Wait for all of the Pods in the StatefulSet to become Running and Ready.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Wait for all of the Pods in the StatefulSet to become Running and Ready.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14416"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This shell code StatefulSet Basics | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet 
Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14417"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 3m\nweb-1 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 2m\nweb-1 1/1 Running 0 18s\nweb-0 1/1 Terminating 0 3m\nweb-0 1/1 Terminating 0 3m", "completion": "web-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 3s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Get the container image details for the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14418"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 3m\nweb-1 0/1 ContainerCreating 0 11s\nweb-2 1/1 Running 0 2m\nweb-1 1/1 Running 0 18s\nweb-0 1/1 Terminating 0 3m\nweb-0 1/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Terminating 0 3m\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 3s", "completion": "This text code StatefulSet Basics | Get the container image details for the Pods in the StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the container image details for the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14419"} {"prompt": "Explain this shell code:\n\nfor p in 0 1 2; do kubectl get pod \"web-$p\" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done", "completion": "This shell code StatefulSet Basics | By 
moving the partition to 0, you allowed the StatefulSet to continue the update process.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14420"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/nginx-slim:0.21\nregistry.k8s.io/nginx-slim:0.21\nregistry.k8s.io/nginx-slim:0.21", "completion": "This text code StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14421"} {"prompt": "Explain this shell code:\n\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"OnDelete\", \"rollingUpdate\": null}}}'", "completion": "This shell code StatefulSet Basics | When you select this update strategy, the StatefulSet controller does not automatically update Pods when a modification is made to the StatefulSet's .spec.template field. You need to manage the rollou", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | When you select this update strategy, the StatefulSet controller does not automatically update Pods when a modification is made to the StatefulSet's .spec.template field. 
You need to manage the rollou", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14422"} {"prompt": "Explain this text code:\n\n# End this watch when there are no Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Use kubectl delete to delete the StatefulSet. Make sure to supply the --cascade=orphan parameter to the command. This parameter tells Kubernetes to only delete the StatefulSet, and to not delete any o", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl delete to delete the StatefulSet. Make sure to supply the --cascade=orphan parameter to the command. This parameter tells Kubernetes to only delete the StatefulSet, and to not delete any o", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14423"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m", "completion": "web-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 5m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Even though web has been deleted, all of the Pods are still Running and Ready. Delete web-0:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14424"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 6m\nweb-1 1/1 Running 0 7m\nweb-2 1/1 Running 0 5m", "completion": "This text code StatefulSet Basics | Even though web has been deleted, all of the Pods are still Running and Ready. Delete web-0:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though web has been deleted, all of the Pods are still Running and Ready. 
Delete web-0:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14425"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-1 1/1 Running 0 10m\nweb-2 1/1 Running 0 7m", "completion": "This text code StatefulSet Basics | As the web StatefulSet has been deleted, web-0 has not been relaunched.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As the web StatefulSet has been deleted, web-0 has not been relaunched.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14426"} {"prompt": "Explain this shell code:\n\n# Leave this watch running until the next time you start a watch\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | In a second terminal, recreate the StatefulSet. Note that, unless you deleted the nginx Service (which you should not have), you will see an error indicating that the Service already exists.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, recreate the StatefulSet. Note that, unless you deleted the nginx Service (which you should not have), you will see an error indicating that the Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14427"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This shell code StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Ignore the error. 
It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14428"} {"prompt": "Explain this text code:\n\nstatefulset.apps/web created\nservice/nginx unchanged", "completion": "This text code StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14429"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. Since you recreated the", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. 
Since you recreated the", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14430"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-1 1/1 Running 0 16m\nweb-2 1/1 Running 0 2m\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s", "completion": "web-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 18s\nweb-2 1/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. Since you recreated the", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14431"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-1 1/1 Running 0 16m\nweb-2 1/1 Running 0 2m\nNAME READY STATUS RESTARTS AGE\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 Pending 0 0s\nweb-0 0/1 ContainerCreating 0 0s\nweb-0 1/1 Running 0 18s\nweb-2 1/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m\nweb-2 0/1 Terminating 0 3m", "completion": "This text code StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. Since you recreated the", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. 
Since you recreated the", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14432"} {"prompt": "Explain this shell code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This shell code StatefulSet Basics | Even though you deleted both the StatefulSet and the web-0 Pod, it still serves the hostname originally entered into its index.html file. This is because the StatefulSet never deletes the PersistentVo", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though you deleted both the StatefulSet and the web-0 Pod, it still serves the hostname originally entered into its index.html file. This is because the StatefulSet never deletes the PersistentVo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14433"} {"prompt": "Explain this shell code:\n\n# Leave this running until the next page section\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | In another terminal, delete the StatefulSet again. This time, omit the --cascade=orphan parameter.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, delete the StatefulSet again. This time, omit the --cascade=orphan parameter.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14434"} {"prompt": "Explain this shell code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This shell code StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. 
Before terminating a Pod, the StatefulSet controller waits fo", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14435"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 11m\nweb-1 1/1 Running 0 27m\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Terminating 0 12m\nweb-1 1/1 Terminating 0 29m", "completion": "web-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14436"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 11m\nweb-1 1/1 Running 0 27m\nNAME READY STATUS RESTARTS AGE\nweb-0 1/1 Terminating 0 12m\nweb-1 1/1 Terminating 0 29m\nweb-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-0 0/1 Terminating 0 12m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m\nweb-1 0/1 Terminating 0 29m", "completion": "This text code StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. 
Before terminating a Pod, the StatefulSet controller waits fo", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14437"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This shell code StatefulSet Basics | When all of the StatefulSet's Pods transition to Running and Ready, retrieve the contents of their index.html files:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | When all of the StatefulSet's Pods transition to Running and Ready, retrieve the contents of their index.html files:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14438"} {"prompt": "Explain this shell code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This shell code StatefulSet Basics | Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and web-0 and web-1 continue to serve their hostnames.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and web-0 and web-1 continue to serve their hostnames.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14439"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: 
nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14440"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "language": 
"yaml", "type": "service-definition_explanation", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14441"} {"prompt": "Explain this shell code:\n\n# Leave this watch running until the end of the section\nkubectl get pod -l app=nginx --watch", "completion": "This shell code StatefulSet Basics | In another terminal, reconfigure the StatefulSet for Parallel Pod management:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, reconfigure the StatefulSet for Parallel Pod management:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14442"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web-parallel.yaml", "completion": "This shell code StatefulSet Basics | Keep the terminal open where you're running the watch. In another terminal window, scale the StatefulSet:", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | Keep the terminal open where you're running the watch. 
In another terminal window, scale the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14443"} {"prompt": "Complete this text code:\n\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 7s\nweb-3 0/1 ContainerCreating 0 7s\nweb-2 0/1 Pending 0 0s", "completion": "web-4 0/1 Pending 0 0s\nweb-2 1/1 Running 0 8s\nweb-4 0/1 ContainerCreating 0 4s\nweb-3 1/1 Running 0 26s\nweb-4 1/1 Running 0 2s", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The StatefulSet launched three new Pods, and it did not wait for the first to become Running and Ready prior to launching the second and third Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14444"} {"prompt": "Explain this text code:\n\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 0s\nweb-3 0/1 Pending 0 7s\nweb-3 0/1 ContainerCreating 0 7s\nweb-2 0/1 Pending 0 0s\nweb-4 0/1 Pending 0 0s\nweb-2 1/1 Running 0 8s\nweb-4 0/1 ContainerCreating 0 4s\nweb-3 1/1 Running 0 26s\nweb-4 1/1 Running 0 2s", "completion": "This text code StatefulSet Basics | The StatefulSet launched three new Pods, and it did not wait for the first to become Running and Ready prior to launching the second and third Pods.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet launched three new Pods, and it did not wait for the first to become Running and Ready prior to launching the second and third Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14445"} {"prompt": "Explain this shell code:\n\nkubectl delete sts web\n# sts is an abbreviation for statefulset", "completion": "This shell code StatefulSet Basics | You can watch kubectl get to see those Pods being deleted.", "language": "shell", "type": 
"code-example_explanation", "context": "StatefulSet Basics | You can watch kubectl get to see those Pods being deleted.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14446"} {"prompt": "Explain this shell code:\n\n# end the watch when you've seen what you need to\nkubectl get pod -l app=nginx --watch", "completion": "This shell code StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14447"} {"prompt": "Complete this text code:\n\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-1 1/1 Terminating 0 44m\nweb-0 1/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m", "completion": "web-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m", "language": "text", "type": "code-example", "context": "StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": 
"k8s_00556_code_completion_14448"} {"prompt": "Explain this text code:\n\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-3 1/1 Terminating 0 9m\nweb-2 1/1 Terminating 0 9m\nweb-1 1/1 Terminating 0 44m\nweb-0 1/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-2 0/1 Terminating 0 9m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-1 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-0 0/1 Terminating 0 44m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m\nweb-3 0/1 Terminating 0 9m", "completion": "This text code StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14449"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\nwww-web-0 Bound pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO standard 25m\nwww-web-1 Bound pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO standard 24m", "completion": "www-web-2 Bound pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO standard 15m\nwww-web-3 Bound pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO standard 15m\nwww-web-4 Bound pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO standard 14m", "language": "text", "type": "code-example", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": 
"k8s_00556_code_completion_14450"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\nwww-web-0 Bound pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO standard 25m\nwww-web-1 Bound pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO standard 24m\nwww-web-2 Bound pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO standard 15m\nwww-web-3 Bound pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO standard 15m\nwww-web-4 Bound pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO standard 14m", "completion": "This text code StatefulSet Basics |", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14451"} {"prompt": "Complete this text code:\n\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO Delete Bound default/www-web-3 standard 15m\npvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO Delete Bound default/www-web-0 standard 25m", "completion": "pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO Delete Bound default/www-web-4 standard 14m\npvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO Delete Bound default/www-web-1 standard 24m\npvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO Delete Bound default/www-web-2 standard 15m", "language": "text", "type": "code-example", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14452"} {"prompt": "Explain this text code:\n\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO Delete Bound default/www-web-3 standard 15m\npvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO Delete Bound default/www-web-0 standard 25m\npvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO Delete Bound 
default/www-web-4 standard 14m\npvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO Delete Bound default/www-web-1 standard 24m\npvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO Delete Bound default/www-web-2 standard 15m", "completion": "This text code StatefulSet Basics |", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14453"} {"prompt": "Explain this shell code:\n\nkubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4", "completion": "This shell code StatefulSet Basics |", "language": "shell", "type": "code-example_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14454"} {"prompt": "Complete this text code:\n\npersistentvolumeclaim \"www-web-0\" deleted\npersistentvolumeclaim \"www-web-1\" deleted", "completion": "persistentvolumeclaim \"www-web-2\" deleted\npersistentvolumeclaim \"www-web-3\" deleted\npersistentvolumeclaim \"www-web-4\" deleted", "language": "text", "type": "code-example", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14455"} {"prompt": "Explain this text code:\n\npersistentvolumeclaim \"www-web-0\" deleted\npersistentvolumeclaim \"www-web-1\" deleted\npersistentvolumeclaim \"www-web-2\" deleted\npersistentvolumeclaim \"www-web-3\" deleted\npersistentvolumeclaim \"www-web-4\" deleted", "completion": "This text code StatefulSet Basics |", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14456"} {"prompt": "Complete this yaml code:\n\napplication/web/web.yaml\napiVersion: v1\nkind: 
Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14457"} {"prompt": "Explain this yaml code:\n\napplication/web/web.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | You will need to use at least two terminal windows. 
In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "language": "yaml", "type": "service-definition_explanation", "context": "StatefulSet Basics | You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14458"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | You will need to use at least two terminal windows. 
In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14459"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.21\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | You will need to use at least two terminal windows. In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "language": "yaml", "type": "service-definition_explanation", "context": "StatefulSet Basics | You will need to use at least two terminal windows. 
In the first terminal, use kubectl get to watch the creation of the StatefulSet's Pods.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14460"} {"prompt": "Explain this text code:\n\n# use this terminal to run commands that specify --watch\n# end this watch when you are asked to start a new watch\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In the second terminal, use kubectl apply to create the headless Service and StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In the second terminal, use kubectl apply to create the headless Service and StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14461"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This bash code StatefulSet Basics | The command above creates two Pods, each running an NGINX webserver. Get the nginx Service...", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | The command above creates two Pods, each running an NGINX webserver. 
Get the nginx Service...", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14462"} {"prompt": "Explain this text code:\n\n# Do not start a new watch;\n# this should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Notice that the web-1 Pod is not launched until the web-0 Pod is Running (see Pod Phase) and Ready (see type in Pod Conditions).", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14463"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec \"web-$i\" -- sh -c 'hostname'; done", "completion": "This text code StatefulSet Basics | Use kubectl run to execute a container that provides the nslookup command from the dnsutils package. Using nslookup on the Pods' hostnames, you can examine their in-cluster DNS addresses:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl run to execute a container that provides the nslookup command from the dnsutils package. Using nslookup on the Pods' hostnames, you can examine their in-cluster DNS addresses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14464"} {"prompt": "Explain this bash code:\n\nkubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm", "completion": "This bash code StatefulSet Basics | which starts a new shell. In that new shell, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | which starts a new shell. 
In that new shell, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14465"} {"prompt": "Explain this text code:\n\n# Run this in the dns-test container shell\nnslookup web-0.nginx", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14466"} {"prompt": "Explain this text code:\n\n# Start a new watch\n# End this watch when you've seen that the delete is finished\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In a second terminal, use kubectl delete to delete all the Pods in the StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, use kubectl delete to delete all the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14467"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. First, view the Pods' hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster DNS entries. 
First, view the Pods' hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14468"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done", "completion": "This text code StatefulSet Basics | then, run:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | then, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14469"} {"prompt": "Explain this bash code:\n\nkubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm", "completion": "This bash code StatefulSet Basics | which starts a new shell.In that new shell, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | which starts a new shell.In that new shell, run:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14470"} {"prompt": "Explain this text code:\n\n# Run this in the dns-test container shell\nnslookup web-0.nginx", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14471"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec \"web-$i\" -- sh -c 'echo \"$(hostname)\" > /usr/share/nginx/html/index.html'; done\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | If you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the volumeMounts (due to a bug when using hostPath volumes), by", "language": "text", "type": 
"code-example_explanation", "context": "StatefulSet Basics | If you instead see 403 Forbidden responses for the above curl command, you will need to fix the permissions of the directory mounted by the volumeMounts (due to a bug when using hostPath volumes), by", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14472"} {"prompt": "Explain this text code:\n\n# End this watch when you've reached the end of the section.\n# At the start of \"Scaling a StatefulSet\" you'll start a new watch.\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In a second terminal, delete all of the StatefulSet's Pods:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, delete all of the StatefulSet's Pods:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14473"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Verify the web servers continue to serve their hostnames:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14474"} {"prompt": "Complete this text code:\n\n# If you already have a watch running, you can continue using that.\n# Otherwise, start one.", "completion": "# End this watch when there are 5 healthy Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "language": "text", "type": "code-example", "context": "StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14475"} {"prompt": "Explain this text code:\n\n# If you already have a watch running, you can continue using that.\n# Otherwise, start one.\n# End this watch when there are 5 healthy Pods for the StatefulSet\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal window, use kubectl scale to scale the number of replicas to 5:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14476"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | The StatefulSet controller scaled the number of replicas. As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The StatefulSet controller scaled the number of replicas. 
As with StatefulSet creation, the StatefulSet controller created each Pod sequentially with respect to its ordinal index, and it waited for ea", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14477"} {"prompt": "Explain this text code:\n\n# End this watch when there are only 3 Pods for the StatefulSet\nkubectl get pod --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In another terminal, use kubectl patch to scale the StatefulSet back down to three replicas:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, use kubectl patch to scale the StatefulSet back down to three replicas:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14478"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane deleted one Pod at a time, in reverse order with respect to its ordinal index, and it waited for each Pod to be completely shut down before deleting the next one.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14479"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\":\"registry.k8s.io/nginx-slim:0.24\"}]'", "completion": "This bash code StatefulSet Basics | In another terminal, watch the Pods in the StatefulSet:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "StatefulSet Basics | In another terminal, watch the Pods in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14480"} {"prompt": "Complete this text code:\n\n# End this watch when the rollout is complete\n#", "completion": "# If you're not sure, leave it running one more minute\nkubectl get pod -l app=nginx --watch", "language": "text", "type": "code-example", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14481"} {"prompt": "Explain this text code:\n\n# End this watch when the rollout is complete\n#\n# If you're not sure, leave it running one more minute\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14482"} {"prompt": "Explain this text code:\n\nfor p in 0 1 2; do kubectl get pod \"web-$p\" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done", "completion": "This text code StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | All the Pods in the StatefulSet are now running the previous container image.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14483"} {"prompt": "Complete this text code:\n\n# The value of \"partition\" determines which ordinals a change applies to\n# Make sure to use a number bigger than the last ordinal for the", "completion": "# 
StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":3}}}}'", "language": "text", "type": "code-example", "context": "StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14484"} {"prompt": "Explain this text code:\n\n# The value of \"partition\" determines which ordinals a change applies to\n# Make sure to use a number bigger than the last ordinal for the\n# StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":3}}}}'", "completion": "This text code StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Patch the StatefulSet again to change the container image that this StatefulSet uses:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14485"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\":\"registry.k8s.io/nginx-slim:0.21\"}]'", "completion": "This bash code StatefulSet Basics | Delete a Pod in the StatefulSet:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Delete a Pod in the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14486"} {"prompt": "Explain this text code:\n\n# End the watch when you see that web-2 is healthy\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | Get the Pod's container image:", 
"language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container image:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14487"} {"prompt": "Explain this bash code:\n\nkubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This bash code StatefulSet Basics | Notice that, even though the update strategy is RollingUpdate the StatefulSet restored the Pod with the original container image. This is because the ordinal of the Pod is less than the partition spec", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Notice that, even though the update strategy is RollingUpdate the StatefulSet restored the Pod with the original container image. This is because the ordinal of the Pod is less than the partition spec", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14488"} {"prompt": "Explain this text code:\n\n# The value of \"partition\" should match the highest existing ordinal for\n# the StatefulSet\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":2}}}}'", "completion": "This text code StatefulSet Basics | The control plane triggers replacement for web-2 (implemented by a graceful delete followed by creating a new Pod once the deletion is complete). Wait for the new web-2 Pod to be Running and Ready.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The control plane triggers replacement for web-2 (implemented by a graceful delete followed by creating a new Pod once the deletion is complete). 
Wait for the new web-2 Pod to be Running and Ready.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14489"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | Get the Pod's container:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Get the Pod's container:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14490"} {"prompt": "Explain this bash code:\n\nkubectl get pod web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This bash code StatefulSet Basics | When you changed the partition, the StatefulSet controller automatically updated the web-2 Pod because the Pod's ordinal was greater than or equal to the partition.", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | When you changed the partition, the StatefulSet controller automatically updated the web-2 Pod because the Pod's ordinal was greater than or equal to the partition.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14491"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14492"} {"prompt": "Explain this bash code:\n\nkubectl get pod web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'", "completion": "This bash code StatefulSet Basics | web-1 was restored to 
its original configuration because the Pod's ordinal was less than the partition. When a partition is specified, all Pods with an ordinal that is greater than or equal to the par", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | web-1 was restored to its original configuration because the Pod's ordinal was less than the partition. When a partition is specified, all Pods with an ordinal that is greater than or equal to the par", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14493"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":0}}}}'", "completion": "This bash code StatefulSet Basics | Wait for all of the Pods in the StatefulSet to become Running and Ready.", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Wait for all of the Pods in the StatefulSet to become Running and Ready.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14494"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14495"} {"prompt": "Explain this text code:\n\nfor p in 0 1 2; do kubectl get pod \"web-$p\" --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done", "completion": "This text code StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "language": "text", "type": 
"code-example_explanation", "context": "StatefulSet Basics | By moving the partition to 0, you allowed the StatefulSet to continue the update process.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14496"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulset web -p '{\"spec\":{\"updateStrategy\":{\"type\":\"OnDelete\", \"rollingUpdate\": null}}}'", "completion": "This bash code StatefulSet Basics | When you select this update strategy, the StatefulSet controller does not automatically update Pods when a modification is made to the StatefulSet's .spec.template field. You need to manage the rollou", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | When you select this update strategy, the StatefulSet controller does not automatically update Pods when a modification is made to the StatefulSet's .spec.template field. You need to manage the rollou", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14497"} {"prompt": "Explain this text code:\n\n# Leave this watch running until the next time you start a watch\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In a second terminal, recreate the StatefulSet. Note that, unless you deleted the nginx Service (which you should not have), you will see an error indicating that the Service already exists.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In a second terminal, recreate the StatefulSet. 
Note that, unless you deleted the nginx Service (which you should not have), you will see an error indicating that the Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14498"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This bash code StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Ignore the error. It only indicates that an attempt was made to create the nginx headless Service even though that Service already exists.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14499"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. Since you recreated the", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | When the web StatefulSet was recreated, it first relaunched web-0. Since web-1 was already Running and Ready, when web-0 transitioned to Running and Ready, it adopted this Pod. 
Since you recreated the", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14500"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | Even though you deleted both the StatefulSet and the web-0 Pod, it still serves the hostname originally entered into its index.html file. This is because the StatefulSet never deletes the PersistentVo", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though you deleted both the StatefulSet and the web-0 Pod, it still serves the hostname originally entered into its index.html file. This is because the StatefulSet never deletes the PersistentVo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14501"} {"prompt": "Explain this text code:\n\n# Leave this running until the next page section\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | In another terminal, delete the StatefulSet again. This time, omit the --cascade=orphan parameter.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, delete the StatefulSet again. This time, omit the --cascade=orphan parameter.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14502"} {"prompt": "Explain this text code:\n\n# This should already be running\nkubectl get pods --watch -l app=nginx", "completion": "This text code StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. 
Before terminating a Pod, the StatefulSet controller waits fo", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | As you saw in the Scaling Down section, the Pods are terminated one at a time, with respect to the reverse order of their ordinal indices. Before terminating a Pod, the StatefulSet controller waits fo", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14503"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web.yaml", "completion": "This bash code StatefulSet Basics | When all of the StatefulSet's Pods transition to Running and Ready, retrieve the contents of their index.html files:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | When all of the StatefulSet's Pods transition to Running and Ready, retrieve the contents of their index.html files:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14504"} {"prompt": "Explain this text code:\n\nfor i in 0 1; do kubectl exec -i -t \"web-$i\" -- curl http://localhost/; done", "completion": "This text code StatefulSet Basics | Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and web-0 and web-1 continue to serve their hostnames.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and web-0 and web-1 continue to serve their hostnames.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14505"} {"prompt": "Complete this yaml code:\n\napplication/web/web-parallel.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: 
nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14506"} {"prompt": "Explain this yaml code:\n\napplication/web/web-parallel.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy 
of the web StatefulSet is set to Parallel.", "language": "yaml", "type": "service-definition_explanation", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14507"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:", "completion": " matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_completion_14508"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n serviceName: \"nginx\"\n podManagementPolicy: \"Parallel\"\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n 
spec:\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "language": "yaml", "type": "service-definition_explanation", "context": "StatefulSet Basics | This manifest is identical to the one you downloaded above except that the .spec.podManagementPolicy of the web StatefulSet is set to Parallel.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14509"} {"prompt": "Explain this text code:\n\n# Leave this watch running until the end of the section\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | In another terminal, reconfigure the StatefulSet for Parallel Pod management:", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | In another terminal, reconfigure the StatefulSet for Parallel Pod management:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14510"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/web/web-parallel.yaml", "completion": "This bash code StatefulSet Basics | Keep the terminal open where you're running the watch. In another terminal window, scale the StatefulSet:", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | Keep the terminal open where you're running the watch. 
In another terminal window, scale the StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14511"} {"prompt": "Explain this bash code:\n\nkubectl delete sts web\n# sts is an abbreviation for statefulset", "completion": "This bash code StatefulSet Basics | You can watch kubectl get to see those Pods being deleted.", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics | You can watch kubectl get to see those Pods being deleted.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14512"} {"prompt": "Explain this text code:\n\n# end the watch when you've seen what you need to\nkubectl get pod -l app=nginx --watch", "completion": "This text code StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "language": "text", "type": "code-example_explanation", "context": "StatefulSet Basics | During deletion, a StatefulSet removes all Pods concurrently; it does not wait for a Pod's ordinal successor to terminate prior to deleting that Pod.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14513"} {"prompt": "Explain this bash code:\n\nkubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4", "completion": "This bash code StatefulSet Basics |", "language": "bash", "type": "kubectl-command_explanation", "context": "StatefulSet Basics |", "source": "https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/", "id": "k8s_00556_code_explain_14514"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the BestEffort 
QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14515"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This text code Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14516"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14517"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This text code Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14518"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14519"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This text code Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14520"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14521"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14522"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14523"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This text code Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. 
But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14524"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14525"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14526"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14527"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This yaml code Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14528"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14529"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14530"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14531"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This yaml code Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. 
It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14532"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14533"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This yaml code Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14534"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14535"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This yaml code Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14536"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14537"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This yaml code Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14538"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14539"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This text code Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14540"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14541"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This text code Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14542"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14543"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This text code Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14544"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14545"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14546"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14547"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This text code Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14548"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_completion_14549"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "text", "type": "code-example_explanation", "context": "Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/policy/node-resource-managers/", "id": "k8s_00559_code_explain_14550"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14551"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14552"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code 2 - Objects In Kubernetes | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14553"} {"prompt": "Explain this bash code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This bash code 2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14554"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14555"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify 
objects.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14556"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14557"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14558"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14559"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14560"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). 
For example:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14561"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code 2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14562"} {"prompt": "Complete this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)", "completion": "partition\n!partition", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14563"} {"prompt": "Explain this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)\npartition\n!partition", "completion": "This text code 2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. 
So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14564"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This bash code 2.3 - Labels and Selectors | or using set-based requirements:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14565"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This bash code 2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14566"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This bash code 2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14567"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This bash code 2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14568"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14569"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This text code 2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14570"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14571"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This text code 2.3 - Labels and Selectors | and", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14572"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: replica", "language": "text", "type": "code-example", 
"context": "2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14573"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This text code 2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14574"} {"prompt": "Explain this bash code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This bash code 2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14575"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>", "completion": "guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14576"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>\nguestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "completion": "This text code 2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14577"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nguestbook-redis-replica-2q2yf 1/1 Running 0 3m\nguestbook-redis-replica-qgazl 1/1 Running 0 3m", "completion": "This text code 2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14578"} {"prompt": "Explain this text code:\n\npod/my-nginx-2035384211-j5fhi labeled\npod/my-nginx-2035384211-u2c7e labeled\npod/my-nginx-2035384211-u3t6x labeled", "completion": "This text code 2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". To see the pods you labeled, run:", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". 
To see the pods you labeled, run:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14579"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe", "completion": "my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14580"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe\nmy-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "completion": "This text code 2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14581"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14582"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", 
"completion": "This text code 2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14583"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code 2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14584"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code 2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. 
This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14585"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14586"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code 2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14587"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14588"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "language": "text", "type": "code-example_explanation", "context": "2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14589"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. 
Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14590"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "language": "yaml", "type": "pod-definition_explanation", "context": "2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14591"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This bash code 2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. 
For example", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14592"} {"prompt": "Explain this text code:\n\nError from server (BadRequest): Unable to find \"ingresses\" that match label selector \"\", field selector \"foo.bar=baz\": \"foo.bar\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"", "completion": "This text code 2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "language": "text", "type": "code-example_explanation", "context": "2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14593"} {"prompt": "Explain this bash code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14594"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This bash code 2.6 - Field Selectors | You can use field selectors across multiple resource types. 
This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14595"} {"prompt": "Explain this bash code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14596"} {"prompt": "Complete this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:", "completion": " app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "language": "yaml", "type": "kubernetes-manifest", "context": "2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14597"} {"prompt": "Explain this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "completion": "This yaml code 2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14598"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "deployment-definition", "context": "2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14599"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 2.9 - Recommended Labels | The Service is used to expose the application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14600"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. 
The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14601"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14602"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "deployment-definition", "context": "2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14603"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml 
code 2.9 - Recommended Labels | The Service is used to expose WordPress:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14604"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14605"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14606"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: 
\"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14607"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14608"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14609"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n 
app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14610"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }", "completion": " },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "language": "json", "type": "api-response", "context": "3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14611"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n 
\"version\": \"v1\"\n }\n },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "completion": "This json code 3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "language": "json", "type": "api-response_explanation", "context": "3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14612"} {"prompt": "Complete this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "json", "type": "api-response", "context": "3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14613"} {"prompt": "Explain this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This json code 3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "json", "type": "api-response_explanation", "context": "3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14614"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14615"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14616"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This shell code 2 - Objects In Kubernetes | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2 - Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14617"} {"prompt": "Explain this sh code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This sh code 2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "language": "sh", "type": "code-example_explanation", "context": "2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14618"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14619"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", 
"source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14620"} {"prompt": "Complete this json code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "json", "type": "api-response", "context": "2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14621"} {"prompt": "Explain this json code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This json code 2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "json", "type": "api-response_explanation", "context": "2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14622"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14623"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14624"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). 
For example:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14625"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code 2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14626"} {"prompt": "Complete this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)", "completion": "partition\n!partition", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14627"} {"prompt": "Explain this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)\npartition\n!partition", "completion": "This text code 2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. 
So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14628"} {"prompt": "Explain this text code:\n\npartition in (customerA, customerB),environment!=qa", "completion": "This text code 2.3 - Labels and Selectors | For list and watch operations, you can specify label selectors to filter the sets of objects returned; you specify the filter using a query parameter. (To learn in detail about watches in Kubernetes,", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | For list and watch operations, you can specify label selectors to filter the sets of objects returned; you specify the filter using a query parameter. (To learn in detail about watches in Kubernetes,", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14629"} {"prompt": "Explain this text code:\n\n?labelSelector=environment%3Dproduction,tier%3Dfrontend", "completion": "This text code 2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. 
For example, targeting apiserver with kubectl and using equality-based one may write:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14630"} {"prompt": "Explain this text code:\n\n?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29", "completion": "This text code 2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14631"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This shell code 2.3 - Labels and Selectors | or using set-based requirements:", "language": "shell", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14632"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This shell code 2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "shell", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14633"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This shell code 2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "language": "shell", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14634"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This shell code 2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "shell", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14635"} {"prompt": "Complete this yaml code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14636"} {"prompt": "Explain this yaml code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This yaml code 2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14637"} {"prompt": "Complete this yaml code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14638"} {"prompt": "Explain this yaml code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This yaml code 2.3 - Labels and Selectors | and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14639"} {"prompt": "Complete this yaml code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: replica", "language": "yaml", 
"type": "kubernetes-manifest", "context": "2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14640"} {"prompt": "Explain this yaml code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This yaml code 2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14641"} {"prompt": "Explain this shell code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This shell code 2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "shell", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14642"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>", "completion": "guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "language": "none", "type": "code-example", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14643"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>\nguestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "completion": "This none code 2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "language": "none", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14644"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nguestbook-redis-replica-2q2yf 1/1 Running 0 3m\nguestbook-redis-replica-qgazl 1/1 Running 0 3m", "completion": "This none code 2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "none", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14645"} {"prompt": "Explain this none code:\n\npod/my-nginx-2035384211-j5fhi labeled\npod/my-nginx-2035384211-u2c7e labeled\npod/my-nginx-2035384211-u3t6x labeled", "completion": "This none code 2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". To see the pods you labeled, run:", "language": "none", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". 
To see the pods you labeled, run:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14646"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe", "completion": "my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "language": "none", "type": "code-example", "context": "2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14647"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe\nmy-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "completion": "This none code 2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "language": "none", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14648"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14649"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", 
"completion": "This text code 2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14650"} {"prompt": "Explain this shell code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This shell code 2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "shell", "type": "code-example_explanation", "context": "2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14651"} {"prompt": "Explain this shell code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This shell code 2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "shell", "type": "code-example_explanation", "context": "2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. 
This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14652"} {"prompt": "Complete this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "shell", "type": "code-example", "context": "2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14653"} {"prompt": "Explain this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This shell code 2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "shell", "type": "code-example_explanation", "context": "2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14654"} {"prompt": "Complete this json code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "json", "type": "api-response", "context": "2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14655"} {"prompt": "Explain this json code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This json code 2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "language": "json", "type": "api-response_explanation", "context": "2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14656"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. 
Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14657"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "language": "yaml", "type": "pod-definition_explanation", "context": "2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14658"} {"prompt": "Explain this shell code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This shell code 2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "shell", "type": "code-example_explanation", "context": "2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. 
For example", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14659"} {"prompt": "Explain this text code:\n\nError from server (BadRequest): Unable to find \"ingresses\" that match label selector \"\", field selector \"foo.bar=baz\": \"foo.bar\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"", "completion": "This text code 2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "language": "text", "type": "code-example_explanation", "context": "2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14660"} {"prompt": "Explain this shell code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This shell code 2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "shell", "type": "code-example_explanation", "context": "2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14661"} {"prompt": "Explain this shell code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This shell code 2.6 - Field Selectors | You can use field selectors across multiple resource types. 
This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "shell", "type": "code-example_explanation", "context": "2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14662"} {"prompt": "Explain this shell code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This shell code 2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "language": "shell", "type": "code-example_explanation", "context": "2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14663"} {"prompt": "Explain this bash code:\n\nkubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace", "completion": "This bash code 2.8 - Owners and Dependents | When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the resource. Finalizers prevent accidental deletion of resources your c", "language": "bash", "type": "kubectl-command_explanation", "context": "2.8 - Owners and Dependents | When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the resource. 
Finalizers prevent accidental deletion of resources your c", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14664"} {"prompt": "Complete this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:", "completion": " app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "language": "yaml", "type": "kubernetes-manifest", "context": "2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14665"} {"prompt": "Explain this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "completion": "This yaml code 2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14666"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "deployment-definition", "context": "2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14667"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 2.9 - Recommended Labels | The Service is used to expose the application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14668"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. 
The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14669"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14670"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "deployment-definition", "context": "2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14671"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml 
code 2.9 - Recommended Labels | The Service is used to expose WordPress:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14672"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14673"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14674"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: 
\"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14675"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14676"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14677"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n 
app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14678"} {"prompt": "Explain this text code:\n\nAccept: application/json;v=v2;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList", "completion": "This text code 3 - The Kubernetes API | Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an unaggregated discovery document.", "language": "text", "type": "code-example_explanation", "context": "3 - The Kubernetes API | Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an unaggregated discovery document.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14679"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }", "completion": " },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "language": "json", "type": "api-response", "context": "3 - 
The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14680"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "completion": "This json code 3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "language": "json", "type": "api-response_explanation", "context": "3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14681"} {"prompt": "Explain this text code:\n\napplication/com.github.proto-openapi.spec.v2@v1.0+protobuf", "completion": "This text code 3 - The Kubernetes API | Kubernetes supports publishing a description of its APIs as OpenAPI v3.", "language": "text", "type": "code-example_explanation", "context": "3 - The Kubernetes API | Kubernetes supports publishing a description of its APIs as OpenAPI v3.", "source": 
"https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14682"} {"prompt": "Complete this yaml code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14683"} {"prompt": "Explain this yaml code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This yaml code 3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14684"} {"prompt": "Explain this text code:\n\napplication/com.github.proto-openapi.spec.v3@v1.0+protobuf", "completion": "This text code 3 - The Kubernetes API | A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.", "language": "text", "type": "code-example_explanation", "context": "3 - The Kubernetes API | A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14685"} {"prompt": "Complete this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14686"} {"prompt": "Explain this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14687"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14688"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14689"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code 2 - Objects In Kubernetes | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2 - Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14690"} {"prompt": "Explain this bash code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This bash code 2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14691"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14692"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify 
objects.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14693"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14694"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14695"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14696"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14697"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). 
For example:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14698"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code 2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14699"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This bash code 2.3 - Labels and Selectors | or using set-based requirements:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14700"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This bash code 2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14701"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This bash code 2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14702"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This bash code 2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14703"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14704"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This text code 2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14705"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "text", "type": "code-example", "context": "2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14706"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This text code 2.3 - Labels and Selectors | and", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14707"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: replica", "language": "text", "type": "code-example", 
"context": "2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14708"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This text code 2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "text", "type": "code-example_explanation", "context": "2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14709"} {"prompt": "Explain this bash code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This bash code 2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "bash", "type": "kubectl-command_explanation", "context": "2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14710"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code 2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14711"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code 2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14712"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14713"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code 2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14714"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14715"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "language": "text", "type": "code-example_explanation", "context": "2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14716"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n 
containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14717"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "language": "yaml", "type": "pod-definition_explanation", "context": "2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14718"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This bash code 2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. 
For example", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14719"} {"prompt": "Explain this bash code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14720"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This bash code 2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14721"} {"prompt": "Explain this bash code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. 
Finalizers alert controllers to clean up r", "language": "bash", "type": "kubectl-command_explanation", "context": "2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14722"} {"prompt": "Complete this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:", "completion": " app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "language": "yaml", "type": "kubernetes-manifest", "context": "2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14723"} {"prompt": "Explain this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "completion": "This yaml code 2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14724"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "deployment-definition", "context": "2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14725"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 2.9 - Recommended Labels | The Service is used to expose the application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14726"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. 
The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14727"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14728"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "deployment-definition", "context": "2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14729"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml 
code 2.9 - Recommended Labels | The Service is used to expose WordPress:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14730"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14731"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14732"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: 
\"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14733"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14734"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14735"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n 
app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "language": "yaml", "type": "service-definition_explanation", "context": "2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14736"} {"prompt": "Complete this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "json", "type": "api-response", "context": "3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_completion_14737"} {"prompt": "Explain this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This json code 3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "json", "type": "api-response_explanation", "context": "3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/overview/_print/", "id": "k8s_00560_code_explain_14738"} {"prompt": "Complete this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok", "completion": "[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check passed", "language": "json", "type": "api-response", "context": "Kubernetes API health endpoints | The Kubernetes API server also supports excluding specific checks. 
The query parameters can also be combined like in this example:", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_completion_14739"} {"prompt": "Explain this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check passed", "completion": "This json code Kubernetes API health endpoints | The Kubernetes API server also supports excluding specific checks. The query parameters can also be combined like in this example:", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API health endpoints | The Kubernetes API server also supports excluding specific checks. 
The query parameters can also be combined like in this example:", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_explain_14740"} {"prompt": "Explain this text code:\n\ncurl -k 'https://localhost:6443/readyz?verbose&exclude=etcd'", "completion": "This text code Kubernetes API health endpoints | The output show that the etcd check is excluded:", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API health endpoints | The output show that the etcd check is excluded:", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_explain_14741"} {"prompt": "Complete this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd excluded: ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok", "completion": "[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]shutdown ok\nhealthz check passed", "language": "json", "type": "api-response", "context": "Kubernetes API health endpoints | Each individual health check exposes an HTTP endpoint and can be checked individually. 
The schema for the individual health checks is /livez/<healthcheck-name> or /readyz/<healthcheck-name>, where liv", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_completion_14742"} {"prompt": "Explain this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd excluded: ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]shutdown ok\nhealthz check passed", "completion": "This json code Kubernetes API health endpoints | Each individual health check exposes an HTTP endpoint and can be checked individually. The schema for the individual health checks is /livez/<healthcheck-name> or /readyz/<healthcheck-name>, where liv", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API health endpoints | Each individual health check exposes an HTTP endpoint and can be checked individually. 
The schema for the individual health checks is /livez/<healthcheck-name> or /readyz/<healthcheck-name>, where liv", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_explain_14743"} {"prompt": "Complete this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok", "completion": "[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check passed", "language": "json", "type": "api-response", "context": "Kubernetes API health endpoints | The Kubernetes API server also supports excluding specific checks. 
The query parameters can also be combined like in this example:", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_completion_14744"} {"prompt": "Explain this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check passed", "completion": "This json code Kubernetes API health endpoints | The Kubernetes API server also supports excluding specific checks. The query parameters can also be combined like in this example:", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API health endpoints | The Kubernetes API server also supports excluding specific checks. 
The query parameters can also be combined like in this example:", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_explain_14745"} {"prompt": "Explain this shell code:\n\ncurl -k 'https://localhost:6443/readyz?verbose&exclude=etcd'", "completion": "This shell code Kubernetes API health endpoints | The output shows that the etcd check is excluded:", "language": "shell", "type": "code-example_explanation", "context": "Kubernetes API health endpoints | The output shows that the etcd check is excluded:", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_explain_14746"} {"prompt": "Complete this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd excluded: ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok", "completion": "[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]shutdown ok\nhealthz check passed", "language": "json", "type": "api-response", "context": "Kubernetes API health endpoints | Each individual health check exposes an HTTP endpoint and can be checked individually. 
The schema for the individual health checks is /livez/<healthcheck-name> or /readyz/<healthcheck-name>, where liv", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_completion_14747"} {"prompt": "Explain this json code:\n\n[+]ping ok\n[+]log ok\n[+]etcd excluded: ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]shutdown ok\nhealthz check passed", "completion": "This json code Kubernetes API health endpoints | Each individual health check exposes an HTTP endpoint and can be checked individually. The schema for the individual health checks is /livez/<healthcheck-name> or /readyz/<healthcheck-name>, where liv", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API health endpoints | Each individual health check exposes an HTTP endpoint and can be checked individually. 
The schema for the individual health checks is /livez/<healthcheck-name> or /readyz/<healthcheck-name>, where liv", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_explain_14748"} {"prompt": "Explain this text code:\n\ncurl -k 'https://localhost:6443/readyz?verbose&exclude=etcd'", "completion": "This text code Kubernetes API health endpoints | The output shows that the etcd check is excluded:", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API health endpoints | The output shows that the etcd check is excluded:", "source": "https://kubernetes.io/docs/reference/using-api/health-checks/", "id": "k8s_00561_code_explain_14749"} {"prompt": "Explain this bash code:\n\nkubectl set subject (-f FILENAME | TYPE NAME) [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This bash code kubectl set subject | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set subject | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/", "id": "k8s_00563_code_explain_14750"} {"prompt": "Complete this text code:\n\n# Update a cluster role binding for serviceaccount1\n kubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1\n \n # Update a role binding for user1, user2, and group1", "completion": " kubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1\n \n # Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server\n kubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "language": "text", "type": "code-example", "context": 
"kubectl set subject | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/", "id": "k8s_00563_code_completion_14751"} {"prompt": "Explain this text code:\n\n# Update a cluster role binding for serviceaccount1\n kubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1\n \n # Update a role binding for user1, user2, and group1\n kubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1\n \n # Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server\n kubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This text code kubectl set subject | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set subject | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/", "id": "k8s_00563_code_explain_14752"} {"prompt": "Explain this bash code:\n\nkubectl set subject (-f FILENAME | TYPE NAME) [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This bash code kubectl set subject | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set subject | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/", "id": "k8s_00563_code_explain_14753"} {"prompt": "Complete this text code:\n\n# Update a cluster role binding for serviceaccount1\n kubectl set subject 
clusterrolebinding admin --serviceaccount=namespace:serviceaccount1\n \n # Update a role binding for user1, user2, and group1", "completion": " kubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1\n \n # Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server\n kubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "language": "text", "type": "code-example", "context": "kubectl set subject | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/", "id": "k8s_00563_code_completion_14754"} {"prompt": "Explain this text code:\n\n# Update a cluster role binding for serviceaccount1\n kubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1\n \n # Update a role binding for user1, user2, and group1\n kubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1\n \n # Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server\n kubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This text code kubectl set subject | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set subject | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_subject/", "id": "k8s_00563_code_explain_14755"} {"prompt": "Explain this bash code:\n\nkubectl create token my-sa --bound-object-kind=\"Pod\" --bound-object-name=\"test-pod\"", "completion": "This bash code Managing Service Accounts | Copy this token into a new file 
named tokenreview.yaml:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Service Accounts | Copy this token into a new file named tokenreview.yaml:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14756"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview", "completion": "spec:\n token: <token from step 2>", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | Submit this resource to the apiserver for review:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14757"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nspec:\n token: <token from step 2>", "completion": "This yaml code Managing Service Accounts | Submit this resource to the apiserver for review:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | Submit this resource to the apiserver for review:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14758"} {"prompt": "Explain this text code:\n\n# use '-o yaml' to inspect the output\nkubectl create -o yaml -f tokenreview.yaml", "completion": "This text code Managing Service Accounts | You should see an output like below:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | You should see an output like below:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14759"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nmetadata:\n creationTimestamp: null\nspec:\n token: <token>\nstatus:\n audiences:\n - https://kubernetes.default.svc.cluster.local\n 
authenticated: true\n user:\n extra:\n authentication.kubernetes.io/credential-id:\n - JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc", "completion": " authentication.kubernetes.io/node-name:\n - kind-control-plane\n authentication.kubernetes.io/node-uid:\n - 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6\n authentication.kubernetes.io/pod-name:\n - test-pod\n authentication.kubernetes.io/pod-uid:\n - e87dbbd6-3d7e-45db-aafb-72b24627dff5\n groups:\n - system:serviceaccounts\n - system:serviceaccounts:default\n - system:authenticated\n uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e\n username: system:serviceaccount:default:my-sa", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14760"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nmetadata:\n creationTimestamp: null\nspec:\n token: <token>\nstatus:\n audiences:\n - https://kubernetes.default.svc.cluster.local\n authenticated: true\n user:\n extra:\n authentication.kubernetes.io/credential-id:\n - JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc\n authentication.kubernetes.io/node-name:\n - kind-control-plane\n authentication.kubernetes.io/node-uid:\n - 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6\n authentication.kubernetes.io/pod-name:\n - test-pod\n authentication.kubernetes.io/pod-uid:\n - e87dbbd6-3d7e-45db-aafb-72b24627dff5\n groups:\n - system:serviceaccounts\n - system:serviceaccounts:default\n - system:authenticated\n uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e\n username: system:serviceaccount:default:my-sa", "completion": "This yaml code Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not 
currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14761"} {"prompt": "Complete this json code:\n\n{\n \"aud\": [\n \"https://my-audience.example.com\"\n ],\n \"exp\": 1729605240,\n \"iat\": 1729601640,\n \"iss\": \"https://my-cluster.example.com\",\n \"jti\": \"aed34954-b33a-4142-b1ec-389d6bbb4936\",\n \"kubernetes.io\": {\n \"namespace\": \"my-namespace\",\n \"node\": {\n \"name\": \"my-node\",\n \"uid\": \"646e7c5e-32d6-4d42-9dbd-e504e6cbe6b1\"", "completion": " },\n \"pod\": {\n \"name\": \"my-pod\",\n \"uid\": \"5e0bd49b-f040-43b0-99b7-22765a53f7f3\"\n },\n \"serviceaccount\": {\n \"name\": \"my-serviceaccount\",\n \"uid\": \"14ee3fa4-a7e2-420f-9f9a-dbc4507c3798\"\n }\n },\n \"nbf\": 1729601640,\n \"sub\": \"system:serviceaccount:my-namespace:my-serviceaccount\"\n}", "language": "json", "type": "api-response", "context": "Managing Service Accounts | The aud and iss fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14762"} {"prompt": "Explain this json code:\n\n{\n \"aud\": [\n \"https://my-audience.example.com\"\n ],\n \"exp\": 1729605240,\n \"iat\": 1729601640,\n \"iss\": \"https://my-cluster.example.com\",\n \"jti\": \"aed34954-b33a-4142-b1ec-389d6bbb4936\",\n \"kubernetes.io\": {\n \"namespace\": \"my-namespace\",\n \"node\": {\n \"name\": \"my-node\",\n \"uid\": 
\"646e7c5e-32d6-4d42-9dbd-e504e6cbe6b1\"\n },\n \"pod\": {\n \"name\": \"my-pod\",\n \"uid\": \"5e0bd49b-f040-43b0-99b7-22765a53f7f3\"\n },\n \"serviceaccount\": {\n \"name\": \"my-serviceaccount\",\n \"uid\": \"14ee3fa4-a7e2-420f-9f9a-dbc4507c3798\"\n }\n },\n \"nbf\": 1729601640,\n \"sub\": \"system:serviceaccount:my-namespace:my-serviceaccount\"\n}", "completion": "This json code Managing Service Accounts | The aud and iss fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "language": "json", "type": "api-response_explanation", "context": "Managing Service Accounts | The aud and iss fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14763"} {"prompt": "Complete this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n sources:\n - serviceAccountToken:\n path: token # must match the path the app expects\n - configMap:\n items:", "completion": " - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. 
The three sources are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14764"} {"prompt": "Explain this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n sources:\n - serviceAccountToken:\n path: token # must match the path the app expects\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "completion": "This yaml code Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14765"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " name: build-robot\n namespace: default\nsecrets:\n - name: build-robot-secret # usually NOT present for a manually generated token", "language": "yaml", "type": "service-definition", "context": "Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). 
T", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14766"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\n namespace: default\nsecrets:\n - name: build-robot-secret # usually NOT present for a manually generated token", "completion": "This yaml code Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). T", "language": "yaml", "type": "service-definition_explanation", "context": "Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). T", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14767"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n namespace: default", "completion": " labels:\n kubernetes.io/legacy-token-last-used: 2022-10-24\n kubernetes.io/legacy-token-invalid-since: 2023-10-25\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14768"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n namespace: default\n labels:\n kubernetes.io/legacy-token-last-used: 2022-10-24\n 
kubernetes.io/legacy-token-invalid-since: 2023-10-25\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token", "completion": "This yaml code Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14769"} {"prompt": "Complete this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n defaultMode: 420 # decimal equivalent of octal 0644\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:", "completion": " items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three sources:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14770"} {"prompt": "Explain this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n defaultMode: 420 # decimal equivalent of octal 0644\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "completion": "This yaml code Managing Service Accounts 
| That manifest snippet defines a projected volume that combines information from three sources:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three sources:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14771"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token", "completion": "metadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | To create a Secret based on this example, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14772"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token\nmetadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "completion": "This yaml code Managing Service Accounts | To create a Secret based on this example, run:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | To create a Secret based on this example, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14773"} {"prompt": "Explain this bash code:\n\nkubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml", "completion": "This bash code Managing Service Accounts | To see the details for that Secret, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Service Accounts | To see the details for that Secret, run:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14774"} {"prompt": "Complete this text code:\n\nName: mysecretname\nNamespace: examplens\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name=myserviceaccount\n kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64\n", "completion": "Type: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1362 bytes\nnamespace: 9 bytes\ntoken: ...", "language": "text", "type": "code-example", "context": "Managing Service Accounts | If you launch a new Pod into the examplens namespace, it can use the myserviceaccount service-account-token Secret that you just created.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14775"} {"prompt": "Explain this text code:\n\nName: mysecretname\nNamespace: examplens\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name=myserviceaccount\n kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64\n\nType: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1362 bytes\nnamespace: 9 bytes\ntoken: ...", "completion": "This text code Managing Service Accounts | If you launch a new Pod into the examplens namespace, it can use the myserviceaccount service-account-token Secret that you just created.", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | If you launch a new Pod into the examplens namespace, it can use the myserviceaccount service-account-token Secret that you just created.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14776"} {"prompt": "Explain this text code:\n\n# This assumes that you already have a namespace named 'examplens'\nkubectl -n examplens get serviceaccount/example-automated-thing -o yaml", "completion": "This text code Managing Service Accounts | The 
output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | The output is similar to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14777"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"example-automated-thing\",\"namespace\":\"examplens\"}} \n creationTimestamp: \"2019-07-21T07:07:07Z\"", "completion": " name: example-automated-thing\n namespace: examplens\n resourceVersion: \"777\"\n selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing\n uid: f23fd170-66f2-4697-b049-e1e266b7f835\nsecrets:\n - name: example-automated-thing-token-zyxwv", "language": "yaml", "type": "service-definition", "context": "Managing Service Accounts | Then, delete the Secret you now know the name of:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14778"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"example-automated-thing\",\"namespace\":\"examplens\"}} \n creationTimestamp: \"2019-07-21T07:07:07Z\"\n name: example-automated-thing\n namespace: examplens\n resourceVersion: \"777\"\n selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing\n uid: f23fd170-66f2-4697-b049-e1e266b7f835\nsecrets:\n - name: example-automated-thing-token-zyxwv", "completion": "This yaml code Managing Service Accounts | Then, delete the Secret you now know the name of:", "language": "yaml", "type": "service-definition_explanation", "context": 
"Managing Service Accounts | Then, delete the Secret you now know the name of:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14779"} {"prompt": "Explain this bash code:\n\nkubectl -n examplens delete secret/example-automated-thing-token-zyxwv", "completion": "This bash code Managing Service Accounts | Short lived ServiceAccount tokens automatically expire after the time-limit specified during their creation. There is no central record of tokens issued, so there is no way to revoke individual tokens", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Service Accounts | Short lived ServiceAccount tokens automatically expire after the time-limit specified during their creation. There is no central record of tokens issued, so there is no way to revoke individual tokens", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14780"} {"prompt": "Complete this text code:\n\nrpc Metadata(MetadataRequest) returns (MetadataResponse) {}\n\nmessage MetadataRequest {}\n\nmessage MetadataResponse {\n // used by kube-apiserver for defaulting/validation of JWT lifetime while accounting for configuration flag values:\n // 1. `--service-account-max-token-expiration`\n // 2. 
`--service-account-extend-token-expiration`", "completion": " //\n // * If `--service-account-max-token-expiration` is greater than `max_token_expiration_seconds`, kube-apiserver treats that as misconfiguration and exits.\n // * If `--service-account-max-token-expiration` is not explicitly set, kube-apiserver defaults to `max_token_expiration_seconds`.\n // * If `--service-account-extend-token-expiration` is true, the extended expiration is `min(1 year, max_token_expiration_seconds)`.\n //\n // `max_token_expiration_seconds` must be at least 600s.\n int64 max_token_expiration_seconds = 1;\n}", "language": "text", "type": "code-example", "context": "Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14781"} {"prompt": "Explain this text code:\n\nrpc Metadata(MetadataRequest) returns (MetadataResponse) {}\n\nmessage MetadataRequest {}\n\nmessage MetadataResponse {\n // used by kube-apiserver for defaulting/validation of JWT lifetime while accounting for configuration flag values:\n // 1. `--service-account-max-token-expiration`\n // 2. 
`--service-account-extend-token-expiration`\n //\n // * If `--service-account-max-token-expiration` is greater than `max_token_expiration_seconds`, kube-apiserver treats that as misconfiguration and exits.\n // * If `--service-account-max-token-expiration` is not explicitly set, kube-apiserver defaults to `max_token_expiration_seconds`.\n // * If `--service-account-extend-token-expiration` is true, the extended expiration is `min(1 year, max_token_expiration_seconds)`.\n //\n // `max_token_expiration_seconds` must be at least 600s.\n int64 max_token_expiration_seconds = 1;\n}", "completion": "This text code Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14782"} {"prompt": "Complete this text code:\n\nrpc FetchKeys(FetchKeysRequest) returns (FetchKeysResponse) {}\n\nmessage FetchKeysRequest {}\n\nmessage FetchKeysResponse {\n repeated Key keys = 1;\n\n // The timestamp when this data was pulled from the authoritative source of\n // truth for verification keys.\n // kube-apiserver can export this from metrics, to enable end-to-end SLOs.\n google.protobuf.Timestamp data_timestamp = 2;\n\n // refresh interval for verification keys to pick changes if any.\n // any value <= 0 is considered a misconfiguration.\n int64 refresh_hint_seconds = 3;\n}", "completion": "\nmessage Key {\n // A unique identifier for this key.\n // Length must be <=1024.\n string key_id = 1;\n\n // The public key, PKIX-serialized.\n // must be a public key supported by kube-apiserver (currently RSA 256 or ECDSA 256/384/521)\n bytes 
key = 2;\n\n // Set only for keys that are not used to sign bound tokens.\n // eg: supported keys for legacy tokens.\n // If set, key is used for verification but excluded from OIDC discovery docs.\n // if set, external signer should not use this key to sign a JWT.\n bool exclude_from_oidc_discovery = 3;\n}", "language": "text", "type": "code-example", "context": "Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. kube-apiserver then assembles the JWT from the header, payload, and signature.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14783"} {"prompt": "Explain this text code:\n\nrpc FetchKeys(FetchKeysRequest) returns (FetchKeysResponse) {}\n\nmessage FetchKeysRequest {}\n\nmessage FetchKeysResponse {\n repeated Key keys = 1;\n\n // The timestamp when this data was pulled from the authoritative source of\n // truth for verification keys.\n // kube-apiserver can export this from metrics, to enable end-to-end SLOs.\n google.protobuf.Timestamp data_timestamp = 2;\n\n // refresh interval for verification keys to pick changes if any.\n // any value <= 0 is considered a misconfiguration.\n int64 refresh_hint_seconds = 3;\n}\n\nmessage Key {\n // A unique identifier for this key.\n // Length must be <=1024.\n string key_id = 1;\n\n // The public key, PKIX-serialized.\n // must be a public key supported by kube-apiserver (currently RSA 256 or ECDSA 256/384/521)\n bytes key = 2;\n\n // Set only for keys that are not used to sign bound tokens.\n // eg: supported keys for legacy tokens.\n // If set, key is used for verification but excluded from OIDC discovery docs.\n // if set, external signer should not use this key to sign a JWT.\n bool exclude_from_oidc_discovery = 3;\n}", "completion": "This text code Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. 
kube-apiserver then assembles the JWT from the header, payload, and signature.", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. kube-apiserver then assembles the JWT from the header, payload, and signature.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14784"} {"prompt": "Complete this text code:\n\nrpc Sign(SignJWTRequest) returns (SignJWTResponse) {}\n\nmessage SignJWTRequest {\n // URL-safe base64 wrapped payload to be signed.\n // Exactly as it appears in the second segment of the JWT\n string claims = 1;\n}\n\nmessage SignJWTResponse {\n // header must contain only alg, kid, typ claims.", "completion": " // typ must be “JWT”.\n // kid must be non-empty, <=1024 characters, and its corresponding public key should not be excluded from OIDC discovery.\n // alg must be one of the algorithms supported by kube-apiserver (currently RS256, ES256, ES384, ES512).\n // header cannot have any additional data that kube-apiserver does not recognize.\n // Already wrapped in URL-safe base64, exactly as it appears in the first segment of the JWT.\n string header = 1;\n\n // The signature for the JWT.\n // Already wrapped in URL-safe base64, exactly as it appears in the final segment of the JWT.\n string signature = 2;\n}", "language": "text", "type": "code-example", "context": "Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14785"} {"prompt": "Explain this text code:\n\nrpc Sign(SignJWTRequest) returns (SignJWTResponse) {}\n\nmessage SignJWTRequest {\n // URL-safe base64 wrapped payload to be signed.\n // Exactly as it appears in the second segment of the JWT\n string claims = 
1;\n}\n\nmessage SignJWTResponse {\n // header must contain only alg, kid, typ claims.\n // typ must be “JWT”.\n // kid must be non-empty, <=1024 characters, and its corresponding public key should not be excluded from OIDC discovery.\n // alg must be one of the algorithms supported by kube-apiserver (currently RS256, ES256, ES384, ES512).\n // header cannot have any additional data that kube-apiserver does not recognize.\n // Already wrapped in URL-safe base64, exactly as it appears in the first segment of the JWT.\n string header = 1;\n\n // The signature for the JWT.\n // Already wrapped in URL-safe base64, exactly as it appears in the final segment of the JWT.\n string signature = 2;\n}", "completion": "This text code Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14786"} {"prompt": "Explain this shell code:\n\nkubectl create token my-sa --bound-object-kind=\"Pod\" --bound-object-name=\"test-pod\"", "completion": "This shell code Managing Service Accounts | Copy this token into a new file named tokenreview.yaml:", "language": "shell", "type": "code-example_explanation", "context": "Managing Service Accounts | Copy this token into a new file named tokenreview.yaml:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14787"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview", "completion": "spec:\n token: <token from step 2>", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | Submit this resource to the apiserver for review:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14788"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nspec:\n token: <token from step 2>", "completion": "This yaml code Managing Service Accounts | Submit this resource to the apiserver for review:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | Submit this resource to the apiserver for review:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14789"} {"prompt": "Explain this shell code:\n\n# use '-o yaml' to inspect the output\nkubectl create -o yaml -f tokenreview.yaml", "completion": "This shell code Managing Service Accounts | You should see an output like below:", "language": "shell", "type": "code-example_explanation", "context": "Managing Service Accounts | You should see an output like below:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14790"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nmetadata:\n creationTimestamp: null\nspec:\n token: <token>\nstatus:\n audiences:\n - https://kubernetes.default.svc.cluster.local\n authenticated: true\n user:\n extra:\n authentication.kubernetes.io/credential-id:\n - JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc", "completion": " authentication.kubernetes.io/node-name:\n - kind-control-plane\n authentication.kubernetes.io/node-uid:\n - 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6\n authentication.kubernetes.io/pod-name:\n - test-pod\n authentication.kubernetes.io/pod-uid:\n - e87dbbd6-3d7e-45db-aafb-72b24627dff5\n groups:\n - system:serviceaccounts\n - system:serviceaccounts:default\n - system:authenticated\n uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e\n username: system:serviceaccount:default:my-sa", 
"language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14791"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nmetadata:\n creationTimestamp: null\nspec:\n token: <token>\nstatus:\n audiences:\n - https://kubernetes.default.svc.cluster.local\n authenticated: true\n user:\n extra:\n authentication.kubernetes.io/credential-id:\n - JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc\n authentication.kubernetes.io/node-name:\n - kind-control-plane\n authentication.kubernetes.io/node-uid:\n - 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6\n authentication.kubernetes.io/pod-name:\n - test-pod\n authentication.kubernetes.io/pod-uid:\n - e87dbbd6-3d7e-45db-aafb-72b24627dff5\n groups:\n - system:serviceaccounts\n - system:serviceaccounts:default\n - system:authenticated\n uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e\n username: system:serviceaccount:default:my-sa", "completion": "This yaml code Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14792"} {"prompt": "Complete this json code:\n\n{\n \"aud\": [\n 
\"https://my-audience.example.com\"\n ],\n \"exp\": 1729605240,\n \"iat\": 1729601640,\n \"iss\": \"https://my-cluster.example.com\",\n \"jti\": \"aed34954-b33a-4142-b1ec-389d6bbb4936\",\n \"kubernetes.io\": {\n \"namespace\": \"my-namespace\",\n \"node\": {\n \"name\": \"my-node\",\n \"uid\": \"646e7c5e-32d6-4d42-9dbd-e504e6cbe6b1\"", "completion": " },\n \"pod\": {\n \"name\": \"my-pod\",\n \"uid\": \"5e0bd49b-f040-43b0-99b7-22765a53f7f3\"\n },\n \"serviceaccount\": {\n \"name\": \"my-serviceaccount\",\n \"uid\": \"14ee3fa4-a7e2-420f-9f9a-dbc4507c3798\"\n }\n },\n \"nbf\": 1729601640,\n \"sub\": \"system:serviceaccount:my-namespace:my-serviceaccount\"\n}", "language": "json", "type": "api-response", "context": "Managing Service Accounts | The aud and iss fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14793"} {"prompt": "Explain this json code:\n\n{\n \"aud\": [\n \"https://my-audience.example.com\"\n ],\n \"exp\": 1729605240,\n \"iat\": 1729601640,\n \"iss\": \"https://my-cluster.example.com\",\n \"jti\": \"aed34954-b33a-4142-b1ec-389d6bbb4936\",\n \"kubernetes.io\": {\n \"namespace\": \"my-namespace\",\n \"node\": {\n \"name\": \"my-node\",\n \"uid\": \"646e7c5e-32d6-4d42-9dbd-e504e6cbe6b1\"\n },\n \"pod\": {\n \"name\": \"my-pod\",\n \"uid\": \"5e0bd49b-f040-43b0-99b7-22765a53f7f3\"\n },\n \"serviceaccount\": {\n \"name\": \"my-serviceaccount\",\n \"uid\": \"14ee3fa4-a7e2-420f-9f9a-dbc4507c3798\"\n }\n },\n \"nbf\": 1729601640,\n \"sub\": \"system:serviceaccount:my-namespace:my-serviceaccount\"\n}", "completion": "This json code Managing Service Accounts | The aud and iss fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "language": "json", "type": "api-response_explanation", "context": "Managing Service Accounts | The aud and iss 
fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14794"} {"prompt": "Complete this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n sources:\n - serviceAccountToken:\n path: token # must match the path the app expects\n - configMap:\n items:", "completion": " - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14795"} {"prompt": "Explain this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n sources:\n - serviceAccountToken:\n path: token # must match the path the app expects\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "completion": "This yaml code Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. 
The three sources are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14796"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " name: build-robot\n namespace: default\nsecrets:\n - name: build-robot-secret # usually NOT present for a manually generated token", "language": "yaml", "type": "service-definition", "context": "Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). T", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14797"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\n namespace: default\nsecrets:\n - name: build-robot-secret # usually NOT present for a manually generated token", "completion": "This yaml code Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). T", "language": "yaml", "type": "service-definition_explanation", "context": "Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). 
T", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14798"} {"prompt": "Explain this text code:\n\nauthentication.k8s.io/legacy-token-invalidated: <secret name>/<namespace>", "completion": "This text code Managing Service Accounts | When receiving this validation error, users can update the Secret to remove the kubernetes.io/legacy-token-invalid-since label to temporarily allow use of this token.", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | When receiving this validation error, users can update the Secret to remove the kubernetes.io/legacy-token-invalid-since label to temporarily allow use of this token.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14799"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n namespace: default", "completion": " labels:\n kubernetes.io/legacy-token-last-used: 2022-10-24\n kubernetes.io/legacy-token-invalid-since: 2023-10-25\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14800"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n namespace: default\n labels:\n kubernetes.io/legacy-token-last-used: 2022-10-24\n kubernetes.io/legacy-token-invalid-since: 2023-10-25\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token", "completion": "This 
yaml code Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14801"} {"prompt": "Explain this text code:\n\nkube-system/kube-apiserver-legacy-service-account-token-tracking", "completion": "This text code Managing Service Accounts | The legacy ServiceAccount token cleaner runs as part of the kube-controller-manager and checks every 24 hours to see if any auto-generated legacy ServiceAccount token has not been used in a specified", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | The legacy ServiceAccount token cleaner runs as part of the kube-controller-manager and checks every 24 hours to see if any auto-generated legacy ServiceAccount token has not been used in a specified", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14802"} {"prompt": "Complete this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n defaultMode: 420 # decimal equivalent of octal 0644\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:", "completion": " items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three 
sources:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14803"} {"prompt": "Explain this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n defaultMode: 420 # decimal equivalent of octal 0644\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "completion": "This yaml code Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three sources:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three sources:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14804"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token", "completion": "metadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | To create a Secret based on this example, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14805"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token\nmetadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "completion": "This yaml code Managing Service Accounts | To create a Secret based on this example, run:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | To create a Secret 
based on this example, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14806"} {"prompt": "Explain this shell code:\n\nkubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml", "completion": "This shell code Managing Service Accounts | To see the details for that Secret, run:", "language": "shell", "type": "code-example_explanation", "context": "Managing Service Accounts | To see the details for that Secret, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14807"} {"prompt": "Complete this text code:\n\nName: mysecretname\nNamespace: examplens\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name=myserviceaccount\n kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64\n", "completion": "Type: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1362 bytes\nnamespace: 9 bytes\ntoken: ...", "language": "text", "type": "code-example", "context": "Managing Service Accounts | If you launch a new Pod into the examplens namespace, it can use the myserviceaccount service-account-token Secret that you just created.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14808"} {"prompt": "Explain this text code:\n\nName: mysecretname\nNamespace: examplens\nLabels: <none>\nAnnotations: kubernetes.io/service-account.name=myserviceaccount\n kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64\n\nType: kubernetes.io/service-account-token\n\nData\n====\nca.crt: 1362 bytes\nnamespace: 9 bytes\ntoken: ...", "completion": "This text code Managing Service Accounts | If you launch a new Pod into the examplens namespace, it can use the myserviceaccount service-account-token Secret that you just created.", "language": "text", "type": "code-example_explanation", 
"context": "Managing Service Accounts | If you launch a new Pod into the examplens namespace, it can use the myserviceaccount service-account-token Secret that you just created.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14809"} {"prompt": "Explain this shell code:\n\n# This assumes that you already have a namespace named 'examplens'\nkubectl -n examplens get serviceaccount/example-automated-thing -o yaml", "completion": "This shell code Managing Service Accounts | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Managing Service Accounts | The output is similar to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14810"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"example-automated-thing\",\"namespace\":\"examplens\"}} \n creationTimestamp: \"2019-07-21T07:07:07Z\"", "completion": " name: example-automated-thing\n namespace: examplens\n resourceVersion: \"777\"\n selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing\n uid: f23fd170-66f2-4697-b049-e1e266b7f835\nsecrets:\n - name: example-automated-thing-token-zyxwv", "language": "yaml", "type": "service-definition", "context": "Managing Service Accounts | Then, delete the Secret you now know the name of:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14811"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n 
{\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"example-automated-thing\",\"namespace\":\"examplens\"}} \n creationTimestamp: \"2019-07-21T07:07:07Z\"\n name: example-automated-thing\n namespace: examplens\n resourceVersion: \"777\"\n selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing\n uid: f23fd170-66f2-4697-b049-e1e266b7f835\nsecrets:\n - name: example-automated-thing-token-zyxwv", "completion": "This yaml code Managing Service Accounts | Then, delete the Secret you now know the name of:", "language": "yaml", "type": "service-definition_explanation", "context": "Managing Service Accounts | Then, delete the Secret you now know the name of:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14812"} {"prompt": "Explain this shell code:\n\nkubectl -n examplens delete secret/example-automated-thing-token-zyxwv", "completion": "This shell code Managing Service Accounts | Short lived ServiceAccount tokens automatically expire after the time-limit specified during their creation. There is no central record of tokens issued, so there is no way to revoke individual tokens", "language": "shell", "type": "code-example_explanation", "context": "Managing Service Accounts | Short lived ServiceAccount tokens automatically expire after the time-limit specified during their creation. There is no central record of tokens issued, so there is no way to revoke individual tokens", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14813"} {"prompt": "Complete this proto code:\n\nrpc Metadata(MetadataRequest) returns (MetadataResponse) {}\n\nmessage MetadataRequest {}\n\nmessage MetadataResponse {\n // used by kube-apiserver for defaulting/validation of JWT lifetime while accounting for configuration flag values:\n // 1. `--service-account-max-token-expiration`\n // 2. 
`--service-account-extend-token-expiration`", "completion": " //\n // * If `--service-account-max-token-expiration` is greater than `max_token_expiration_seconds`, kube-apiserver treats that as misconfiguration and exits.\n // * If `--service-account-max-token-expiration` is not explicitly set, kube-apiserver defaults to `max_token_expiration_seconds`.\n // * If `--service-account-extend-token-expiration` is true, the extended expiration is `min(1 year, max_token_expiration_seconds)`.\n //\n // `max_token_expiration_seconds` must be at least 600s.\n int64 max_token_expiration_seconds = 1;\n}", "language": "proto", "type": "code-example", "context": "Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14814"} {"prompt": "Explain this proto code:\n\nrpc Metadata(MetadataRequest) returns (MetadataResponse) {}\n\nmessage MetadataRequest {}\n\nmessage MetadataResponse {\n // used by kube-apiserver for defaulting/validation of JWT lifetime while accounting for configuration flag values:\n // 1. `--service-account-max-token-expiration`\n // 2. 
`--service-account-extend-token-expiration`\n //\n // * If `--service-account-max-token-expiration` is greater than `max_token_expiration_seconds`, kube-apiserver treats that as misconfiguration and exits.\n // * If `--service-account-max-token-expiration` is not explicitly set, kube-apiserver defaults to `max_token_expiration_seconds`.\n // * If `--service-account-extend-token-expiration` is true, the extended expiration is `min(1 year, max_token_expiration_seconds)`.\n //\n // `max_token_expiration_seconds` must be at least 600s.\n int64 max_token_expiration_seconds = 1;\n}", "completion": "This proto code Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "language": "proto", "type": "code-example_explanation", "context": "Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14815"} {"prompt": "Complete this proto code:\n\nrpc FetchKeys(FetchKeysRequest) returns (FetchKeysResponse) {}\n\nmessage FetchKeysRequest {}\n\nmessage FetchKeysResponse {\n repeated Key keys = 1;\n\n // The timestamp when this data was pulled from the authoritative source of\n // truth for verification keys.\n // kube-apiserver can export this from metrics, to enable end-to-end SLOs.\n google.protobuf.Timestamp data_timestamp = 2;\n\n // refresh interval for verification keys to pick changes if any.\n // any value <= 0 is considered a misconfiguration.\n int64 refresh_hint_seconds = 3;\n}", "completion": "\nmessage Key {\n // A unique identifier for this key.\n // Length must be <=1024.\n string key_id = 1;\n\n // The public key, PKIX-serialized.\n // must be a public key supported by kube-apiserver (currently RSA 256 or ECDSA 256/384/521)\n 
bytes key = 2;\n\n // Set only for keys that are not used to sign bound tokens.\n // eg: supported keys for legacy tokens.\n // If set, key is used for verification but excluded from OIDC discovery docs.\n // if set, external signer should not use this key to sign a JWT.\n bool exclude_from_oidc_discovery = 3;\n}", "language": "proto", "type": "code-example", "context": "Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. kube-apiserver then assembles the JWT from the header, payload, and signature.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14816"} {"prompt": "Explain this proto code:\n\nrpc FetchKeys(FetchKeysRequest) returns (FetchKeysResponse) {}\n\nmessage FetchKeysRequest {}\n\nmessage FetchKeysResponse {\n repeated Key keys = 1;\n\n // The timestamp when this data was pulled from the authoritative source of\n // truth for verification keys.\n // kube-apiserver can export this from metrics, to enable end-to-end SLOs.\n google.protobuf.Timestamp data_timestamp = 2;\n\n // refresh interval for verification keys to pick changes if any.\n // any value <= 0 is considered a misconfiguration.\n int64 refresh_hint_seconds = 3;\n}\n\nmessage Key {\n // A unique identifier for this key.\n // Length must be <=1024.\n string key_id = 1;\n\n // The public key, PKIX-serialized.\n // must be a public key supported by kube-apiserver (currently RSA 256 or ECDSA 256/384/521)\n bytes key = 2;\n\n // Set only for keys that are not used to sign bound tokens.\n // eg: supported keys for legacy tokens.\n // If set, key is used for verification but excluded from OIDC discovery docs.\n // if set, external signer should not use this key to sign a JWT.\n bool exclude_from_oidc_discovery = 3;\n}", "completion": "This proto code Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. 
kube-apiserver then assembles the JWT from the header, payload, and signature.", "language": "proto", "type": "code-example_explanation", "context": "Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. kube-apiserver then assembles the JWT from the header, payload, and signature.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14817"} {"prompt": "Complete this proto code:\n\nrpc Sign(SignJWTRequest) returns (SignJWTResponse) {}\n\nmessage SignJWTRequest {\n // URL-safe base64 wrapped payload to be signed.\n // Exactly as it appears in the second segment of the JWT\n string claims = 1;\n}\n\nmessage SignJWTResponse {\n // header must contain only alg, kid, typ claims.", "completion": " // typ must be “JWT”.\n // kid must be non-empty, <=1024 characters, and its corresponding public key should not be excluded from OIDC discovery.\n // alg must be one of the algorithms supported by kube-apiserver (currently RS256, ES256, ES384, ES512).\n // header cannot have any additional data that kube-apiserver does not recognize.\n // Already wrapped in URL-safe base64, exactly as it appears in the first segment of the JWT.\n string header = 1;\n\n // The signature for the JWT.\n // Already wrapped in URL-safe base64, exactly as it appears in the final segment of the JWT.\n string signature = 2;\n}", "language": "proto", "type": "code-example", "context": "Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14818"} {"prompt": "Explain this proto code:\n\nrpc Sign(SignJWTRequest) returns (SignJWTResponse) {}\n\nmessage SignJWTRequest {\n // URL-safe base64 wrapped payload to be signed.\n // Exactly as it appears in the second segment of the JWT\n string claims = 
1;\n}\n\nmessage SignJWTResponse {\n // header must contain only alg, kid, typ claims.\n // typ must be “JWT”.\n // kid must be non-empty, <=1024 characters, and its corresponding public key should not be excluded from OIDC discovery.\n // alg must be one of the algorithms supported by kube-apiserver (currently RS256, ES256, ES384, ES512).\n // header cannot have any additional data that kube-apiserver does not recognize.\n // Already wrapped in URL-safe base64, exactly as it appears in the first segment of the JWT.\n string header = 1;\n\n // The signature for the JWT.\n // Already wrapped in URL-safe base64, exactly as it appears in the final segment of the JWT.\n string signature = 2;\n}", "completion": "This proto code Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "language": "proto", "type": "code-example_explanation", "context": "Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14819"} {"prompt": "Explain this bash code:\n\nkubectl create token my-sa --bound-object-kind=\"Pod\" --bound-object-name=\"test-pod\"", "completion": "This bash code Managing Service Accounts | Copy this token into a new file named tokenreview.yaml:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Service Accounts | Copy this token into a new file named tokenreview.yaml:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14820"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview", "completion": "spec:\n token: <token from step 2>", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | Submit this resource to the apiserver for review:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14821"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nspec:\n token: <token from step 2>", "completion": "This yaml code Managing Service Accounts | Submit this resource to the apiserver for review:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | Submit this resource to the apiserver for review:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14822"} {"prompt": "Explain this text code:\n\n# use '-o yaml' to inspect the output\nkubectl create -o yaml -f tokenreview.yaml", "completion": "This text code Managing Service Accounts | You should see an output like below:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | You should see an output like below:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14823"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nmetadata:\n creationTimestamp: null\nspec:\n token: <token>\nstatus:\n audiences:\n - https://kubernetes.default.svc.cluster.local\n authenticated: true\n user:\n extra:\n authentication.kubernetes.io/credential-id:\n - JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc", "completion": " authentication.kubernetes.io/node-name:\n - kind-control-plane\n authentication.kubernetes.io/node-uid:\n - 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6\n authentication.kubernetes.io/pod-name:\n - test-pod\n authentication.kubernetes.io/pod-uid:\n - e87dbbd6-3d7e-45db-aafb-72b24627dff5\n groups:\n - system:serviceaccounts\n - system:serviceaccounts:default\n - system:authenticated\n uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e\n username: system:serviceaccount:default:my-sa", "language": 
"yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14824"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: TokenReview\nmetadata:\n creationTimestamp: null\nspec:\n token: <token>\nstatus:\n audiences:\n - https://kubernetes.default.svc.cluster.local\n authenticated: true\n user:\n extra:\n authentication.kubernetes.io/credential-id:\n - JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc\n authentication.kubernetes.io/node-name:\n - kind-control-plane\n authentication.kubernetes.io/node-uid:\n - 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6\n authentication.kubernetes.io/pod-name:\n - test-pod\n authentication.kubernetes.io/pod-uid:\n - e87dbbd6-3d7e-45db-aafb-72b24627dff5\n groups:\n - system:serviceaccounts\n - system:serviceaccounts:default\n - system:authenticated\n uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e\n username: system:serviceaccount:default:my-sa", "completion": "This yaml code Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | The schema for the Kubernetes-specific claims within JWT tokens is not currently documented, however the relevant code area can be found in the serviceaccount package in the Kubernetes codebase.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14825"} {"prompt": "Complete this json code:\n\n{\n \"aud\": [\n 
\"https://my-audience.example.com\"\n ],\n \"exp\": 1729605240,\n \"iat\": 1729601640,\n \"iss\": \"https://my-cluster.example.com\",\n \"jti\": \"aed34954-b33a-4142-b1ec-389d6bbb4936\",\n \"kubernetes.io\": {\n \"namespace\": \"my-namespace\",\n \"node\": {\n \"name\": \"my-node\",\n \"uid\": \"646e7c5e-32d6-4d42-9dbd-e504e6cbe6b1\"", "completion": " },\n \"pod\": {\n \"name\": \"my-pod\",\n \"uid\": \"5e0bd49b-f040-43b0-99b7-22765a53f7f3\"\n },\n \"serviceaccount\": {\n \"name\": \"my-serviceaccount\",\n \"uid\": \"14ee3fa4-a7e2-420f-9f9a-dbc4507c3798\"\n }\n },\n \"nbf\": 1729601640,\n \"sub\": \"system:serviceaccount:my-namespace:my-serviceaccount\"\n}", "language": "json", "type": "api-response", "context": "Managing Service Accounts | The aud and iss fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14826"} {"prompt": "Explain this json code:\n\n{\n \"aud\": [\n \"https://my-audience.example.com\"\n ],\n \"exp\": 1729605240,\n \"iat\": 1729601640,\n \"iss\": \"https://my-cluster.example.com\",\n \"jti\": \"aed34954-b33a-4142-b1ec-389d6bbb4936\",\n \"kubernetes.io\": {\n \"namespace\": \"my-namespace\",\n \"node\": {\n \"name\": \"my-node\",\n \"uid\": \"646e7c5e-32d6-4d42-9dbd-e504e6cbe6b1\"\n },\n \"pod\": {\n \"name\": \"my-pod\",\n \"uid\": \"5e0bd49b-f040-43b0-99b7-22765a53f7f3\"\n },\n \"serviceaccount\": {\n \"name\": \"my-serviceaccount\",\n \"uid\": \"14ee3fa4-a7e2-420f-9f9a-dbc4507c3798\"\n }\n },\n \"nbf\": 1729601640,\n \"sub\": \"system:serviceaccount:my-namespace:my-serviceaccount\"\n}", "completion": "This json code Managing Service Accounts | The aud and iss fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "language": "json", "type": "api-response_explanation", "context": "Managing Service Accounts | The aud and iss 
fields in this JWT may differ between different Kubernetes clusters depending on your configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14827"} {"prompt": "Complete this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n sources:\n - serviceAccountToken:\n path: token # must match the path the app expects\n - configMap:\n items:", "completion": " - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14828"} {"prompt": "Explain this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n sources:\n - serviceAccountToken:\n path: token # must match the path the app expects\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "completion": "This yaml code Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. 
The three sources are:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14829"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " name: build-robot\n namespace: default\nsecrets:\n - name: build-robot-secret # usually NOT present for a manually generated token", "language": "yaml", "type": "service-definition", "context": "Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). T", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14830"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: build-robot\n namespace: default\nsecrets:\n - name: build-robot-secret # usually NOT present for a manually generated token", "completion": "This yaml code Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). T", "language": "yaml", "type": "service-definition_explanation", "context": "Managing Service Accounts | Beginning from version 1.29, legacy ServiceAccount tokens that were generated automatically will be marked as invalid if they remain unused for a certain period of time (set to default at one year). 
T", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14831"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n namespace: default", "completion": " labels:\n kubernetes.io/legacy-token-last-used: 2022-10-24\n kubernetes.io/legacy-token-invalid-since: 2023-10-25\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14832"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: build-robot-secret\n namespace: default\n labels:\n kubernetes.io/legacy-token-last-used: 2022-10-24\n kubernetes.io/legacy-token-invalid-since: 2023-10-25\n annotations:\n kubernetes.io/service-account.name: build-robot\ntype: kubernetes.io/service-account-token", "completion": "This yaml code Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | A ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named \"default\" exists in every active namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14833"} {"prompt": "Complete this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n defaultMode: 420 # decimal equivalent of octal 
0644\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:", "completion": " items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three sources:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14834"} {"prompt": "Explain this yaml code:\n\n...\n - name: kube-api-access-<random-suffix>\n projected:\n defaultMode: 420 # decimal equivalent of octal 0644\n sources:\n - serviceAccountToken:\n expirationSeconds: 3607\n path: token\n - configMap:\n items:\n - key: ca.crt\n path: ca.crt\n name: kube-root-ca.crt\n - downwardAPI:\n items:\n - fieldRef:\n apiVersion: v1\n fieldPath: metadata.namespace\n path: namespace", "completion": "This yaml code Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three sources:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | That manifest snippet defines a projected volume that combines information from three sources:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14835"} {"prompt": "Complete this yaml code:\n\nsecret/serviceaccount/mysecretname.yaml\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token", "completion": "metadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | To create a Secret based on this example, run:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14836"} {"prompt": "Explain this yaml code:\n\nsecret/serviceaccount/mysecretname.yaml\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token\nmetadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "completion": "This yaml code Managing Service Accounts | To create a Secret based on this example, run:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | To create a Secret based on this example, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14837"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token", "completion": "metadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Service Accounts | To create a Secret based on this example, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14838"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\ntype: kubernetes.io/service-account-token\nmetadata:\n name: mysecretname\n annotations:\n kubernetes.io/service-account.name: myserviceaccount", "completion": "This yaml code Managing Service Accounts | To create a Secret based on this example, run:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Service Accounts | To create a Secret based on this example, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14839"} {"prompt": "Explain this bash code:\n\nkubectl -n examplens create -f 
https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml", "completion": "This bash code Managing Service Accounts | To see the details for that Secret, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Service Accounts | To see the details for that Secret, run:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14840"} {"prompt": "Explain this text code:\n\n# This assumes that you already have a namespace named 'examplens'\nkubectl -n examplens get serviceaccount/example-automated-thing -o yaml", "completion": "This text code Managing Service Accounts | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | The output is similar to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14841"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"example-automated-thing\",\"namespace\":\"examplens\"}} \n creationTimestamp: \"2019-07-21T07:07:07Z\"", "completion": " name: example-automated-thing\n namespace: examplens\n resourceVersion: \"777\"\n selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing\n uid: f23fd170-66f2-4697-b049-e1e266b7f835\nsecrets:\n - name: example-automated-thing-token-zyxwv", "language": "yaml", "type": "service-definition", "context": "Managing Service Accounts | Then, delete the Secret you now know the name of:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14842"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n 
kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"example-automated-thing\",\"namespace\":\"examplens\"}} \n creationTimestamp: \"2019-07-21T07:07:07Z\"\n name: example-automated-thing\n namespace: examplens\n resourceVersion: \"777\"\n selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing\n uid: f23fd170-66f2-4697-b049-e1e266b7f835\nsecrets:\n - name: example-automated-thing-token-zyxwv", "completion": "This yaml code Managing Service Accounts | Then, delete the Secret you now know the name of:", "language": "yaml", "type": "service-definition_explanation", "context": "Managing Service Accounts | Then, delete the Secret you now know the name of:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14843"} {"prompt": "Explain this bash code:\n\nkubectl -n examplens delete secret/example-automated-thing-token-zyxwv", "completion": "This bash code Managing Service Accounts | Short lived ServiceAccount tokens automatically expire after the time-limit specified during their creation. There is no central record of tokens issued, so there is no way to revoke individual tokens", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Service Accounts | Short lived ServiceAccount tokens automatically expire after the time-limit specified during their creation. 
There is no central record of tokens issued, so there is no way to revoke individual tokens", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14844"} {"prompt": "Complete this text code:\n\nrpc Metadata(MetadataRequest) returns (MetadataResponse) {}\n\nmessage MetadataRequest {}\n\nmessage MetadataResponse {\n // used by kube-apiserver for defaulting/validation of JWT lifetime while accounting for configuration flag values:\n // 1. `--service-account-max-token-expiration`\n // 2. `--service-account-extend-token-expiration`", "completion": " //\n // * If `--service-account-max-token-expiration` is greater than `max_token_expiration_seconds`, kube-apiserver treats that as misconfiguration and exits.\n // * If `--service-account-max-token-expiration` is not explicitly set, kube-apiserver defaults to `max_token_expiration_seconds`.\n // * If `--service-account-extend-token-expiration` is true, the extended expiration is `min(1 year, max_token_expiration_seconds)`.\n //\n // `max_token_expiration_seconds` must be at least 600s.\n int64 max_token_expiration_seconds = 1;\n}", "language": "text", "type": "code-example", "context": "Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14845"} {"prompt": "Explain this text code:\n\nrpc Metadata(MetadataRequest) returns (MetadataResponse) {}\n\nmessage MetadataRequest {}\n\nmessage MetadataResponse {\n // used by kube-apiserver for defaulting/validation of JWT lifetime while accounting for configuration flag values:\n // 1. `--service-account-max-token-expiration`\n // 2. 
`--service-account-extend-token-expiration`\n //\n // * If `--service-account-max-token-expiration` is greater than `max_token_expiration_seconds`, kube-apiserver treats that as misconfiguration and exits.\n // * If `--service-account-max-token-expiration` is not explicitly set, kube-apiserver defaults to `max_token_expiration_seconds`.\n // * If `--service-account-extend-token-expiration` is true, the extended expiration is `min(1 year, max_token_expiration_seconds)`.\n //\n // `max_token_expiration_seconds` must be at least 600s.\n int64 max_token_expiration_seconds = 1;\n}", "completion": "This text code Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | FetchKeys returns the set of public keys that are trusted to sign Kubernetes service account tokens. Kube-apiserver will call this RPC:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14846"} {"prompt": "Complete this text code:\n\nrpc FetchKeys(FetchKeysRequest) returns (FetchKeysResponse) {}\n\nmessage FetchKeysRequest {}\n\nmessage FetchKeysResponse {\n repeated Key keys = 1;\n\n // The timestamp when this data was pulled from the authoritative source of\n // truth for verification keys.\n // kube-apiserver can export this from metrics, to enable end-to-end SLOs.\n google.protobuf.Timestamp data_timestamp = 2;\n\n // refresh interval for verification keys to pick changes if any.\n // any value <= 0 is considered a misconfiguration.\n int64 refresh_hint_seconds = 3;\n}", "completion": "\nmessage Key {\n // A unique identifier for this key.\n // Length must be <=1024.\n string key_id = 1;\n\n // The public key, PKIX-serialized.\n // must be a public key supported by kube-apiserver (currently RSA 256 or ECDSA 256/384/521)\n bytes 
key = 2;\n\n // Set only for keys that are not used to sign bound tokens.\n // eg: supported keys for legacy tokens.\n // If set, key is used for verification but excluded from OIDC discovery docs.\n // if set, external signer should not use this key to sign a JWT.\n bool exclude_from_oidc_discovery = 3;\n}", "language": "text", "type": "code-example", "context": "Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. kube-apiserver then assembles the JWT from the header, payload, and signature.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14847"} {"prompt": "Explain this text code:\n\nrpc FetchKeys(FetchKeysRequest) returns (FetchKeysResponse) {}\n\nmessage FetchKeysRequest {}\n\nmessage FetchKeysResponse {\n repeated Key keys = 1;\n\n // The timestamp when this data was pulled from the authoritative source of\n // truth for verification keys.\n // kube-apiserver can export this from metrics, to enable end-to-end SLOs.\n google.protobuf.Timestamp data_timestamp = 2;\n\n // refresh interval for verification keys to pick changes if any.\n // any value <= 0 is considered a misconfiguration.\n int64 refresh_hint_seconds = 3;\n}\n\nmessage Key {\n // A unique identifier for this key.\n // Length must be <=1024.\n string key_id = 1;\n\n // The public key, PKIX-serialized.\n // must be a public key supported by kube-apiserver (currently RSA 256 or ECDSA 256/384/521)\n bytes key = 2;\n\n // Set only for keys that are not used to sign bound tokens.\n // eg: supported keys for legacy tokens.\n // If set, key is used for verification but excluded from OIDC discovery docs.\n // if set, external signer should not use this key to sign a JWT.\n bool exclude_from_oidc_discovery = 3;\n}", "completion": "This text code Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. 
kube-apiserver then assembles the JWT from the header, payload, and signature.", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | Sign takes a serialized JWT payload, and returns the serialized header and signature. kube-apiserver then assembles the JWT from the header, payload, and signature.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14848"} {"prompt": "Complete this text code:\n\nrpc Sign(SignJWTRequest) returns (SignJWTResponse) {}\n\nmessage SignJWTRequest {\n // URL-safe base64 wrapped payload to be signed.\n // Exactly as it appears in the second segment of the JWT\n string claims = 1;\n}\n\nmessage SignJWTResponse {\n // header must contain only alg, kid, typ claims.", "completion": " // typ must be “JWT”.\n // kid must be non-empty, <=1024 characters, and its corresponding public key should not be excluded from OIDC discovery.\n // alg must be one of the algorithms supported by kube-apiserver (currently RS256, ES256, ES384, ES512).\n // header cannot have any additional data that kube-apiserver does not recognize.\n // Already wrapped in URL-safe base64, exactly as it appears in the first segment of the JWT.\n string header = 1;\n\n // The signature for the JWT.\n // Already wrapped in URL-safe base64, exactly as it appears in the final segment of the JWT.\n string signature = 2;\n}", "language": "text", "type": "code-example", "context": "Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_completion_14849"} {"prompt": "Explain this text code:\n\nrpc Sign(SignJWTRequest) returns (SignJWTResponse) {}\n\nmessage SignJWTRequest {\n // URL-safe base64 wrapped payload to be signed.\n // Exactly as it appears in the second segment of the JWT\n string claims = 
1;\n}\n\nmessage SignJWTResponse {\n // header must contain only alg, kid, typ claims.\n // typ must be “JWT”.\n // kid must be non-empty, <=1024 characters, and its corresponding public key should not be excluded from OIDC discovery.\n // alg must be one of the algorithms supported by kube-apiserver (currently RS256, ES256, ES384, ES512).\n // header cannot have any additional data that kube-apiserver does not recognize.\n // Already wrapped in URL-safe base64, exactly as it appears in the first segment of the JWT.\n string header = 1;\n\n // The signature for the JWT.\n // Already wrapped in URL-safe base64, exactly as it appears in the final segment of the JWT.\n string signature = 2;\n}", "completion": "This text code Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "language": "text", "type": "code-example_explanation", "context": "Managing Service Accounts | If you created a namespace examplens to experiment with, you can remove it:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/", "id": "k8s_00564_code_explain_14850"} {"prompt": "Explain this bash code:\n\nkubectl delete <type> <name>\nkubectl delete <type> -l <label>", "completion": "This bash code Imperative Management of Kubernetes Objects Using Configuration Files | You can use kubectl get -f to view information about an object that is described in a configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files | You can use kubectl get -f to view information about an object that is described in a configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14851"} {"prompt": "Explain this bash code:\n\nkubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml", "completion": "This bash code Imperative Management of Kubernetes 
Objects Using Configuration Files | Manually remove the status field from the object configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the object configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14852"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "template:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "language": "text", "type": "code-example", "context": "Imperative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_completion_14853"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"\ntemplate:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "This text code Imperative Management of Kubernetes Objects Using Configuration Files |", "language": "text", "type": "code-example_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14854"} {"prompt": "Explain this shell code:\n\nkubectl delete <type> <name>\nkubectl delete <type> -l <label>", "completion": "This shell code Imperative Management of Kubernetes Objects Using Configuration Files | You can use kubectl get -f to view information about an object that is described in a configuration file.", "language": "shell", "type": "code-example_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files | You can use kubectl get -f to 
view information about an object that is described in a configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14855"} {"prompt": "Explain this shell code:\n\nkubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml", "completion": "This shell code Imperative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the object configuration file.", "language": "shell", "type": "code-example_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the object configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14856"} {"prompt": "Complete this yaml code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "template:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Imperative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_completion_14857"} {"prompt": "Explain this yaml code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"\ntemplate:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "This yaml code Imperative Management of Kubernetes Objects Using Configuration Files |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14858"} {"prompt": "Explain this bash code:\n\nkubectl delete <type> <name>\nkubectl delete <type> -l <label>", 
"completion": "This bash code Imperative Management of Kubernetes Objects Using Configuration Files | You can use kubectl get -f to view information about an object that is described in a configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files | You can use kubectl get -f to view information about an object that is described in a configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14859"} {"prompt": "Explain this bash code:\n\nkubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml", "completion": "This bash code Imperative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the object configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the object configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14860"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "template:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "language": "text", "type": "code-example", "context": "Imperative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_completion_14861"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"\ntemplate:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "This text code Imperative Management of Kubernetes Objects Using Configuration Files |", 
"language": "text", "type": "code-example_explanation", "context": "Imperative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/", "id": "k8s_00565_code_explain_14862"} {"prompt": "Explain this text code:\n\n# Mark node \"foo\" as unschedulable\n kubectl cordon foo", "completion": "This text code kubectl cordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.", "language": "text", "type": "code-example_explanation", "context": "kubectl cordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/", "id": "k8s_00566_code_explain_14863"} {"prompt": "Explain this text code:\n\n# Mark node \"foo\" as unschedulable\n kubectl cordon foo", "completion": "This text code kubectl cordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.", "language": "text", "type": "code-example_explanation", "context": "kubectl cordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/", "id": "k8s_00566_code_explain_14864"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic NAME [--type=string] [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]", "completion": "This bash code kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/", "id": "k8s_00568_code_explain_14865"} {"prompt": "Complete this text code:\n\n# Create a new secret named my-secret with keys for each file in folder bar\n kubectl create secret generic my-secret --from-file=path/to/bar\n \n # Create a new secret named my-secret with specified keys instead of names on disk\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub\n \n # Create a new secret named my-secret with key1=supersecret and key2=topsecret", "completion": " kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret\n \n # Create a new secret named my-secret using a combination of a file and a literal\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret\n \n # Create a new secret named my-secret from env files\n kubectl create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "language": "text", "type": 
"code-example", "context": "kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/", "id": "k8s_00568_code_completion_14866"} {"prompt": "Explain this text code:\n\n# Create a new secret named my-secret with keys for each file in folder bar\n kubectl create secret generic my-secret --from-file=path/to/bar\n \n # Create a new secret named my-secret with specified keys instead of names on disk\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub\n \n # Create a new secret named my-secret with key1=supersecret and key2=topsecret\n kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret\n \n # Create a new secret named my-secret using a combination of a file and a literal\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret\n \n # Create a new secret named my-secret from env files\n kubectl create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "completion": "This text code kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/", "id": "k8s_00568_code_explain_14867"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic NAME [--type=string] [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]", "completion": "This bash code kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/", "id": "k8s_00568_code_explain_14868"} {"prompt": "Complete this text code:\n\n# Create a new secret named my-secret with keys for each file in folder bar\n kubectl create secret generic my-secret --from-file=path/to/bar\n \n # Create a new secret named my-secret with specified keys instead of names on disk\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub\n \n # Create a new secret named my-secret with key1=supersecret and key2=topsecret", "completion": " kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret\n \n # Create a new secret named my-secret using a combination of a file and a literal\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret\n \n # Create a new secret named my-secret from env files\n kubectl create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "language": "text", "type": 
"code-example", "context": "kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/", "id": "k8s_00568_code_completion_14869"} {"prompt": "Explain this text code:\n\n# Create a new secret named my-secret with keys for each file in folder bar\n kubectl create secret generic my-secret --from-file=path/to/bar\n \n # Create a new secret named my-secret with specified keys instead of names on disk\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub\n \n # Create a new secret named my-secret with key1=supersecret and key2=topsecret\n kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret\n \n # Create a new secret named my-secret using a combination of a file and a literal\n kubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret\n \n # Create a new secret named my-secret from env files\n kubectl create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "completion": "This text code kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret generic | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_generic/", "id": "k8s_00568_code_explain_14870"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling image \"nginx\"", "completion": " Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "language": "text", "type": "code-example", "context": "Container Lifecycle Hooks |", "source": "https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/", "id": "k8s_00569_code_completion_14871"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling 
image \"nginx\"\n Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "completion": "This text code Container Lifecycle Hooks |", "language": "text", "type": "code-example_explanation", "context": "Container Lifecycle Hooks |", "source": "https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/", "id": "k8s_00569_code_explain_14872"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling image \"nginx\"", "completion": " Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: 
container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "language": "text", "type": "code-example", "context": "Container Lifecycle Hooks |", "source": "https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/", "id": "k8s_00569_code_completion_14873"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling image \"nginx\"\n Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "completion": "This text code Container Lifecycle Hooks |", "language": "text", "type": "code-example_explanation", "context": "Container Lifecycle Hooks |", "source": 
"https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/", "id": "k8s_00569_code_explain_14874"} {"prompt": "Explain this bash code:\n\nkubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]", "completion": "This bash code kubectl exec | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl exec | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/", "id": "k8s_00570_code_explain_14875"} {"prompt": "Complete this text code:\n\n# Get output from running the 'date' command from pod mypod, using the first container by default\n kubectl exec mypod -- date\n \n # Get output from running the 'date' command in ruby-container from pod mypod\n kubectl exec mypod -c ruby-container -- date\n \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl exec mypod -c ruby-container -i -t -- bash -il\n \n # List contents of /usr from the first container of pod mypod and sort by modification time", "completion": " # If the command you want to execute in the pod has any flags in common (e.g. 
-i),\n # you must use two dashes (--) to separate your command's flags/arguments\n # Also note, do not surround your command and its flags/arguments with quotes\n # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\")\n kubectl exec mypod -i -t -- ls -t /usr\n \n # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default\n kubectl exec deploy/mydeployment -- date\n \n # Get output from running 'date' command from the first pod of the service myservice, using the first container by default\n kubectl exec svc/myservice -- date", "language": "text", "type": "code-example", "context": "kubectl exec | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/", "id": "k8s_00570_code_completion_14876"} {"prompt": "Explain this text code:\n\n# Get output from running the 'date' command from pod mypod, using the first container by default\n kubectl exec mypod -- date\n \n # Get output from running the 'date' command in ruby-container from pod mypod\n kubectl exec mypod -c ruby-container -- date\n \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl exec mypod -c ruby-container -i -t -- bash -il\n \n # List contents of /usr from the first container of pod mypod and sort by modification time\n # If the command you want to execute in the pod has any flags in common (e.g. 
-i),\n # you must use two dashes (--) to separate your command's flags/arguments\n # Also note, do not surround your command and its flags/arguments with quotes\n # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\")\n kubectl exec mypod -i -t -- ls -t /usr\n \n # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default\n kubectl exec deploy/mydeployment -- date\n \n # Get output from running 'date' command from the first pod of the service myservice, using the first container by default\n kubectl exec svc/myservice -- date", "completion": "This text code kubectl exec | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "text", "type": "code-example_explanation", "context": "kubectl exec | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/", "id": "k8s_00570_code_explain_14877"} {"prompt": "Explain this bash code:\n\nkubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]", "completion": "This bash code kubectl exec | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl exec | Container name. 
If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/", "id": "k8s_00570_code_explain_14878"} {"prompt": "Complete this text code:\n\n# Get output from running the 'date' command from pod mypod, using the first container by default\n kubectl exec mypod -- date\n \n # Get output from running the 'date' command in ruby-container from pod mypod\n kubectl exec mypod -c ruby-container -- date\n \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl exec mypod -c ruby-container -i -t -- bash -il\n \n # List contents of /usr from the first container of pod mypod and sort by modification time", "completion": " # If the command you want to execute in the pod has any flags in common (e.g. -i),\n # you must use two dashes (--) to separate your command's flags/arguments\n # Also note, do not surround your command and its flags/arguments with quotes\n # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\")\n kubectl exec mypod -i -t -- ls -t /usr\n \n # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default\n kubectl exec deploy/mydeployment -- date\n \n # Get output from running 'date' command from the first pod of the service myservice, using the first container by default\n kubectl exec svc/myservice -- date", "language": "text", "type": "code-example", "context": "kubectl exec | Container name. 
If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/", "id": "k8s_00570_code_completion_14879"} {"prompt": "Explain this text code:\n\n# Get output from running the 'date' command from pod mypod, using the first container by default\n kubectl exec mypod -- date\n \n # Get output from running the 'date' command in ruby-container from pod mypod\n kubectl exec mypod -c ruby-container -- date\n \n # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n # and sends stdout/stderr from 'bash' back to the client\n kubectl exec mypod -c ruby-container -i -t -- bash -il\n \n # List contents of /usr from the first container of pod mypod and sort by modification time\n # If the command you want to execute in the pod has any flags in common (e.g. -i),\n # you must use two dashes (--) to separate your command's flags/arguments\n # Also note, do not surround your command and its flags/arguments with quotes\n # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\")\n kubectl exec mypod -i -t -- ls -t /usr\n \n # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default\n kubectl exec deploy/mydeployment -- date\n \n # Get output from running 'date' command from the first pod of the service myservice, using the first container by default\n kubectl exec svc/myservice -- date", "completion": "This text code kubectl exec | Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "language": "text", "type": "code-example_explanation", "context": "kubectl exec | Container name. 
If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/", "id": "k8s_00570_code_explain_14880"} {"prompt": "Complete this text code:\n\n### H3\n\nThis is in an H3 section.\n\n#### H4\n\nThis is in an H4 section.", "completion": "\n##### H5\n\nThis is in an H5 section.\n\n###### H6\n\nThis is in an H6 section.", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14881"} {"prompt": "Explain this text code:\n\n### H3\n\nThis is in an H3 section.\n\n#### H4\n\nThis is in an H4 section.\n\n##### H5\n\nThis is in an H5 section.\n\n###### H6\n\nThis is in an H6 section.", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14882"} {"prompt": "Complete this text code:\n\n- **bold**\n- _italic_\n- ***bold italic***\n- ~~strikethrough~~\n- <u>underline</u>", "completion": "- _<u>underline italic</u>_\n- **<u>underline bold</u>**\n- ***<u>underline bold italic</u>***\n- `monospace text`\n- **`monospace bold`**", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14883"} {"prompt": "Explain this text code:\n\n- **bold**\n- _italic_\n- ***bold italic***\n- ~~strikethrough~~\n- <u>underline</u>\n- _<u>underline italic</u>_\n- **<u>underline bold</u>**\n- ***<u>underline bold italic</u>***\n- `monospace text`\n- **`monospace bold`**", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", "context": "Docs 
smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14884"} {"prompt": "Complete this text code:\n\n- This is a list item.\n* This is another list item in the same list.\n- You can mix `-` and `*`.", "completion": " - To make a sub-item, indent two spaces.\n - This is a sub-sub-item. Indent two more spaces.\n - Another sub-item.", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14885"} {"prompt": "Explain this text code:\n\n- This is a list item.\n* This is another list item in the same list.\n- You can mix `-` and `*`.\n - To make a sub-item, indent two spaces.\n - This is a sub-sub-item. Indent two more spaces.\n - Another sub-item.", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14886"} {"prompt": "Explain this json code:\n\n{{< alert color=\"warning\" >}}This is a warning.{{< /alert >}}", "completion": "This json code Docs smoke test page | To format a link, put the link text inside square brackets, followed by the link target in parentheses.", "language": "json", "type": "api-response_explanation", "context": "Docs smoke test page | To format a link, put the link text inside square brackets, followed by the link target in parentheses.", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14887"} {"prompt": "Explain this json code:\n\n[![pencil icon](/images/pencil.png)](https://kubernetes.io)", "completion": "This json code Docs smoke test page | Produces:", "language": "json", "type": "api-response_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14888"} {"prompt": "Complete this 
text code:\n\n<table>\n<thead>\n <tr>\n <th>Heading cell 1</th>\n <th>Heading cell 2</th>\n </tr>\n</thead>", "completion": "<tbody>\n <tr>\n <td>Body cell 1</td>\n <td>Body cell 2</td>\n </tr>\n</tbody>\n</table>", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14889"} {"prompt": "Explain this text code:\n\n<table>\n<thead>\n <tr>\n <th>Heading cell 1</th>\n <th>Heading cell 2</th>\n </tr>\n</thead>\n<tbody>\n <tr>\n <td>Body cell 1</td>\n <td>Body cell 2</td>\n </tr>\n</tbody>\n</table>", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14890"} {"prompt": "Complete this json code:\n\n{{< mermaid >}}\ngraph TD;\n A-->B;", "completion": " A-->C;\n B-->D;\n C-->D;\n{{</ mermaid >}}", "language": "json", "type": "api-response", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14891"} {"prompt": "Explain this json code:\n\n{{< mermaid >}}\ngraph TD;\n A-->B;\n A-->C;\n B-->D;\n C-->D;\n{{</ mermaid >}}", "completion": "This json code Docs smoke test page | Produces:", "language": "json", "type": "api-response_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14892"} {"prompt": "Complete this json code:\n\n{{< mermaid >}}\nsequenceDiagram\n Alice ->> Bob: Hello Bob, how are you?\n Bob-->>John: How about you John?\n Bob--x Alice: I am good thanks!", "completion": " Bob-x John: I am good thanks!\n Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.\n\n Bob-->Alice: Checking with John...\n Alice->John: Yes... 
John, how are you?\n{{</ mermaid >}}", "language": "json", "type": "api-response", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14893"} {"prompt": "Explain this json code:\n\n{{< mermaid >}}\nsequenceDiagram\n Alice ->> Bob: Hello Bob, how are you?\n Bob-->>John: How about you John?\n Bob--x Alice: I am good thanks!\n Bob-x John: I am good thanks!\n Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.\n\n Bob-->Alice: Checking with John...\n Alice->John: Yes... John, how are you?\n{{</ mermaid >}}", "completion": "This json code Docs smoke test page | Produces:", "language": "json", "type": "api-response_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14894"} {"prompt": "Complete this markdown code:\n\n### H3\n\nThis is in an H3 section.\n\n#### H4\n\nThis is in an H4 section.", "completion": "\n##### H5\n\nThis is in an H5 section.\n\n###### H6\n\nThis is in an H6 section.", "language": "markdown", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14895"} {"prompt": "Explain this markdown code:\n\n### H3\n\nThis is in an H3 section.\n\n#### H4\n\nThis is in an H4 section.\n\n##### H5\n\nThis is in an H5 section.\n\n###### H6\n\nThis is in an H6 section.", "completion": "This markdown code Docs smoke test page | Produces:", "language": "markdown", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14896"} {"prompt": "Complete this markdown code:\n\n- **bold**\n- _italic_\n- ***bold italic***\n- ~~strikethrough~~\n- <u>underline</u>", "completion": "- _<u>underline italic</u>_\n- **<u>underline bold</u>**\n- ***<u>underline bold 
italic</u>***\n- `monospace text`\n- **`monospace bold`**", "language": "markdown", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14897"} {"prompt": "Explain this markdown code:\n\n- **bold**\n- _italic_\n- ***bold italic***\n- ~~strikethrough~~\n- <u>underline</u>\n- _<u>underline italic</u>_\n- **<u>underline bold</u>**\n- ***<u>underline bold italic</u>***\n- `monospace text`\n- **`monospace bold`**", "completion": "This markdown code Docs smoke test page | Produces:", "language": "markdown", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14898"} {"prompt": "Complete this markdown code:\n\n- This is a list item.\n* This is another list item in the same list.\n- You can mix `-` and `*`.", "completion": " - To make a sub-item, indent two spaces.\n - This is a sub-sub-item. Indent two more spaces.\n - Another sub-item.", "language": "markdown", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14899"} {"prompt": "Explain this markdown code:\n\n- This is a list item.\n* This is another list item in the same list.\n- You can mix `-` and `*`.\n - To make a sub-item, indent two spaces.\n - This is a sub-sub-item. 
Indent two more spaces.\n - Another sub-item.", "completion": "This markdown code Docs smoke test page | Produces:", "language": "markdown", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14900"} {"prompt": "Explain this none code:\n\n{{< alert color=\"warning\" >}}This is a warning.{{< /alert >}}", "completion": "This none code Docs smoke test page | To format a link, put the link text inside square brackets, followed by the link target in parentheses.", "language": "none", "type": "code-example_explanation", "context": "Docs smoke test page | To format a link, put the link text inside square brackets, followed by the link target in parentheses.", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14901"} {"prompt": "Explain this text code:\n\n<a href=\"https://kubernetes.io/\">Link to Kubernetes.io</a>", "completion": "This text code Docs smoke test page | To format an image, use similar syntax to links, but add a leading ! character. The square brackets contain the image's alt text. Try to always use alt text so that people using screen readers can get", "language": "text", "type": "code-example_explanation", "context": "Docs smoke test page | To format an image, use similar syntax to links, but add a leading ! character. The square brackets contain the image's alt text. 
Try to always use alt text so that people using screen readers can get", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14902"} {"prompt": "Explain this markdown code:\n\n[![pencil icon](/images/pencil.png)](https://kubernetes.io)", "completion": "This markdown code Docs smoke test page | Produces:", "language": "markdown", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14903"} {"prompt": "Complete this html code:\n\n<table>\n<thead>\n <tr>\n <th>Heading cell 1</th>\n <th>Heading cell 2</th>\n </tr>\n</thead>", "completion": "<tbody>\n <tr>\n <td>Body cell 1</td>\n <td>Body cell 2</td>\n </tr>\n</tbody>\n</table>", "language": "html", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14904"} {"prompt": "Explain this html code:\n\n<table>\n<thead>\n <tr>\n <th>Heading cell 1</th>\n <th>Heading cell 2</th>\n </tr>\n</thead>\n<tbody>\n <tr>\n <td>Body cell 1</td>\n <td>Body cell 2</td>\n </tr>\n</tbody>\n</table>", "completion": "This html code Docs smoke test page | Produces:", "language": "html", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14905"} {"prompt": "Complete this markdown code:\n\n{{< mermaid >}}\ngraph TD;\n A-->B;", "completion": " A-->C;\n B-->D;\n C-->D;\n{{</ mermaid >}}", "language": "markdown", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14906"} {"prompt": "Explain this markdown code:\n\n{{< mermaid >}}\ngraph TD;\n A-->B;\n A-->C;\n B-->D;\n C-->D;\n{{</ mermaid >}}", "completion": "This markdown code Docs smoke test page | Produces:", "language": "markdown", "type": "code-example_explanation", 
"context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14907"} {"prompt": "Complete this markdown code:\n\n{{< mermaid >}}\nsequenceDiagram\n Alice ->> Bob: Hello Bob, how are you?\n Bob-->>John: How about you John?\n Bob--x Alice: I am good thanks!", "completion": " Bob-x John: I am good thanks!\n Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.\n\n Bob-->Alice: Checking with John...\n Alice->John: Yes... John, how are you?\n{{</ mermaid >}}", "language": "markdown", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14908"} {"prompt": "Explain this markdown code:\n\n{{< mermaid >}}\nsequenceDiagram\n Alice ->> Bob: Hello Bob, how are you?\n Bob-->>John: How about you John?\n Bob--x Alice: I am good thanks!\n Bob-x John: I am good thanks!\n Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.\n\n Bob-->Alice: Checking with John...\n Alice->John: Yes... 
John, how are you?\n{{</ mermaid >}}", "completion": "This markdown code Docs smoke test page | Produces:", "language": "markdown", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14909"} {"prompt": "Complete this text code:\n\n### H3\n\nThis is in an H3 section.\n\n#### H4\n\nThis is in an H4 section.", "completion": "\n##### H5\n\nThis is in an H5 section.\n\n###### H6\n\nThis is in an H6 section.", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14910"} {"prompt": "Explain this text code:\n\n### H3\n\nThis is in an H3 section.\n\n#### H4\n\nThis is in an H4 section.\n\n##### H5\n\nThis is in an H5 section.\n\n###### H6\n\nThis is in an H6 section.", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14911"} {"prompt": "Complete this text code:\n\n- **bold**\n- _italic_\n- ***bold italic***\n- ~~strikethrough~~\n- <u>underline</u>", "completion": "- _<u>underline italic</u>_\n- **<u>underline bold</u>**\n- ***<u>underline bold italic</u>***\n- `monospace text`\n- **`monospace bold`**", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14912"} {"prompt": "Explain this text code:\n\n- **bold**\n- _italic_\n- ***bold italic***\n- ~~strikethrough~~\n- <u>underline</u>\n- _<u>underline italic</u>_\n- **<u>underline bold</u>**\n- ***<u>underline bold italic</u>***\n- `monospace text`\n- **`monospace bold`**", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", 
"context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14913"} {"prompt": "Complete this text code:\n\n- This is a list item.\n* This is another list item in the same list.\n- You can mix `-` and `*`.", "completion": " - To make a sub-item, indent two spaces.\n - This is a sub-sub-item. Indent two more spaces.\n - Another sub-item.", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14914"} {"prompt": "Explain this text code:\n\n- This is a list item.\n* This is another list item in the same list.\n- You can mix `-` and `*`.\n - To make a sub-item, indent two spaces.\n - This is a sub-sub-item. Indent two more spaces.\n - Another sub-item.", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14915"} {"prompt": "Explain this json code:\n\n[![pencil icon](/images/pencil.png)](https://kubernetes.io)", "completion": "This json code Docs smoke test page | Produces:", "language": "json", "type": "api-response_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14916"} {"prompt": "Complete this text code:\n\n<table>\n<thead>\n <tr>\n <th>Heading cell 1</th>\n <th>Heading cell 2</th>\n </tr>\n</thead>", "completion": "<tbody>\n <tr>\n <td>Body cell 1</td>\n <td>Body cell 2</td>\n </tr>\n</tbody>\n</table>", "language": "text", "type": "code-example", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14917"} {"prompt": "Explain this text code:\n\n<table>\n<thead>\n <tr>\n <th>Heading cell 1</th>\n <th>Heading cell 2</th>\n 
</tr>\n</thead>\n<tbody>\n <tr>\n <td>Body cell 1</td>\n <td>Body cell 2</td>\n </tr>\n</tbody>\n</table>", "completion": "This text code Docs smoke test page | Produces:", "language": "text", "type": "code-example_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14918"} {"prompt": "Complete this json code:\n\n{{< mermaid >}}\ngraph TD;\n A-->B;", "completion": " A-->C;\n B-->D;\n C-->D;\n{{</ mermaid >}}", "language": "json", "type": "api-response", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14919"} {"prompt": "Explain this json code:\n\n{{< mermaid >}}\ngraph TD;\n A-->B;\n A-->C;\n B-->D;\n C-->D;\n{{</ mermaid >}}", "completion": "This json code Docs smoke test page | Produces:", "language": "json", "type": "api-response_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14920"} {"prompt": "Complete this json code:\n\n{{< mermaid >}}\nsequenceDiagram\n Alice ->> Bob: Hello Bob, how are you?\n Bob-->>John: How about you John?\n Bob--x Alice: I am good thanks!", "completion": " Bob-x John: I am good thanks!\n Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.\n\n Bob-->Alice: Checking with John...\n Alice->John: Yes... 
John, how are you?\n{{</ mermaid >}}", "language": "json", "type": "api-response", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_completion_14921"} {"prompt": "Explain this json code:\n\n{{< mermaid >}}\nsequenceDiagram\n Alice ->> Bob: Hello Bob, how are you?\n Bob-->>John: How about you John?\n Bob--x Alice: I am good thanks!\n Bob-x John: I am good thanks!\n Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.\n\n Bob-->Alice: Checking with John...\n Alice->John: Yes... John, how are you?\n{{</ mermaid >}}", "completion": "This json code Docs smoke test page | Produces:", "language": "json", "type": "api-response_explanation", "context": "Docs smoke test page | Produces:", "source": "https://kubernetes.io/docs/test/", "id": "k8s_00572_code_explain_14922"} {"prompt": "Explain this text code:\n\ncurl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/Install-Containerd.ps1", "completion": "This text code Adding Windows worker nodes | Then run the following command, but first replace CONTAINERD_VERSION with a recent release from the containerd repository. The version must not have a v prefix. For example, use 1.7.22 instead of v1.7", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Then run the following command, but first replace CONTAINERD_VERSION with a recent release from the containerd repository. The version must not have a v prefix. 
For example, use 1.7.22 instead of v1.7", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14923"} {"prompt": "Explain this text code:\n\n.\\Install-Containerd.ps1 -ContainerDVersion CONTAINERD_VERSION", "completion": "This text code Adding Windows worker nodes | Run the following commands to install kubeadm and the kubelet:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Run the following commands to install kubeadm and the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14924"} {"prompt": "Explain this text code:\n\ncurl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/PrepareNode.ps1\n.\\PrepareNode.ps1 -KubernetesVersion v1.34", "completion": "This text code Adding Windows worker nodes | Run the command that was output by kubeadm init. For example:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Run the command that was output by kubeadm init. 
For example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14925"} {"prompt": "Explain this text code:\n\nkubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This text code Adding Windows worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14926"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token list", "completion": "This text code Adding Windows worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14927"} {"prompt": "Complete this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:", "completion": " signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "language": "text", "type": "code-example", "context": "Adding Windows worker nodes | By default, node join tokens expire after 24 hours. 
If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_completion_14928"} {"prompt": "Explain this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:\n signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "completion": "This text code Adding Windows worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14929"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token create", "completion": "This text code Adding Windows worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14930"} {"prompt": "Explain this text code:\n\nsudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \\\n openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This text code Adding Windows worker nodes | The output is 
similar to:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14931"} {"prompt": "Explain this text code:\n\n8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78", "completion": "This text code Adding Windows worker nodes | The output of the kubeadm join command should look something like:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output of the kubeadm join command should look something like:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14932"} {"prompt": "Complete this json code:\n\n[preflight] Running pre-flight checks\n\n... (log output of join workflow) ...\n\nNode join complete:", "completion": "* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "language": "json", "type": "api-response", "context": "Adding Windows worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_completion_14933"} {"prompt": "Explain this json code:\n\n[preflight] Running pre-flight checks\n\n... 
(log output of join workflow) ...\n\nNode join complete:\n* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "completion": "This json code Adding Windows worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "language": "json", "type": "api-response_explanation", "context": "Adding Windows worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14934"} {"prompt": "Explain this PowerShell code:\n\ncurl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/Install-Containerd.ps1", "completion": "This PowerShell code Adding Windows worker nodes | Then run the following command, but first replace CONTAINERD_VERSION with a recent release from the containerd repository. The version must not have a v prefix. For example, use 1.7.22 instead of v1.7", "language": "PowerShell", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Then run the following command, but first replace CONTAINERD_VERSION with a recent release from the containerd repository. The version must not have a v prefix. 
For example, use 1.7.22 instead of v1.7", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14935"} {"prompt": "Explain this PowerShell code:\n\n.\\Install-Containerd.ps1 -ContainerDVersion CONTAINERD_VERSION", "completion": "This PowerShell code Adding Windows worker nodes | Run the following commands to install kubeadm and the kubelet:", "language": "PowerShell", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Run the following commands to install kubeadm and the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14936"} {"prompt": "Explain this PowerShell code:\n\ncurl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/PrepareNode.ps1\n.\\PrepareNode.ps1 -KubernetesVersion v1.34", "completion": "This PowerShell code Adding Windows worker nodes | Run the command that was output by kubeadm init. For example:", "language": "PowerShell", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Run the command that was output by kubeadm init. 
For example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14937"} {"prompt": "Explain this bash code:\n\nkubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This bash code Adding Windows worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Windows worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14938"} {"prompt": "Explain this bash code:\n\n# Run this on a control plane node\nsudo kubeadm token list", "completion": "This bash code Adding Windows worker nodes | The output is similar to this:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Windows worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14939"} {"prompt": "Complete this console code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:", "completion": " signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "language": "console", "type": "code-example", "context": "Adding Windows worker nodes | By default, node join tokens expire after 24 hours. 
If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_completion_14940"} {"prompt": "Explain this console code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:\n signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "completion": "This console code Adding Windows worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "language": "console", "type": "code-example_explanation", "context": "Adding Windows worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14941"} {"prompt": "Explain this bash code:\n\n# Run this on a control plane node\nsudo kubeadm token create", "completion": "This bash code Adding Windows worker nodes | The output is similar to this:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Windows worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14942"} {"prompt": "Explain this bash code:\n\nsudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \\\n openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This bash code Adding Windows worker nodes | The output 
is similar to:", "language": "bash", "type": "shell-command_explanation", "context": "Adding Windows worker nodes | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14943"} {"prompt": "Explain this console code:\n\n8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78", "completion": "This console code Adding Windows worker nodes | The output of the kubeadm join command should look something like:", "language": "console", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output of the kubeadm join command should look something like:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14944"} {"prompt": "Complete this json code:\n\n[preflight] Running pre-flight checks\n\n... (log output of join workflow) ...\n\nNode join complete:", "completion": "* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "language": "json", "type": "api-response", "context": "Adding Windows worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_completion_14945"} {"prompt": "Explain this json code:\n\n[preflight] Running pre-flight checks\n\n... 
(log output of join workflow) ...\n\nNode join complete:\n* Certificate signing request sent to control-plane and response\n received.\n* Kubelet informed of new secure connection details.\n\nRun 'kubectl get nodes' on control-plane to see this machine join.", "completion": "This json code Adding Windows worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "language": "json", "type": "api-response_explanation", "context": "Adding Windows worker nodes | A few seconds later, you should notice this node in the output from kubectl get nodes. (for example, run kubectl on a control plane node).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14946"} {"prompt": "Explain this text code:\n\ncurl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/Install-Containerd.ps1", "completion": "This text code Adding Windows worker nodes | Then run the following command, but first replace CONTAINERD_VERSION with a recent release from the containerd repository. The version must not have a v prefix. For example, use 1.7.22 instead of v1.7", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Then run the following command, but first replace CONTAINERD_VERSION with a recent release from the containerd repository. The version must not have a v prefix. 
For example, use 1.7.22 instead of v1.7", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14947"} {"prompt": "Explain this text code:\n\n.\\Install-Containerd.ps1 -ContainerDVersion CONTAINERD_VERSION", "completion": "This text code Adding Windows worker nodes | Run the following commands to install kubeadm and the kubelet:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Run the following commands to install kubeadm and the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14948"} {"prompt": "Explain this text code:\n\ncurl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/PrepareNode.ps1\n.\\PrepareNode.ps1 -KubernetesVersion v1.34", "completion": "This text code Adding Windows worker nodes | Run the command that was output by kubeadm init. For example:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | Run the command that was output by kubeadm init. 
For example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14949"} {"prompt": "Explain this text code:\n\nkubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>", "completion": "This text code Adding Windows worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | If you do not have the token, you can get it by running the following command on the control plane node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14950"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token list", "completion": "This text code Adding Windows worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14951"} {"prompt": "Complete this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:", "completion": " signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "language": "text", "type": "code-example", "context": "Adding Windows worker nodes | By default, node join tokens expire after 24 hours. 
If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_completion_14952"} {"prompt": "Explain this text code:\n\nTOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS\n8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:\n signing token generated by bootstrappers:\n 'kubeadm init'. kubeadm:\n default-node-token", "completion": "This text code Adding Windows worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14953"} {"prompt": "Explain this text code:\n\n# Run this on a control plane node\nsudo kubeadm token create", "completion": "This text code Adding Windows worker nodes | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14954"} {"prompt": "Explain this text code:\n\nsudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | \\\n openssl dgst -sha256 -hex | sed 's/^.* //'", "completion": "This text code Adding Windows worker nodes | The output is 
similar to:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14955"} {"prompt": "Explain this text code:\n\n8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78", "completion": "This text code Adding Windows worker nodes | The output of the kubeadm join command should look something like:", "language": "text", "type": "code-example_explanation", "context": "Adding Windows worker nodes | The output of the kubeadm join command should look something like:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/", "id": "k8s_00574_code_explain_14956"} {"prompt": "Explain this text code:\n\nStructuredAuthenticationConfigurationEgressSelector", "completion": "This text code Feature Gates | A feature can be in Alpha, Beta or GA stage. An Alpha feature means:", "language": "text", "type": "code-example_explanation", "context": "Feature Gates | A feature can be in Alpha, Beta or GA stage. 
An Alpha feature means:", "source": "https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/", "id": "k8s_00575_code_explain_14957"} {"prompt": "Explain this text code:\n\nStructuredAuthenticationConfigurationEgressSelector", "completion": "This text code Feature Gates | StructuredAuthorizationConfiguration: Enable structured authorization configuration, so that cluster administrators can specify more than one authorization webhook in the API server handler chain.", "language": "text", "type": "code-example_explanation", "context": "Feature Gates | StructuredAuthorizationConfiguration: Enable structured authorization configuration, so that cluster administrators can specify more than one authorization webhook in the API server handler chain.", "source": "https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/", "id": "k8s_00575_code_explain_14958"} {"prompt": "Explain this text code:\n\n--runtime-config=storage.k8s.io/v1beta1/csistoragecapacities", "completion": "This text code Feature Gates |", "language": "text", "type": "code-example_explanation", "context": "Feature Gates |", "source": "https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/", "id": "k8s_00575_code_explain_14959"} {"prompt": "Explain this text code:\n\n# Start a new terminal, and leave this running.\nminikube dashboard", "completion": "This text code Hello Minikube | Now, switch back to the terminal where you ran minikube start.", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Now, switch back to the terminal where you ran minikube start.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14960"} {"prompt": "Explain this text code:\n\n# Start a new terminal, and leave this running.\nminikube dashboard --url", "completion": "This text code Hello Minikube | Now, you can use this URL and switch back to the terminal where you ran minikube 
start.", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Now, you can use this URL and switch back to the terminal where you ran minikube start.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14961"} {"prompt": "Explain this text code:\n\n# Run a test container image that includes a webserver\nkubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.53 -- /agnhost netexec --http-port=8080", "completion": "This text code Hello Minikube | View the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | View the Deployment:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14962"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nhello-node 1/1 1 1 1m", "completion": "This text code Hello Minikube | (It may take some time for the pod to become available. If you see \"0/1\", try again in a few seconds.)", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | (It may take some time for the pod to become available. 
If you see \"0/1\", try again in a few seconds.)", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14963"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nhello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m", "completion": "This text code Hello Minikube | View cluster events:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | View cluster events:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14964"} {"prompt": "Explain this text code:\n\nI0911 09:19:26.677397 1 log.go:195] Started HTTP server on port 8080\nI0911 09:19:26.677586 1 log.go:195] Started UDP server on port 8081", "completion": "This text code Hello Minikube | By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to ex", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. 
To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to ex", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14965"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment hello-node --type=LoadBalancer --port=8080", "completion": "This bash code Hello Minikube | The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Hello Minikube | The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14966"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nhello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m", "completion": "This text code Hello Minikube | On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. 
On minikube, the LoadBalancer type makes the Service accessible through the minikube", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14967"} {"prompt": "Complete this text code:\n\naddon-manager: enabled\ndashboard: enabled\ndefault-storageclass: enabled\nefk: disabled\nfreshpod: disabled\ngvisor: disabled\nhelm-tiller: disabled\ningress: disabled", "completion": "ingress-dns: disabled\nlogviewer: disabled\nmetrics-server: disabled\nnvidia-driver-installer: disabled\nnvidia-gpu-device-plugin: disabled\nregistry: disabled\nregistry-creds: disabled\nstorage-provisioner: enabled\nstorage-provisioner-gluster: disabled", "language": "text", "type": "code-example", "context": "Hello Minikube | Enable an addon, for example, metrics-server:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_completion_14968"} {"prompt": "Explain this text code:\n\naddon-manager: enabled\ndashboard: enabled\ndefault-storageclass: enabled\nefk: disabled\nfreshpod: disabled\ngvisor: disabled\nhelm-tiller: disabled\ningress: disabled\ningress-dns: disabled\nlogviewer: disabled\nmetrics-server: disabled\nnvidia-driver-installer: disabled\nnvidia-gpu-device-plugin: disabled\nregistry: disabled\nregistry-creds: disabled\nstorage-provisioner: enabled\nstorage-provisioner-gluster: disabled", "completion": "This text code Hello Minikube | Enable an addon, for example, metrics-server:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Enable an addon, for example, metrics-server:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14969"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m\npod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m\npod/metrics-server-67fb648c5 1/1 Running 0 26s\npod/etcd-minikube 1/1 Running 0 34m\npod/influxdb-grafana-b29w8 2/2 Running 0 
26s\npod/kube-addon-manager-minikube 1/1 Running 0 34m\npod/kube-apiserver-minikube 1/1 Running 0 34m\npod/kube-controller-manager-minikube 1/1 Running 0 34m", "completion": "pod/kube-proxy-rnlps 1/1 Running 0 34m\npod/kube-scheduler-minikube 1/1 Running 0 34m\npod/storage-provisioner 1/1 Running 0 34m\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/metrics-server ClusterIP 10.96.241.45 <none> 80/TCP 26s\nservice/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m\nservice/monitoring-grafana NodePort 10.99.24.54 <none> 80:30002/TCP 26s\nservice/monitoring-influxdb ClusterIP 10.111.169.94 <none> 8083/TCP,8086/TCP 26s", "language": "text", "type": "code-example", "context": "Hello Minikube | Check the output from metrics-server:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_completion_14970"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m\npod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m\npod/metrics-server-67fb648c5 1/1 Running 0 26s\npod/etcd-minikube 1/1 Running 0 34m\npod/influxdb-grafana-b29w8 2/2 Running 0 26s\npod/kube-addon-manager-minikube 1/1 Running 0 34m\npod/kube-apiserver-minikube 1/1 Running 0 34m\npod/kube-controller-manager-minikube 1/1 Running 0 34m\npod/kube-proxy-rnlps 1/1 Running 0 34m\npod/kube-scheduler-minikube 1/1 Running 0 34m\npod/storage-provisioner 1/1 Running 0 34m\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/metrics-server ClusterIP 10.96.241.45 <none> 80/TCP 26s\nservice/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m\nservice/monitoring-grafana NodePort 10.99.24.54 <none> 80:30002/TCP 26s\nservice/monitoring-influxdb ClusterIP 10.111.169.94 <none> 8083/TCP,8086/TCP 26s", "completion": "This text code Hello Minikube | Check the output from metrics-server:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Check the output from metrics-server:", "source": 
"https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14971"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) \nhello-node-ccf4b9788-4jn97 1m 6Mi", "completion": "This text code Hello Minikube | If you see the following message, wait, and try again:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | If you see the following message, wait, and try again:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14972"} {"prompt": "Explain this bash code:\n\nkubectl delete service hello-node\nkubectl delete deployment hello-node", "completion": "This bash code Hello Minikube | Stop the Minikube cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "Hello Minikube | Stop the Minikube cluster", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14973"} {"prompt": "Explain this shell code:\n\n# Start a new terminal, and leave this running.\nminikube dashboard", "completion": "This shell code Hello Minikube | Now, switch back to the terminal where you ran minikube start.", "language": "shell", "type": "code-example_explanation", "context": "Hello Minikube | Now, switch back to the terminal where you ran minikube start.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14974"} {"prompt": "Explain this shell code:\n\n# Start a new terminal, and leave this running.\nminikube dashboard --url", "completion": "This shell code Hello Minikube | Now, you can use this URL and switch back to the terminal where you ran minikube start.", "language": "shell", "type": "code-example_explanation", "context": "Hello Minikube | Now, you can use this URL and switch back to the terminal where you ran minikube start.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14975"} {"prompt": "Explain this 
shell code:\n\n# Run a test container image that includes a webserver\nkubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.53 -- /agnhost netexec --http-port=8080", "completion": "This shell code Hello Minikube | View the Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Hello Minikube | View the Deployment:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14976"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nhello-node 1/1 1 1 1m", "completion": "This text code Hello Minikube | (It may take some time for the pod to become available. If you see \"0/1\", try again in a few seconds.)", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | (It may take some time for the pod to become available. If you see \"0/1\", try again in a few seconds.)", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14977"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nhello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m", "completion": "This text code Hello Minikube | View cluster events:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | View cluster events:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14978"} {"prompt": "Explain this text code:\n\nI0911 09:19:26.677397 1 log.go:195] Started HTTP server on port 8080\nI0911 09:19:26.677586 1 log.go:195] Started UDP server on port 8081", "completion": "This text code Hello Minikube | By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. 
To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to ex", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to ex", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14979"} {"prompt": "Explain this shell code:\n\nkubectl expose deployment hello-node --type=LoadBalancer --port=8080", "completion": "This shell code Hello Minikube | The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.", "language": "shell", "type": "code-example_explanation", "context": "Hello Minikube | The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14980"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nhello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m", "completion": "This text code Hello Minikube | On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. 
On minikube, the LoadBalancer type makes the Service accessible through the minikube", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14981"} {"prompt": "Complete this text code:\n\naddon-manager: enabled\ndashboard: enabled\ndefault-storageclass: enabled\nefk: disabled\nfreshpod: disabled\ngvisor: disabled\nhelm-tiller: disabled\ningress: disabled", "completion": "ingress-dns: disabled\nlogviewer: disabled\nmetrics-server: disabled\nnvidia-driver-installer: disabled\nnvidia-gpu-device-plugin: disabled\nregistry: disabled\nregistry-creds: disabled\nstorage-provisioner: enabled\nstorage-provisioner-gluster: disabled", "language": "text", "type": "code-example", "context": "Hello Minikube | Enable an addon, for example, metrics-server:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_completion_14982"} {"prompt": "Explain this text code:\n\naddon-manager: enabled\ndashboard: enabled\ndefault-storageclass: enabled\nefk: disabled\nfreshpod: disabled\ngvisor: disabled\nhelm-tiller: disabled\ningress: disabled\ningress-dns: disabled\nlogviewer: disabled\nmetrics-server: disabled\nnvidia-driver-installer: disabled\nnvidia-gpu-device-plugin: disabled\nregistry: disabled\nregistry-creds: disabled\nstorage-provisioner: enabled\nstorage-provisioner-gluster: disabled", "completion": "This text code Hello Minikube | Enable an addon, for example, metrics-server:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Enable an addon, for example, metrics-server:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14983"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m\npod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m\npod/metrics-server-67fb648c5 1/1 Running 0 26s\npod/etcd-minikube 1/1 Running 0 34m\npod/influxdb-grafana-b29w8 2/2 Running 0 
26s\npod/kube-addon-manager-minikube 1/1 Running 0 34m\npod/kube-apiserver-minikube 1/1 Running 0 34m\npod/kube-controller-manager-minikube 1/1 Running 0 34m", "completion": "pod/kube-proxy-rnlps 1/1 Running 0 34m\npod/kube-scheduler-minikube 1/1 Running 0 34m\npod/storage-provisioner 1/1 Running 0 34m\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/metrics-server ClusterIP 10.96.241.45 <none> 80/TCP 26s\nservice/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m\nservice/monitoring-grafana NodePort 10.99.24.54 <none> 80:30002/TCP 26s\nservice/monitoring-influxdb ClusterIP 10.111.169.94 <none> 8083/TCP,8086/TCP 26s", "language": "text", "type": "code-example", "context": "Hello Minikube | Check the output from metrics-server:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_completion_14984"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m\npod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m\npod/metrics-server-67fb648c5 1/1 Running 0 26s\npod/etcd-minikube 1/1 Running 0 34m\npod/influxdb-grafana-b29w8 2/2 Running 0 26s\npod/kube-addon-manager-minikube 1/1 Running 0 34m\npod/kube-apiserver-minikube 1/1 Running 0 34m\npod/kube-controller-manager-minikube 1/1 Running 0 34m\npod/kube-proxy-rnlps 1/1 Running 0 34m\npod/kube-scheduler-minikube 1/1 Running 0 34m\npod/storage-provisioner 1/1 Running 0 34m\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/metrics-server ClusterIP 10.96.241.45 <none> 80/TCP 26s\nservice/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m\nservice/monitoring-grafana NodePort 10.99.24.54 <none> 80:30002/TCP 26s\nservice/monitoring-influxdb ClusterIP 10.111.169.94 <none> 8083/TCP,8086/TCP 26s", "completion": "This text code Hello Minikube | Check the output from metrics-server:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Check the output from metrics-server:", "source": 
"https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14985"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) \nhello-node-ccf4b9788-4jn97 1m 6Mi", "completion": "This text code Hello Minikube | If you see the following message, wait, and try again:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | If you see the following message, wait, and try again:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14986"} {"prompt": "Explain this shell code:\n\nkubectl delete service hello-node\nkubectl delete deployment hello-node", "completion": "This shell code Hello Minikube | Stop the Minikube cluster", "language": "shell", "type": "code-example_explanation", "context": "Hello Minikube | Stop the Minikube cluster", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14987"} {"prompt": "Explain this text code:\n\n# Start a new terminal, and leave this running.\nminikube dashboard", "completion": "This text code Hello Minikube | Now, switch back to the terminal where you ran minikube start.", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Now, switch back to the terminal where you ran minikube start.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14988"} {"prompt": "Explain this text code:\n\n# Start a new terminal, and leave this running.\nminikube dashboard --url", "completion": "This text code Hello Minikube | Now, you can use this URL and switch back to the terminal where you ran minikube start.", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | Now, you can use this URL and switch back to the terminal where you ran minikube start.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14989"} {"prompt": "Explain this text 
code:\n\n# Run a test container image that includes a webserver\nkubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.53 -- /agnhost netexec --http-port=8080", "completion": "This text code Hello Minikube | View the Deployment:", "language": "text", "type": "code-example_explanation", "context": "Hello Minikube | View the Deployment:", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14990"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment hello-node --type=LoadBalancer --port=8080", "completion": "This bash code Hello Minikube | The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Hello Minikube | The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14991"} {"prompt": "Explain this bash code:\n\nkubectl delete service hello-node\nkubectl delete deployment hello-node", "completion": "This bash code Hello Minikube | Stop the Minikube cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "Hello Minikube | Stop the Minikube cluster", "source": "https://kubernetes.io/docs/tutorials/hello-minikube/", "id": "k8s_00576_code_explain_14992"} {"prompt": "Complete this text code:\n\nObject{\n spec: Object.spec{", "completion": " serviceAccountName: \"example\"\n }\n}", "language": "text", "type": "code-example", "context": "MutatingAdmissionPolicy v1beta1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_completion_14993"} {"prompt": "Explain this 
text code:\n\nObject{\n spec: Object.spec{\n serviceAccountName: \"example\"\n }\n}", "completion": "This text code MutatingAdmissionPolicy v1beta1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicy v1beta1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_explain_14994"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},", "completion": " JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicy v1beta1 | To define an object for the patch value, use Object types. For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_completion_14995"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},\n JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "completion": "This json code MutatingAdmissionPolicy v1beta1 | To define an object for the patch value, use Object types. For example:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicy v1beta1 | To define an object for the patch value, use Object types. 
For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_explain_14996"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicy v1beta1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_completion_14997"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "completion": "This json code MutatingAdmissionPolicy v1beta1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicy v1beta1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". 
For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_explain_14998"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicy v1beta1 | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_completion_14999"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "completion": "This json code MutatingAdmissionPolicy v1beta1 | CEL expressions have access to the types needed to create JSON patches and objects:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicy v1beta1 | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_explain_15000"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingAdmissionPolicy v1beta1 | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicy v1beta1 | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_explain_15001"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingAdmissionPolicy v1beta1 | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicy v1beta1 | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-v1beta1/", "id": "k8s_00577_code_explain_15002"} {"prompt": "Explain this bash code:\n\nkubectl create deployment NAME --image=image -- [COMMAND] [args...]", "completion": "This bash code kubectl create deployment | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_deployment/", "id": "k8s_00578_code_explain_15013"} {"prompt": "Complete this text code:\n\n# Create a deployment named my-dep that runs the busybox image\n kubectl create deployment my-dep --image=busybox\n \n # Create a deployment with a command\n kubectl create deployment my-dep --image=busybox -- date\n \n # Create a deployment named my-dep that runs the nginx image with 3 replicas", "completion": " kubectl create deployment my-dep --image=nginx --replicas=3\n \n # Create a deployment named my-dep that runs the busybox image and expose port 5701\n kubectl create deployment my-dep --image=busybox --port=5701\n \n # Create a deployment named my-dep that runs multiple containers\n kubectl create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx", "language": "text", "type": "code-example", "context": "kubectl create deployment | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_deployment/", "id": "k8s_00578_code_completion_15014"} {"prompt": "Explain this text code:\n\n# Create a deployment named my-dep that runs the busybox image\n kubectl create deployment my-dep --image=busybox\n \n # Create a deployment with a command\n kubectl create deployment my-dep --image=busybox -- date\n \n # Create a deployment named my-dep that runs the nginx image with 3 replicas\n kubectl create deployment my-dep --image=nginx --replicas=3\n \n # Create a deployment named my-dep that runs the busybox image and expose port 5701\n kubectl create deployment my-dep --image=busybox --port=5701\n \n # Create a deployment named my-dep that runs multiple containers\n kubectl create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx", "completion": "This text code kubectl create deployment | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create deployment | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_deployment/", "id": "k8s_00578_code_explain_15015"} {"prompt": "Explain this text code:\n\n# Create a private key\nopenssl genrsa -out myuser.key 3072", "completion": "This text code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | It is important to set CN and O attribute of the CSR. CN is the name of the user and O is the group that this user will belong to. 
You can refer to RBAC for standard groups.", "language": "text", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | It is important to set CN and O attribute of the CSR. CN is the name of the user and O is the group that this user will belong to. You can refer to RBAC for standard groups.", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15019"} {"prompt": "Explain this text code:\n\n# Change the common name \"myuser\" to the actual username that you want to use\nopenssl req -new -key myuser.key -out myuser.csr -subj \"/CN=myuser\"", "completion": "This text code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Encode the CSR document using this command:", "language": "text", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Encode the CSR document using this command:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15020"} {"prompt": "Complete this yaml code:\n\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\nmetadata:\n name: myuser # example\nspec:", "completion": " # This is an encoded CSR. 
Change this to the base64-encoded contents of myuser.csr\n request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=\n signerName: kubernetes.io/kube-apiserver-client\n expirationSeconds: 86400 # one day\n usages:\n - client auth\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_completion_15021"} {"prompt": "Explain this yaml code:\n\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\nmetadata:\n name: myuser # example\nspec:\n # This is an encoded CSR. 
Change this to the base64-encoded contents of myuser.csr\n request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=\n signerName: kubernetes.io/kube-apiserver-client\n expirationSeconds: 86400 # one day\n usages:\n - client auth\nEOF", "completion": "This yaml code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15022"} {"prompt": "Explain this bash code:\n\nkubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt", "completion": "This bash code Issue a Certificate for a 
Kubernetes API Client Using A CertificateSigningRequest | The next step is to add this user into the kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | The next step is to add this user into the kubeconfig file.", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15023"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Then, you need to add the context:", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Then, you need to add the context:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15024"} {"prompt": "Explain this bash code:\n\nkubectl config set-context myuser --cluster=kubernetes --user=myuser", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | To test it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | To test it:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15025"} {"prompt": "Explain this bash code:\n\nkubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | This is a sample command to create a RoleBinding for this new user:", "language": "bash", "type": "kubectl-command_explanation", 
"context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | This is a sample command to create a RoleBinding for this new user:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15026"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding developer-binding-myuser --role=developer --user=myuser", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest |", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest |", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15027"} {"prompt": "Explain this shell code:\n\n# Create a private key\nopenssl genrsa -out myuser.key 3072", "completion": "This shell code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | It is important to set CN and O attribute of the CSR. CN is the name of the user and O is the group that this user will belong to. You can refer to RBAC for standard groups.", "language": "shell", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | It is important to set CN and O attribute of the CSR. CN is the name of the user and O is the group that this user will belong to. 
You can refer to RBAC for standard groups.", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15028"} {"prompt": "Explain this shell code:\n\n# Change the common name \"myuser\" to the actual username that you want to use\nopenssl req -new -key myuser.key -out myuser.csr -subj \"/CN=myuser\"", "completion": "This shell code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Encode the CSR document using this command:", "language": "shell", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Encode the CSR document using this command:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15029"} {"prompt": "Complete this shell code:\n\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\nmetadata:\n name: myuser # example\nspec:", "completion": " # This is an encoded CSR. 
Change this to the base64-encoded contents of myuser.csr\n request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=\n signerName: kubernetes.io/kube-apiserver-client\n expirationSeconds: 86400 # one day\n usages:\n - client auth\nEOF", "language": "shell", "type": "code-example", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_completion_15030"} {"prompt": "Explain this shell code:\n\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\nmetadata:\n name: myuser # example\nspec:\n # This is an encoded CSR. 
Change this to the base64-encoded contents of myuser.csr\n request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=\n signerName: kubernetes.io/kube-apiserver-client\n expirationSeconds: 86400 # one day\n usages:\n - client auth\nEOF", "completion": "This shell code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "language": "shell", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15031"} {"prompt": "Explain this shell code:\n\nkubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt", "completion": "This shell code Issue a Certificate for a Kubernetes 
API Client Using A CertificateSigningRequest | The next step is to add this user into the kubeconfig file.", "language": "shell", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | The next step is to add this user into the kubeconfig file.", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15032"} {"prompt": "Explain this shell code:\n\nkubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true", "completion": "This shell code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Then, you need to add the context:", "language": "shell", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Then, you need to add the context:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15033"} {"prompt": "Explain this shell code:\n\nkubectl config set-context myuser --cluster=kubernetes --user=myuser", "completion": "This shell code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | To test it:", "language": "shell", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | To test it:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15034"} {"prompt": "Explain this shell code:\n\nkubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods", "completion": "This shell code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | This is a sample command to create a RoleBinding for this new user:", "language": "shell", "type": "code-example_explanation", "context": "Issue a 
Certificate for a Kubernetes API Client Using A CertificateSigningRequest | This is a sample command to create a RoleBinding for this new user:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15035"} {"prompt": "Explain this shell code:\n\nkubectl create rolebinding developer-binding-myuser --role=developer --user=myuser", "completion": "This shell code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest |", "language": "shell", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest |", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15036"} {"prompt": "Explain this text code:\n\n# Create a private key\nopenssl genrsa -out myuser.key 3072", "completion": "This text code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | It is important to set CN and O attribute of the CSR. CN is the name of the user and O is the group that this user will belong to. You can refer to RBAC for standard groups.", "language": "text", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | It is important to set CN and O attribute of the CSR. CN is the name of the user and O is the group that this user will belong to. 
You can refer to RBAC for standard groups.", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15037"} {"prompt": "Explain this text code:\n\n# Change the common name \"myuser\" to the actual username that you want to use\nopenssl req -new -key myuser.key -out myuser.csr -subj \"/CN=myuser\"", "completion": "This text code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Encode the CSR document using this command:", "language": "text", "type": "code-example_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Encode the CSR document using this command:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15038"} {"prompt": "Complete this yaml code:\n\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\nmetadata:\n name: myuser # example\nspec:", "completion": " # This is an encoded CSR. 
Change this to the base64-encoded contents of myuser.csr\n request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=\n signerName: kubernetes.io/kube-apiserver-client\n expirationSeconds: 86400 # one day\n usages:\n - client auth\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_completion_15039"} {"prompt": "Explain this yaml code:\n\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io/v1\nkind: CertificateSigningRequest\nmetadata:\n name: myuser # example\nspec:\n # This is an encoded CSR. 
Change this to the base64-encoded contents of myuser.csr\n request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=\n signerName: kubernetes.io/kube-apiserver-client\n expirationSeconds: 86400 # one day\n usages:\n - client auth\nEOF", "completion": "This yaml code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Some points to note:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15040"} {"prompt": "Explain this bash code:\n\nkubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt", "completion": "This bash code Issue a Certificate for a 
Kubernetes API Client Using A CertificateSigningRequest | The next step is to add this user into the kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | The next step is to add this user into the kubeconfig file.", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15041"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Then, you need to add the context:", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | Then, you need to add the context:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15042"} {"prompt": "Explain this bash code:\n\nkubectl config set-context myuser --cluster=kubernetes --user=myuser", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | To test it:", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | To test it:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15043"} {"prompt": "Explain this bash code:\n\nkubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | This is a sample command to create a RoleBinding for this new user:", "language": "bash", "type": "kubectl-command_explanation", 
"context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest | This is a sample command to create a RoleBinding for this new user:", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15044"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding developer-binding-myuser --role=developer --user=myuser", "completion": "This bash code Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest |", "language": "bash", "type": "kubectl-command_explanation", "context": "Issue a Certificate for a Kubernetes API Client Using A CertificateSigningRequest |", "source": "https://kubernetes.io/docs/tasks/tls/certificate-issue-client-csr/", "id": "k8s_00583_code_explain_15045"} {"prompt": "Complete this text code:\n\n\"conditions\": [\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"reason\": \"KubeletReady\",", "completion": " \"message\": \"kubelet is posting ready status\",\n \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n }\n]", "language": "text", "type": "code-example", "context": "Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. An example of this is when the status of the Ready condition remai", "source": "https://kubernetes.io/docs/reference/node/node-status/", "id": "k8s_00584_code_completion_15046"} {"prompt": "Explain this text code:\n\n\"conditions\": [\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\",\n \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n }\n]", "completion": "This text code Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. 
An example of this is when the status of the Ready condition remai", "language": "text", "type": "code-example_explanation", "context": "Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. An example of this is when the status of the Ready condition remai", "source": "https://kubernetes.io/docs/reference/node/node-status/", "id": "k8s_00584_code_explain_15047"} {"prompt": "Complete this json code:\n\n\"conditions\": [\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"reason\": \"KubeletReady\",", "completion": " \"message\": \"kubelet is posting ready status\",\n \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n }\n]", "language": "json", "type": "api-response", "context": "Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. An example of this is when the status of the Ready condition remai", "source": "https://kubernetes.io/docs/reference/node/node-status/", "id": "k8s_00584_code_completion_15048"} {"prompt": "Explain this json code:\n\n\"conditions\": [\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\",\n \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n }\n]", "completion": "This json code Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. An example of this is when the status of the Ready condition remai", "language": "json", "type": "api-response_explanation", "context": "Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. 
An example of this is when the status of the Ready condition remai", "source": "https://kubernetes.io/docs/reference/node/node-status/", "id": "k8s_00584_code_explain_15049"} {"prompt": "Complete this text code:\n\n\"conditions\": [\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"reason\": \"KubeletReady\",", "completion": " \"message\": \"kubelet is posting ready status\",\n \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n }\n]", "language": "text", "type": "code-example", "context": "Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. An example of this is when the status of the Ready condition remai", "source": "https://kubernetes.io/docs/reference/node/node-status/", "id": "k8s_00584_code_completion_15050"} {"prompt": "Explain this text code:\n\n\"conditions\": [\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\",\n \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n }\n]", "completion": "This text code Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. An example of this is when the status of the Ready condition remai", "language": "text", "type": "code-example_explanation", "context": "Node Status | When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. 
An example of this is when the status of the Ready condition remai", "source": "https://kubernetes.io/docs/reference/node/node-status/", "id": "k8s_00584_code_explain_15051"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "DeviceTaintRule v1alpha3 | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/", "id": "k8s_00585_code_completion_15052"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code DeviceTaintRule v1alpha3 | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "DeviceTaintRule v1alpha3 | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/", "id": "k8s_00585_code_explain_15053"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code DeviceTaintRule v1alpha3 | The length of the expression must be smaller or equal to 10 Ki. 
The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "DeviceTaintRule v1alpha3 | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/", "id": "k8s_00585_code_explain_15054"} {"prompt": "Complete this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model", "completion": "device.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "language": "text", "type": "code-example", "context": "DeviceTaintRule v1alpha3 | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/", "id": "k8s_00585_code_completion_15055"} {"prompt": "Explain this text code:\n\ndevice.driver\ndevice.attributes[\"dra.example.com\"].model\ndevice.attributes[\"ext.example.com\"].family\ndevice.capacity[\"dra.example.com\"].modules", "completion": "This text code DeviceTaintRule v1alpha3 | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expressio", "language": "text", "type": "code-example_explanation", "context": "DeviceTaintRule v1alpha3 | The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. 
you only want to consider devices from this driver) or as part of a multi-clause expressio", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/", "id": "k8s_00585_code_explain_15056"} {"prompt": "Explain this text code:\n\ncel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)", "completion": "This text code DeviceTaintRule v1alpha3 | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "language": "text", "type": "code-example_explanation", "context": "DeviceTaintRule v1alpha3 | The length of the expression must be smaller or equal to 10 Ki. The cost of evaluating it is also limited based on the estimated number of logical steps.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/device-taint-rule-v1alpha3/", "id": "k8s_00585_code_explain_15057"} {"prompt": "Explain this bash code:\n\nkubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=create|--for=delete|--for condition=available|--for=jsonpath='{}'[=value]]", "completion": "This bash code kubectl wait | Select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl wait | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_wait/", "id": "k8s_00587_code_explain_15058"} {"prompt": "Complete this text code:\n\n# Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"\n kubectl wait --for=condition=Ready pod/busybox1\n \n # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)\n kubectl wait 
--for=condition=Ready=false pod/busybox1\n \n # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\"\n kubectl wait --for=jsonpath='{.status.phase}'=Running pod/busybox1\n \n # Wait for pod \"busybox1\" to be Ready\n kubectl wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1", "completion": " \n # Wait for the service \"loadbalancer\" to have ingress\n kubectl wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer\n \n # Wait for the secret \"busybox1\" to be created, with a timeout of 30s\n kubectl create secret generic busybox1\n kubectl wait --for=create secret/busybox1 --timeout=30s\n \n # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command\n kubectl delete pod/busybox1\n kubectl wait --for=delete pod/busybox1 --timeout=60s", "language": "text", "type": "code-example", "context": "kubectl wait | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_wait/", "id": "k8s_00587_code_completion_15059"} {"prompt": "Explain this text code:\n\n# Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"\n kubectl wait --for=condition=Ready pod/busybox1\n \n # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)\n kubectl wait --for=condition=Ready=false pod/busybox1\n \n # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\"\n kubectl wait --for=jsonpath='{.status.phase}'=Running pod/busybox1\n \n # Wait for pod \"busybox1\" to be Ready\n kubectl wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1\n \n # Wait for the service \"loadbalancer\" to have ingress\n kubectl wait 
--for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer\n \n # Wait for the secret \"busybox1\" to be created, with a timeout of 30s\n kubectl create secret generic busybox1\n kubectl wait --for=create secret/busybox1 --timeout=30s\n \n # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command\n kubectl delete pod/busybox1\n kubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This text code kubectl wait | Select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl wait | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_wait/", "id": "k8s_00587_code_explain_15060"} {"prompt": "Explain this bash code:\n\nkubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=create|--for=delete|--for condition=available|--for=jsonpath='{}'[=value]]", "completion": "This bash code kubectl wait | Select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl wait | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_wait/", "id": "k8s_00587_code_explain_15061"} {"prompt": "Complete this text code:\n\n# Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"\n kubectl wait --for=condition=Ready pod/busybox1\n \n # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)\n kubectl wait --for=condition=Ready=false pod/busybox1\n \n # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\"\n kubectl wait 
--for=jsonpath='{.status.phase}'=Running pod/busybox1\n \n # Wait for pod \"busybox1\" to be Ready\n kubectl wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1", "completion": " \n # Wait for the service \"loadbalancer\" to have ingress\n kubectl wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer\n \n # Wait for the secret \"busybox1\" to be created, with a timeout of 30s\n kubectl create secret generic busybox1\n kubectl wait --for=create secret/busybox1 --timeout=30s\n \n # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command\n kubectl delete pod/busybox1\n kubectl wait --for=delete pod/busybox1 --timeout=60s", "language": "text", "type": "code-example", "context": "kubectl wait | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_wait/", "id": "k8s_00587_code_completion_15062"} {"prompt": "Explain this text code:\n\n# Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"\n kubectl wait --for=condition=Ready pod/busybox1\n \n # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)\n kubectl wait --for=condition=Ready=false pod/busybox1\n \n # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\"\n kubectl wait --for=jsonpath='{.status.phase}'=Running pod/busybox1\n \n # Wait for pod \"busybox1\" to be Ready\n kubectl wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1\n \n # Wait for the service \"loadbalancer\" to have ingress\n kubectl wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer\n \n # Wait for the secret \"busybox1\" to be created, with a timeout of 30s\n kubectl create secret generic 
busybox1\n kubectl wait --for=create secret/busybox1 --timeout=30s\n \n # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command\n kubectl delete pod/busybox1\n kubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This text code kubectl wait | Select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl wait | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_wait/", "id": "k8s_00587_code_explain_15063"} {"prompt": "Explain this bash code:\n\nkubectl set resources (-f FILENAME | TYPE NAME) ([--limits=LIMITS & --requests=REQUESTS]", "completion": "This bash code kubectl set resources | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set resources | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/", "id": "k8s_00590_code_explain_15064"} {"prompt": "Complete this text code:\n\n# Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"\n kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi\n \n # Set the resource request and limits for all containers in nginx\n kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": " \n # Remove the resource requests for resources on containers in nginx\n kubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0\n \n # Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server\n kubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o 
yaml", "language": "text", "type": "code-example", "context": "kubectl set resources | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/", "id": "k8s_00590_code_completion_15065"} {"prompt": "Explain this text code:\n\n# Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"\n kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi\n \n # Set the resource request and limits for all containers in nginx\n kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi\n \n # Remove the resource requests for resources on containers in nginx\n kubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0\n \n # Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server\n kubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This text code kubectl set resources | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set resources | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/", "id": "k8s_00590_code_explain_15066"} {"prompt": "Explain this bash code:\n\nkubectl set resources (-f FILENAME | TYPE NAME) ([--limits=LIMITS & --requests=REQUESTS]", "completion": "This bash code kubectl set resources | Select all resources, in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl set resources | Select all resources, in the namespace of the specified resource types", "source": 
"https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/", "id": "k8s_00590_code_explain_15067"} {"prompt": "Complete this text code:\n\n# Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"\n kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi\n \n # Set the resource request and limits for all containers in nginx\n kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": " \n # Remove the resource requests for resources on containers in nginx\n kubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0\n \n # Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server\n kubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "language": "text", "type": "code-example", "context": "kubectl set resources | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/", "id": "k8s_00590_code_completion_15068"} {"prompt": "Explain this text code:\n\n# Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"\n kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi\n \n # Set the resource request and limits for all containers in nginx\n kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi\n \n # Remove the resource requests for resources on containers in nginx\n kubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0\n \n # Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server\n kubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This text code kubectl set 
resources | Select all resources, in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl set resources | Select all resources, in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_resources/", "id": "k8s_00590_code_explain_15069"} {"prompt": "Explain this text code:\n\nI0314 11:10:13.909915 1105457 feature_gate.go:249] feature gates: &{map[EventedPLEG:true]}", "completion": "This text code Switching from Polling to CRI Event-based Updates to Container Status | If you have set --v to 4 and above, you might see more entries that indicate that the kubelet is using event-based container state monitoring.", "language": "text", "type": "code-example_explanation", "context": "Switching from Polling to CRI Event-based Updates to Container Status | If you have set --v to 4 and above, you might see more entries that indicate that the kubelet is using event-based container state monitoring.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/switch-to-evented-pleg/", "id": "k8s_00591_code_explain_15070"} {"prompt": "Explain this text code:\n\nI0314 11:12:42.009542 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=3b2c6172-b112-447a-ba96-94e7022912dc\nI0314 11:12:44.623326 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=b3fba5ea-a8c5-4b76-8f43-481e17e8ec40\nI0314 11:12:44.714564 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=b3fba5ea-a8c5-4b76-8f43-481e17e8ec40", "completion": "This text code Switching from Polling to CRI Event-based Updates to Container Status |", "language": "text", "type": "code-example_explanation", "context": "Switching from Polling to CRI Event-based Updates to Container Status |", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/switch-to-evented-pleg/", "id": "k8s_00591_code_explain_15071"} {"prompt": "Explain this console code:\n\nI0314 11:10:13.909915 1105457 feature_gate.go:249] feature gates: &{map[EventedPLEG:true]}", "completion": "This console code Switching from Polling to CRI Event-based Updates to Container Status | If you have set --v to 4 and above, you might see more entries that indicate that the kubelet is using event-based container state monitoring.", "language": "console", "type": "code-example_explanation", "context": "Switching from Polling to CRI Event-based Updates to Container Status | If you have set --v to 4 and above, you might see more entries that indicate that the kubelet is using event-based container state monitoring.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/switch-to-evented-pleg/", "id": "k8s_00591_code_explain_15072"} {"prompt": "Explain this console code:\n\nI0314 11:12:42.009542 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=3b2c6172-b112-447a-ba96-94e7022912dc\nI0314 11:12:44.623326 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=b3fba5ea-a8c5-4b76-8f43-481e17e8ec40\nI0314 11:12:44.714564 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=b3fba5ea-a8c5-4b76-8f43-481e17e8ec40", "completion": "This console code Switching from Polling to CRI Event-based Updates to Container Status |", "language": "console", "type": "code-example_explanation", "context": "Switching from Polling to CRI Event-based Updates to Container Status |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/switch-to-evented-pleg/", "id": "k8s_00591_code_explain_15073"} {"prompt": "Explain this text code:\n\nI0314 11:10:13.909915 1105457 feature_gate.go:249] feature gates: &{map[EventedPLEG:true]}", "completion": "This text code Switching from Polling to CRI 
Event-based Updates to Container Status | If you have set --v to 4 and above, you might see more entries that indicate that the kubelet is using event-based container state monitoring.", "language": "text", "type": "code-example_explanation", "context": "Switching from Polling to CRI Event-based Updates to Container Status | If you have set --v to 4 and above, you might see more entries that indicate that the kubelet is using event-based container state monitoring.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/switch-to-evented-pleg/", "id": "k8s_00591_code_explain_15074"} {"prompt": "Explain this text code:\n\nI0314 11:12:42.009542 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=3b2c6172-b112-447a-ba96-94e7022912dc\nI0314 11:12:44.623326 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=b3fba5ea-a8c5-4b76-8f43-481e17e8ec40\nI0314 11:12:44.714564 1110177 evented.go:238] \"Evented PLEG: Generated pod status from the received event\" podUID=b3fba5ea-a8c5-4b76-8f43-481e17e8ec40", "completion": "This text code Switching from Polling to CRI Event-based Updates to Container Status |", "language": "text", "type": "code-example_explanation", "context": "Switching from Polling to CRI Event-based Updates to Container Status |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/switch-to-evented-pleg/", "id": "k8s_00591_code_explain_15075"} {"prompt": "Complete this text code:\n\nsecretGenerator:\n- name: database-creds", "completion": " literals:\n - username=admin\n - password=1f2d1e2e67df", "language": "text", "type": "code-example", "context": "Managing Secrets using Kustomize | Store the credentials in files. 
The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15076"} {"prompt": "Explain this text code:\n\nsecretGenerator:\n- name: database-creds\n literals:\n - username=admin\n - password=1f2d1e2e67df", "completion": "This text code Managing Secrets using Kustomize | Store the credentials in files. The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | Store the credentials in files. The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15077"} {"prompt": "Explain this text code:\n\necho -n 'admin' > ./username.txt\necho -n '1f2d1e2e67df' > ./password.txt", "completion": "This text code Managing Secrets using Kustomize | The -n flag ensures that there's no newline character at the end of your files.", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | The -n flag ensures that there's no newline character at the end of your files.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15078"} {"prompt": "Complete this text code:\n\nsecretGenerator:\n- name: database-creds", "completion": " files:\n - username.txt\n - password.txt", "language": "text", "type": "code-example", "context": "Managing Secrets using Kustomize | You can also define the 
secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15079"} {"prompt": "Explain this text code:\n\nsecretGenerator:\n- name: database-creds\n files:\n - username.txt\n - password.txt", "completion": "This text code Managing Secrets using Kustomize | You can also define the secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | You can also define the secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15080"} {"prompt": "Complete this text code:\n\nsecretGenerator:\n- name: db-user-pass", "completion": " envs:\n - .env.secret", "language": "text", "type": "code-example", "context": "Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. The name of the YAML file must be kustomization.yaml or kustomization.yml.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15081"} {"prompt": "Explain this text code:\n\nsecretGenerator:\n- name: db-user-pass\n envs:\n - .env.secret", "completion": "This text code Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. 
The name of the YAML file must be kustomization.yaml or kustomization.yml.", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. The name of the YAML file must be kustomization.yaml or kustomization.yml.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15082"} {"prompt": "Explain this bash code:\n\nkubectl get -k <directory-path> -o jsonpath='{.data}'", "completion": "This bash code Managing Secrets using Kustomize | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using Kustomize | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15083"} {"prompt": "Explain this json code:\n\n{ \"password\": \"MWYyZDFlMmU2N2Rm\", \"username\": \"YWRtaW4=\" }", "completion": "This json code Managing Secrets using Kustomize | The output is similar to:", "language": "json", "type": "api-response_explanation", "context": "Managing Secrets using Kustomize | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15084"} {"prompt": "Complete this yaml code:\n\nsecretGenerator:\n- name: database-creds", "completion": " literals:\n - username=admin\n - password=1f2d1e2e67df", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Kustomize | Store the credentials in files. 
The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15085"} {"prompt": "Explain this yaml code:\n\nsecretGenerator:\n- name: database-creds\n literals:\n - username=admin\n - password=1f2d1e2e67df", "completion": "This yaml code Managing Secrets using Kustomize | Store the credentials in files. The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Kustomize | Store the credentials in files. The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15086"} {"prompt": "Explain this shell code:\n\necho -n 'admin' > ./username.txt\necho -n '1f2d1e2e67df' > ./password.txt", "completion": "This shell code Managing Secrets using Kustomize | The -n flag ensures that there's no newline character at the end of your files.", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | The -n flag ensures that there's no newline character at the end of your files.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15087"} {"prompt": "Complete this yaml code:\n\nsecretGenerator:\n- name: database-creds", "completion": " files:\n - username.txt\n - password.txt", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Kustomize | You can also 
define the secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15088"} {"prompt": "Explain this yaml code:\n\nsecretGenerator:\n- name: database-creds\n files:\n - username.txt\n - password.txt", "completion": "This yaml code Managing Secrets using Kustomize | You can also define the secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Kustomize | You can also define the secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15089"} {"prompt": "Complete this yaml code:\n\nsecretGenerator:\n- name: db-user-pass", "completion": " envs:\n - .env.secret", "language": "yaml", "type": "kubernetes-manifest", "context": "Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. The name of the YAML file must be kustomization.yaml or kustomization.yml.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15090"} {"prompt": "Explain this yaml code:\n\nsecretGenerator:\n- name: db-user-pass\n envs:\n - .env.secret", "completion": "This yaml code Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. 
The name of the YAML file must be kustomization.yaml or kustomization.yml.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. The name of the YAML file must be kustomization.yaml or kustomization.yml.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15091"} {"prompt": "Explain this shell code:\n\nkubectl get -k <directory-path> -o jsonpath='{.data}'", "completion": "This shell code Managing Secrets using Kustomize | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15092"} {"prompt": "Explain this json code:\n\n{ \"password\": \"MWYyZDFlMmU2N2Rm\", \"username\": \"YWRtaW4=\" }", "completion": "This json code Managing Secrets using Kustomize | The output is similar to:", "language": "json", "type": "api-response_explanation", "context": "Managing Secrets using Kustomize | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15093"} {"prompt": "Complete this text code:\n\nsecretGenerator:\n- name: database-creds", "completion": " literals:\n - username=admin\n - password=1f2d1e2e67df", "language": "text", "type": "code-example", "context": "Managing Secrets using Kustomize | Store the credentials in files. 
The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15094"} {"prompt": "Explain this text code:\n\nsecretGenerator:\n- name: database-creds\n literals:\n - username=admin\n - password=1f2d1e2e67df", "completion": "This text code Managing Secrets using Kustomize | Store the credentials in files. The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | Store the credentials in files. The filenames are the keys of the secret:echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt The -n flag ensures that there's no newline character", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15095"} {"prompt": "Explain this text code:\n\necho -n 'admin' > ./username.txt\necho -n '1f2d1e2e67df' > ./password.txt", "completion": "This text code Managing Secrets using Kustomize | The -n flag ensures that there's no newline character at the end of your files.", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | The -n flag ensures that there's no newline character at the end of your files.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15096"} {"prompt": "Complete this text code:\n\nsecretGenerator:\n- name: database-creds", "completion": " files:\n - username.txt\n - password.txt", "language": "text", "type": "code-example", "context": "Managing Secrets using Kustomize | You can also define the 
secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15097"} {"prompt": "Explain this text code:\n\nsecretGenerator:\n- name: database-creds\n files:\n - username.txt\n - password.txt", "completion": "This text code Managing Secrets using Kustomize | You can also define the secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | You can also define the secretGenerator in the kustomization.yaml file by providing .env files. For example, the following kustomization.yaml file pulls in data from an .env.secret file:secretGenerato", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15098"} {"prompt": "Complete this text code:\n\nsecretGenerator:\n- name: db-user-pass", "completion": " envs:\n - .env.secret", "language": "text", "type": "code-example", "context": "Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. The name of the YAML file must be kustomization.yaml or kustomization.yml.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_completion_15099"} {"prompt": "Explain this text code:\n\nsecretGenerator:\n- name: db-user-pass\n envs:\n - .env.secret", "completion": "This text code Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. 
The name of the YAML file must be kustomization.yaml or kustomization.yml.", "language": "text", "type": "code-example_explanation", "context": "Managing Secrets using Kustomize | In all cases, you don't need to encode the values in base64. The name of the YAML file must be kustomization.yaml or kustomization.yml.", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15100"} {"prompt": "Explain this bash code:\n\nkubectl get -k <directory-path> -o jsonpath='{.data}'", "completion": "This bash code Managing Secrets using Kustomize | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Secrets using Kustomize | The output is similar to:", "source": "https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/", "id": "k8s_00595_code_explain_15101"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the deployment was created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the deployment was created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15102"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmongo-75f59d57f4-4nd6q 1/1 Running 0 2m4s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | View the Deployment's status:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | View the Deployment's status:", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15103"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nmongo 1/1 1 1 2m21s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | The Deployment automatically manages a ReplicaSet. View the ReplicaSet status using:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The Deployment automatically manages a ReplicaSet. View the ReplicaSet status using:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15104"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nmongo-75f59d57f4 1 1 1 3m12s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Create a Service to expose MongoDB on the network:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | Create a Service to expose MongoDB on the network:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15105"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the Service was created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the Service was created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15106"} 
{"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmongo ClusterIP 10.96.41.183 <none> 27017/TCP 11s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Verify that the MongoDB server is running in the Pod, and listening on port 27017:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | Verify that the MongoDB server is running in the Pod, and listening on port 27017:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15107"} {"prompt": "Explain this text code:\n\n# Change mongo-75f59d57f4-4nd6q to the name of the Pod\nkubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{\"\\n\"}}'", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | The output displays the port for MongoDB in that Pod:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output displays the port for MongoDB in that Pod:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15108"} {"prompt": "Explain this text code:\n\n# Change mongo-75f59d57f4-4nd6q to the name of the Pod\nkubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | which is the same as", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | which is the same as", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15109"} {"prompt": "Explain this bash code:\n\nkubectl port-forward 
pods/mongo-75f59d57f4-4nd6q 28015:27017", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | or", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15110"} {"prompt": "Explain this bash code:\n\nkubectl port-forward replicaset/mongo-75f59d57f4 28015:27017", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | or", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15111"} {"prompt": "Explain this text code:\n\nForwarding from 127.0.0.1:28015 -> 27017\nForwarding from [::1]:28015 -> 27017", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Start the MongoDB command line interface:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | Start the MongoDB command line interface:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15112"} {"prompt": "Explain this text code:\n\nForwarding from 127.0.0.1:63753 -> 27017\nForwarding from [::1]:63753 -> 27017", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Connections made to local port 28015 are forwarded to port 27017 of the Pod that is running the MongoDB server. 
With this connection in place, you can use your local workstation to debug the database", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | Connections made to local port 28015 are forwarded to port 27017 of the Pod that is running the MongoDB server. With this connection in place, you can use your local workstation to debug the database", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15113"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml", "completion": "This shell code Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the deployment was created:", "language": "shell", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the deployment was created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15114"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmongo-75f59d57f4-4nd6q 1/1 Running 0 2m4s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | View the Deployment's status:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | View the Deployment's status:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15115"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nmongo 1/1 1 1 2m21s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | The Deployment automatically manages a 
ReplicaSet. View the ReplicaSet status using:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The Deployment automatically manages a ReplicaSet. View the ReplicaSet status using:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15116"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nmongo-75f59d57f4 1 1 1 3m12s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Create a Service to expose MongoDB on the network:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | Create a Service to expose MongoDB on the network:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15117"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml", "completion": "This shell code Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the Service was created:", "language": "shell", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the Service was created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15118"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmongo ClusterIP 10.96.41.183 <none> 27017/TCP 11s", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Verify that the MongoDB server is running in the Pod, and listening on port 27017:", "language": "text", "type": "code-example_explanation", 
"context": "Use Port Forwarding to Access Applications in a Cluster | Verify that the MongoDB server is running in the Pod, and listening on port 27017:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15119"} {"prompt": "Explain this shell code:\n\n# Change mongo-75f59d57f4-4nd6q to the name of the Pod\nkubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{\"\\n\"}}'", "completion": "This shell code Use Port Forwarding to Access Applications in a Cluster | The output displays the port for MongoDB in that Pod:", "language": "shell", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output displays the port for MongoDB in that Pod:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15120"} {"prompt": "Explain this shell code:\n\n# Change mongo-75f59d57f4-4nd6q to the name of the Pod\nkubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017", "completion": "This shell code Use Port Forwarding to Access Applications in a Cluster | which is the same as", "language": "shell", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | which is the same as", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15121"} {"prompt": "Explain this shell code:\n\nkubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017", "completion": "This shell code Use Port Forwarding to Access Applications in a Cluster | or", "language": "shell", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | or", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15122"} {"prompt": "Explain this shell code:\n\nkubectl port-forward replicaset/mongo-75f59d57f4 28015:27017", "completion": "This shell code Use Port Forwarding to Access Applications in a Cluster | or", "language": "shell", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | or", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15123"} {"prompt": "Explain this text code:\n\nForwarding from 127.0.0.1:28015 -> 27017\nForwarding from [::1]:28015 -> 27017", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Start the MongoDB command line interface:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | Start the MongoDB command line interface:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15124"} {"prompt": "Explain this text code:\n\nForwarding from 127.0.0.1:63753 -> 27017\nForwarding from [::1]:63753 -> 27017", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | Connections made to local port 28015 are forwarded to port 27017 of the Pod that is running the MongoDB server. With this connection in place, you can use your local workstation to debug the database", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | Connections made to local port 28015 are forwarded to port 27017 of the Pod that is running the MongoDB server. 
With this connection in place, you can use your local workstation to debug the database", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15125"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the deployment was created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the deployment was created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15126"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the Service was created:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | The output of a successful command verifies that the Service was created:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15127"} {"prompt": "Explain this text code:\n\n# Change mongo-75f59d57f4-4nd6q to the name of the Pod\nkubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{\"\\n\"}}'", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | The output displays the port for MongoDB in that Pod:", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access 
Applications in a Cluster | The output displays the port for MongoDB in that Pod:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15128"} {"prompt": "Explain this text code:\n\n# Change mongo-75f59d57f4-4nd6q to the name of the Pod\nkubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017", "completion": "This text code Use Port Forwarding to Access Applications in a Cluster | which is the same as", "language": "text", "type": "code-example_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | which is the same as", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15129"} {"prompt": "Explain this bash code:\n\nkubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | or", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15130"} {"prompt": "Explain this bash code:\n\nkubectl port-forward replicaset/mongo-75f59d57f4 28015:27017", "completion": "This bash code Use Port Forwarding to Access Applications in a Cluster | or", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Port Forwarding to Access Applications in a Cluster | or", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/", "id": "k8s_00596_code_explain_15131"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none]", "completion": "This bash code kubectl create service clusterip | If true, ignore any errors in 
templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/", "id": "k8s_00598_code_explain_15132"} {"prompt": "Complete this text code:\n\n# Create a new ClusterIP service named my-cs\n kubectl create service clusterip my-cs --tcp=5678:8080", "completion": " \n # Create a new ClusterIP service named my-cs (in headless mode)\n kubectl create service clusterip my-cs --clusterip=\"None\"", "language": "text", "type": "code-example", "context": "kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/", "id": "k8s_00598_code_completion_15133"} {"prompt": "Explain this text code:\n\n# Create a new ClusterIP service named my-cs\n kubectl create service clusterip my-cs --tcp=5678:8080\n \n # Create a new ClusterIP service named my-cs (in headless mode)\n kubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This text code kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/", "id": "k8s_00598_code_explain_15134"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none]", "completion": "This bash code kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/", "id": "k8s_00598_code_explain_15135"} {"prompt": "Complete this text code:\n\n# Create a new ClusterIP service named my-cs\n kubectl create service clusterip my-cs --tcp=5678:8080", "completion": " \n # Create a new ClusterIP service named my-cs (in headless mode)\n kubectl create service clusterip my-cs --clusterip=\"None\"", "language": "text", "type": "code-example", "context": "kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/", "id": "k8s_00598_code_completion_15136"} {"prompt": "Explain this text code:\n\n# Create a new ClusterIP service named my-cs\n kubectl create service clusterip my-cs --tcp=5678:8080\n \n # Create a new ClusterIP service named my-cs (in headless mode)\n kubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This text code kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create service clusterip | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_clusterip/", "id": "k8s_00598_code_explain_15137"} {"prompt": "Explain this bash code:\n\nkubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1", "completion": "This bash code Using kubectl to Create a Deployment | Great! You just deployed your first application by creating a deployment. This performed a few things for you:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using kubectl to Create a Deployment | Great! You just deployed your first application by creating a deployment. 
This performed a few things for you:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15138"} {"prompt": "Explain this text code:\n\nexport POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\necho Name of the Pod: $POD_NAME", "completion": "This text code Using kubectl to Create a Deployment | You can access the Pod through the proxied API, by running:", "language": "text", "type": "code-example_explanation", "context": "Using kubectl to Create a Deployment | You can access the Pod through the proxied API, by running:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15139"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/", "completion": "This text code Using kubectl to Create a Deployment | In order for the new Deployment to be accessible without using the proxy, a Service is required which will be explained in Module 4.", "language": "text", "type": "code-example_explanation", "context": "Using kubectl to Create a Deployment | In order for the new Deployment to be accessible without using the proxy, a Service is required which will be explained in Module 4.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15140"} {"prompt": "Explain this shell code:\n\nkubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1", "completion": "This shell code Using kubectl to Create a Deployment | Great! You just deployed your first application by creating a deployment. This performed a few things for you:", "language": "shell", "type": "code-example_explanation", "context": "Using kubectl to Create a Deployment | Great! You just deployed your first application by creating a deployment. 
This performed a few things for you:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15141"} {"prompt": "Explain this shell code:\n\nexport POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\necho Name of the Pod: $POD_NAME", "completion": "This shell code Using kubectl to Create a Deployment | You can access the Pod through the proxied API, by running:", "language": "shell", "type": "code-example_explanation", "context": "Using kubectl to Create a Deployment | You can access the Pod through the proxied API, by running:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15142"} {"prompt": "Explain this shell code:\n\ncurl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/", "completion": "This shell code Using kubectl to Create a Deployment | In order for the new Deployment to be accessible without using the proxy, a Service is required which will be explained in Module 4.", "language": "shell", "type": "code-example_explanation", "context": "Using kubectl to Create a Deployment | In order for the new Deployment to be accessible without using the proxy, a Service is required which will be explained in Module 4.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15143"} {"prompt": "Explain this bash code:\n\nkubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1", "completion": "This bash code Using kubectl to Create a Deployment | Great! You just deployed your first application by creating a deployment. This performed a few things for you:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using kubectl to Create a Deployment | Great! 
You just deployed your first application by creating a deployment. This performed a few things for you:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15144"} {"prompt": "Explain this text code:\n\nexport POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{\"\\n\"}}{{end}}')\necho Name of the Pod: $POD_NAME", "completion": "This text code Using kubectl to Create a Deployment | You can access the Pod through the proxied API, by running:", "language": "text", "type": "code-example_explanation", "context": "Using kubectl to Create a Deployment | You can access the Pod through the proxied API, by running:", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15145"} {"prompt": "Explain this text code:\n\ncurl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/", "completion": "This text code Using kubectl to Create a Deployment | In order for the new Deployment to be accessible without using the proxy, a Service is required which will be explained in Module 4.", "language": "text", "type": "code-example_explanation", "context": "Using kubectl to Create a Deployment | In order for the new Deployment to be accessible without using the proxy, a Service is required which will be explained in Module 4.", "source": "https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/", "id": "k8s_00599_code_explain_15146"} {"prompt": "Explain this text code:\n\n# Mark node \"foo\" as schedulable\n kubectl uncordon foo", "completion": "This text code kubectl uncordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.", "language": "text", "type": "code-example_explanation", "context": "kubectl uncordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_uncordon/", "id": "k8s_00600_code_explain_15147"} {"prompt": "Explain this text code:\n\n# Mark node \"foo\" as schedulable\n kubectl uncordon foo", "completion": "This text code kubectl uncordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.", "language": "text", "type": "code-example_explanation", "context": "kubectl uncordon | Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_uncordon/", "id": "k8s_00600_code_explain_15148"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15149"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15150"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-patch.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods 
associated with your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15151"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npatch-demo-28633765-670qr 1/1 Running 0 23s\npatch-demo-28633765-j5qs3 1/1 Running 0 23s", "completion": "This text code Update API Objects in Place Using kubectl patch | Make a note of the names of the running Pods. Later, you will see that these Pods get terminated and replaced by new ones.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Make a note of the names of the running Pods. Later, you will see that these Pods get terminated and replaced by new ones.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15152"} {"prompt": "Complete this text code:\n\nspec:\n template:\n spec:", "completion": " containers:\n - name: patch-demo-ctr-2\n image: redis", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15153"} {"prompt": "Explain this text code:\n\nspec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-2\n image: redis", "completion": "This text code Update API Objects in Place Using kubectl patch | Patch your Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15154"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml", "completion": "This bash code 
Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15155"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: redis\n imagePullPolicy: Always\n name: patch-demo-ctr-2", "completion": " ...\n- image: nginx\n imagePullPolicy: Always\n name: patch-demo-ctr\n ...", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15156"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: redis\n imagePullPolicy: Always\n name: patch-demo-ctr-2\n ...\n- image: nginx\n imagePullPolicy: Always\n name: patch-demo-ctr\n ...", "completion": "This text code Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15157"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npatch-demo-1081991389-2wrn5 2/2 Running 0 1m\npatch-demo-1081991389-jmg7b 2/2 Running 0 1m", "completion": "This text code Update API Objects in Place Using kubectl patch | Take a closer look at one of the patch-demo Pods:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Take a closer look at one of the 
patch-demo Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15158"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: redis", "completion": " ...\n- image: nginx\n ...", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | The patch you did in the preceding exercise is called a strategic merge patch. Notice that the patch did not replace the containers list. Instead it added a new Container to the list. In other words,", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15159"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: redis\n ...\n- image: nginx\n ...", "completion": "This text code Update API Objects in Place Using kubectl patch | The patch you did in the preceding exercise is called a strategic merge patch. Notice that the patch did not replace the containers list. Instead it added a new Container to the list. In other words,", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | The patch you did in the preceding exercise is called a strategic merge patch. Notice that the patch did not replace the containers list. Instead it added a new Container to the list. 
In other words,", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15160"} {"prompt": "Complete this text code:\n\ntype PodSpec struct {\n ...", "completion": " Containers []Container `json:\"containers\" patchStrategy:\"merge\" patchMergeKey:\"name\" ...`\n ...\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15161"} {"prompt": "Explain this text code:\n\ntype PodSpec struct {\n ...\n Containers []Container `json:\"containers\" patchStrategy:\"merge\" patchMergeKey:\"name\" ...`\n ...\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15162"} {"prompt": "Complete this text code:\n\n\"io.k8s.api.core.v1.PodSpec\": {\n ...,\n \"containers\": {\n \"description\": \"List of containers belonging to the pod. 
....\"", "completion": " },\n \"x-kubernetes-patch-merge-key\": \"name\",\n \"x-kubernetes-patch-strategy\": \"merge\"\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15163"} {"prompt": "Explain this text code:\n\n\"io.k8s.api.core.v1.PodSpec\": {\n ...,\n \"containers\": {\n \"description\": \"List of containers belonging to the pod. ....\"\n },\n \"x-kubernetes-patch-merge-key\": \"name\",\n \"x-kubernetes-patch-strategy\": \"merge\"\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15164"} {"prompt": "Complete this text code:\n\nspec:\n template:\n spec:", "completion": " tolerations:\n - effect: NoSchedule\n key: disktype\n value: ssd", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15165"} {"prompt": "Explain this text code:\n\nspec:\n template:\n spec:\n tolerations:\n - effect: NoSchedule\n key: disktype\n value: ssd", "completion": "This text code Update API Objects in Place Using kubectl patch | Patch your Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Patch 
your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15166"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file-tolerations.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15167"} {"prompt": "Complete this text code:\n\ntolerations:\n- effect: NoSchedule", "completion": " key: disktype\n value: ssd", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. So the strategic merge pa", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15168"} {"prompt": "Explain this text code:\n\ntolerations:\n- effect: NoSchedule\n key: disktype\n value: ssd", "completion": "This text code Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. So the strategic merge pa", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. 
So the strategic merge pa", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15169"} {"prompt": "Complete this text code:\n\ntype PodSpec struct {\n ...", "completion": " Tolerations []Toleration `json:\"tolerations,omitempty\" protobuf:\"bytes,22,opt,name=tolerations\"`\n ...\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the e", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15170"} {"prompt": "Explain this text code:\n\ntype PodSpec struct {\n ...\n Tolerations []Toleration `json:\"tolerations,omitempty\" protobuf:\"bytes,22,opt,name=tolerations\"`\n ...\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the e", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. 
And the new list completely replaces the e", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15171"} {"prompt": "Complete this text code:\n\nspec:\n template:\n spec:", "completion": " containers:\n - name: patch-demo-ctr-3\n image: gcr.io/google-samples/hello-app:2.0", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15172"} {"prompt": "Explain this text code:\n\nspec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-3\n image: gcr.io/google-samples/hello-app:2.0", "completion": "This text code Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15173"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --type merge --patch-file patch-file-2.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15174"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - image: gcr.io/google-samples/hello-app:2.0\n ...\n name: patch-demo-ctr-3", "language": "text", "type": "code-example", "context": "Update API Objects in 
Place Using kubectl patch | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15175"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n ...\n name: patch-demo-ctr-3", "completion": "This text code Update API Objects in Place Using kubectl patch | List the running Pods:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15176"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npatch-demo-1307768864-69308 1/1 Running 0 1m\npatch-demo-1307768864-c86dc 1/1 Running 0 1m", "completion": "This text code Update API Objects in Place Using kubectl patch | Here's the configuration file for a Deployment that uses the RollingUpdate strategy:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Here's the configuration file for a Deployment that uses the RollingUpdate strategy:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15177"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:", "completion": " rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": 
"k8s_00602_code_completion_15178"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:\n rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15179"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-retainkeys.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | At this point, the deployment is created and is using the RollingUpdate strategy.", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | At this point, the deployment is created and is using the RollingUpdate strategy.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15180"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-no-retainkeys.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | In the output, you can see that it is not possible to set type as Recreate when a value is defined for spec.strategy.rollingUpdate:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | In the output, you can see that it is not possible to set type as Recreate when a value is defined for spec.strategy.rollingUpdate:", 
"source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15181"} {"prompt": "Explain this text code:\n\nThe Deployment \"retainkeys-demo\" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'", "completion": "This text code Update API Objects in Place Using kubectl patch | The way to remove the value for spec.strategy.rollingUpdate when updating the value for type is to use the retainKeys strategy for the strategic merge.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | The way to remove the value for spec.strategy.rollingUpdate when updating the value for type is to use the retainKeys strategy for the strategic merge.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15182"} {"prompt": "Complete this text code:\n\nspec:\n strategy:", "completion": " $retainKeys:\n - type\n type: Recreate", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. Thus, the rollingUpdate will be removed during the patch operation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15183"} {"prompt": "Explain this text code:\n\nspec:\n strategy:\n $retainKeys:\n - type\n type: Recreate", "completion": "This text code Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. 
Thus, the rollingUpdate will be removed during the patch operation.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. Thus, the rollingUpdate will be removed during the patch operation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15184"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-retainkeys.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | Examine the content of the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | Examine the content of the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15185"} {"prompt": "Explain this bash code:\n\nkubectl get deployment retainkeys-demo --output yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | The output shows that the strategy object in the Deployment does not contain the rollingUpdate key anymore:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | The output shows that the strategy object in the Deployment does not contain the rollingUpdate key anymore:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15186"} {"prompt": "Complete this text code:\n\ntype DeploymentSpec struct {\n ...\n // +patchStrategy=retainKeys", "completion": " Strategy DeploymentStrategy `json:\"strategy,omitempty\" patchStrategy:\"retainKeys\" ...`\n ...\n}", "language": "text", "type": "code-example", "context": 
"Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15187"} {"prompt": "Explain this text code:\n\ntype DeploymentSpec struct {\n ...\n // +patchStrategy=retainKeys\n Strategy DeploymentStrategy `json:\"strategy,omitempty\" patchStrategy:\"retainKeys\" ...`\n ...\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15188"} {"prompt": "Complete this text code:\n\n\"io.k8s.api.apps.v1.DeploymentSpec\": {\n ...,\n \"strategy\": {\n \"$ref\": \"#/definitions/io.k8s.api.apps.v1.DeploymentStrategy\",", "completion": " \"description\": \"The deployment strategy to use to replace existing pods with new ones.\",\n \"x-kubernetes-patch-strategy\": \"retainKeys\"\n },\n ....\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15189"} {"prompt": "Explain this text code:\n\n\"io.k8s.api.apps.v1.DeploymentSpec\": {\n ...,\n \"strategy\": {\n \"$ref\": \"#/definitions/io.k8s.api.apps.v1.DeploymentStrategy\",\n \"description\": \"The deployment strategy to use to replace existing pods with new ones.\",\n \"x-kubernetes-patch-strategy\": \"retainKeys\"\n },\n ....\n}", "completion": "This text code Update API 
Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15190"} {"prompt": "Complete this json code:\n\n{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"patch-demo-ctr-2\",", "completion": " \"image\": \"redis\"\n }\n ]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15191"} {"prompt": "Explain this json code:\n\n{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"patch-demo-ctr-2\",\n \"image\": \"redis\"\n }\n ]\n }\n }\n }\n}", "completion": "This json code Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "language": "json", "type": "api-response_explanation", "context": "Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15192"} {"prompt": "Complete this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml\nkubectl patch deployment patch-demo --patch 'spec:\\n template:\\n spec:\\n containers:\\n - name: patch-demo-ctr-2\\n image: redis'", "completion": "\nkubectl patch deployment patch-demo --patch-file patch-file.json\nkubectl patch deployment patch-demo --patch '{\"spec\": {\"template\": {\"spec\": {\"containers\": 
[{\"name\": \"patch-demo-ctr-2\",\"image\": \"redis\"}]}}}}'", "language": "bash", "type": "kubectl-command", "context": "Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15193"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml\nkubectl patch deployment patch-demo --patch 'spec:\\n template:\\n spec:\\n containers:\\n - name: patch-demo-ctr-2\\n image: redis'\n\nkubectl patch deployment patch-demo --patch-file patch-file.json\nkubectl patch deployment patch-demo --patch '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"patch-demo-ctr-2\",\"image\": \"redis\"}]}}}}'", "completion": "This bash code Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15194"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n 
metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15195"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15196"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15197"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7fb96c846b-22567 1/1 Running 0 47s\nnginx-deployment-7fb96c846b-mlgns 1/1 Running 0 47s", "completion": "This text code Update API Objects in Place Using kubectl patch | Now, 
patch that Deployment with --subresource=[subresource-name] flag:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Now, patch that Deployment with --subresource=[subresource-name] flag:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15198"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":3}}'", "completion": "This bash code Update API Objects in Place Using kubectl patch | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | The output is:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15199"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7fb96c846b-22567 1/1 Running 0 107s", "completion": "nginx-deployment-7fb96c846b-lxfr2 1/1 Running 0 14s\nnginx-deployment-7fb96c846b-mlgns 1/1 Running 0 107s", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15200"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7fb96c846b-22567 1/1 Running 0 107s\nnginx-deployment-7fb96c846b-lxfr2 1/1 Running 0 14s\nnginx-deployment-7fb96c846b-mlgns 1/1 Running 0 107s", "completion": "This text code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15201"} {"prompt": "Complete this text code:\n\n...\nspec:\n replicas: 3\n ...", "completion": "status:\n ...\n availableReplicas: 3\n readyReplicas: 3\n replicas: 3", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. Ot", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15202"} {"prompt": "Explain this text code:\n\n...\nspec:\n replicas: 3\n ...\nstatus:\n ...\n availableReplicas: 3\n readyReplicas: 3\n replicas: 3", "completion": "This text code Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. Ot", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. 
Ot", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15203"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15204"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15205"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-patch.yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15206"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npatch-demo-28633765-670qr 1/1 Running 0 23s\npatch-demo-28633765-j5qs3 1/1 Running 0 23s", "completion": "This text code Update API Objects in Place Using kubectl patch | Make a note of the names of the running Pods. Later, you will see that these Pods get terminated and replaced by new ones.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Make a note of the names of the running Pods. Later, you will see that these Pods get terminated and replaced by new ones.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15207"} {"prompt": "Complete this yaml code:\n\nspec:\n template:\n spec:", "completion": " containers:\n - name: patch-demo-ctr-2\n image: redis", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15208"} {"prompt": "Explain this yaml code:\n\nspec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-2\n image: redis", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Patch your Deployment:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15209"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml", "completion": "This shell code Update API Objects in Place 
Using kubectl patch | View the patched Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15210"} {"prompt": "Complete this yaml code:\n\ncontainers:\n- image: redis\n imagePullPolicy: Always\n name: patch-demo-ctr-2", "completion": " ...\n- image: nginx\n imagePullPolicy: Always\n name: patch-demo-ctr\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15211"} {"prompt": "Explain this yaml code:\n\ncontainers:\n- image: redis\n imagePullPolicy: Always\n name: patch-demo-ctr-2\n ...\n- image: nginx\n imagePullPolicy: Always\n name: patch-demo-ctr\n ...", "completion": "This yaml code Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15212"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npatch-demo-1081991389-2wrn5 2/2 Running 0 1m\npatch-demo-1081991389-jmg7b 2/2 Running 0 1m", "completion": "This text code Update API Objects in Place Using kubectl patch | Take a closer look at one of the patch-demo Pods:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Take a closer look at one of the patch-demo 
Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15213"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: redis", "completion": " ...\n- image: nginx\n ...", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | The patch you did in the preceding exercise is called a strategic merge patch. Notice that the patch did not replace the containers list. Instead it added a new Container to the list. In other words,", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15214"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: redis\n ...\n- image: nginx\n ...", "completion": "This text code Update API Objects in Place Using kubectl patch | The patch you did in the preceding exercise is called a strategic merge patch. Notice that the patch did not replace the containers list. Instead it added a new Container to the list. In other words,", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | The patch you did in the preceding exercise is called a strategic merge patch. Notice that the patch did not replace the containers list. Instead it added a new Container to the list. 
In other words,", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15215"} {"prompt": "Complete this go code:\n\ntype PodSpec struct {\n ...", "completion": " Containers []Container `json:\"containers\" patchStrategy:\"merge\" patchMergeKey:\"name\" ...`\n ...\n}", "language": "go", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15216"} {"prompt": "Explain this go code:\n\ntype PodSpec struct {\n ...\n Containers []Container `json:\"containers\" patchStrategy:\"merge\" patchMergeKey:\"name\" ...`\n ...\n}", "completion": "This go code Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "language": "go", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15217"} {"prompt": "Complete this yaml code:\n\n\"io.k8s.api.core.v1.PodSpec\": {\n ...,\n \"containers\": {\n \"description\": \"List of containers belonging to the pod. 
....\"", "completion": " },\n \"x-kubernetes-patch-merge-key\": \"name\",\n \"x-kubernetes-patch-strategy\": \"merge\"\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15218"} {"prompt": "Explain this yaml code:\n\n\"io.k8s.api.core.v1.PodSpec\": {\n ...,\n \"containers\": {\n \"description\": \"List of containers belonging to the pod. ....\"\n },\n \"x-kubernetes-patch-merge-key\": \"name\",\n \"x-kubernetes-patch-strategy\": \"merge\"\n}", "completion": "This yaml code Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15219"} {"prompt": "Complete this yaml code:\n\nspec:\n template:\n spec:", "completion": " tolerations:\n - effect: NoSchedule\n key: disktype\n value: ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15220"} {"prompt": "Explain this yaml code:\n\nspec:\n template:\n spec:\n tolerations:\n - effect: NoSchedule\n key: disktype\n value: ssd", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Patch your Deployment:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place 
Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15221"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment patch-demo --patch-file patch-file-tolerations.yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15222"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- effect: NoSchedule", "completion": " key: disktype\n value: ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. So the strategic merge pa", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15223"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- effect: NoSchedule\n key: disktype\n value: ssd", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. So the strategic merge pa", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. 
So the strategic merge pa", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15224"} {"prompt": "Complete this go code:\n\ntype PodSpec struct {\n ...", "completion": " Tolerations []Toleration `json:\"tolerations,omitempty\" protobuf:\"bytes,22,opt,name=tolerations\"`\n ...\n}", "language": "go", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the e", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15225"} {"prompt": "Explain this go code:\n\ntype PodSpec struct {\n ...\n Tolerations []Toleration `json:\"tolerations,omitempty\" protobuf:\"bytes,22,opt,name=tolerations\"`\n ...\n}", "completion": "This go code Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the e", "language": "go", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. 
And the new list completely replaces the e", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15226"} {"prompt": "Complete this yaml code:\n\nspec:\n template:\n spec:", "completion": " containers:\n - name: patch-demo-ctr-3\n image: gcr.io/google-samples/hello-app:2.0", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15227"} {"prompt": "Explain this yaml code:\n\nspec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-3\n image: gcr.io/google-samples/hello-app:2.0", "completion": "This yaml code Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15228"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment patch-demo --type merge --patch-file patch-file-2.yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15229"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:", "completion": " - image: gcr.io/google-samples/hello-app:2.0\n ...\n name: patch-demo-ctr-3", "language": "yaml", "type": "kubernetes-manifest", "context": 
"Update API Objects in Place Using kubectl patch | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15230"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n ...\n name: patch-demo-ctr-3", "completion": "This yaml code Update API Objects in Place Using kubectl patch | List the running Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15231"} {"prompt": "Explain this shell code:\n\nNAME READY STATUS RESTARTS AGE\npatch-demo-1307768864-69308 1/1 Running 0 1m\npatch-demo-1307768864-c86dc 1/1 Running 0 1m", "completion": "This shell code Update API Objects in Place Using kubectl patch | Here's the configuration file for a Deployment that uses the RollingUpdate strategy:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Here's the configuration file for a Deployment that uses the RollingUpdate strategy:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15232"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:", "completion": " rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15233"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:\n rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15234"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-retainkeys.yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | At this point, the deployment is created and is using the RollingUpdate strategy.", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | At this point, the deployment is created and is using the RollingUpdate strategy.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15235"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-no-retainkeys.yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | In the output, you can see that it is not possible to set type as Recreate when a value is defined for spec.strategy.rollingUpdate:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | In the output, you can see that 
it is not possible to set type as Recreate when a value is defined for spec.strategy.rollingUpdate:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15236"} {"prompt": "Explain this text code:\n\nThe Deployment \"retainkeys-demo\" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'", "completion": "This text code Update API Objects in Place Using kubectl patch | The way to remove the value for spec.strategy.rollingUpdate when updating the value for type is to use the retainKeys strategy for the strategic merge.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | The way to remove the value for spec.strategy.rollingUpdate when updating the value for type is to use the retainKeys strategy for the strategic merge.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15237"} {"prompt": "Complete this yaml code:\n\nspec:\n strategy:", "completion": " $retainKeys:\n - type\n type: Recreate", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. Thus, the rollingUpdate will be removed during the patch operation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15238"} {"prompt": "Explain this yaml code:\n\nspec:\n strategy:\n $retainKeys:\n - type\n type: Recreate", "completion": "This yaml code Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. 
Thus, the rollingUpdate will be removed during the patch operation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. Thus, the rollingUpdate will be removed during the patch operation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15239"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-retainkeys.yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | Examine the content of the Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Examine the content of the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15240"} {"prompt": "Explain this shell code:\n\nkubectl get deployment retainkeys-demo --output yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | The output shows that the strategy object in the Deployment does not contain the rollingUpdate key anymore:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | The output shows that the strategy object in the Deployment does not contain the rollingUpdate key anymore:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15241"} {"prompt": "Complete this go code:\n\ntype DeploymentSpec struct {\n ...\n // +patchStrategy=retainKeys", "completion": " Strategy DeploymentStrategy `json:\"strategy,omitempty\" patchStrategy:\"retainKeys\" ...`\n ...\n}", "language": "go", "type": "code-example", 
"context": "Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15242"} {"prompt": "Explain this go code:\n\ntype DeploymentSpec struct {\n ...\n // +patchStrategy=retainKeys\n Strategy DeploymentStrategy `json:\"strategy,omitempty\" patchStrategy:\"retainKeys\" ...`\n ...\n}", "completion": "This go code Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "language": "go", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15243"} {"prompt": "Complete this yaml code:\n\n\"io.k8s.api.apps.v1.DeploymentSpec\": {\n ...,\n \"strategy\": {\n \"$ref\": \"#/definitions/io.k8s.api.apps.v1.DeploymentStrategy\",", "completion": " \"description\": \"The deployment strategy to use to replace existing pods with new ones.\",\n \"x-kubernetes-patch-strategy\": \"retainKeys\"\n },\n ....\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15244"} {"prompt": "Explain this yaml code:\n\n\"io.k8s.api.apps.v1.DeploymentSpec\": {\n ...,\n \"strategy\": {\n \"$ref\": \"#/definitions/io.k8s.api.apps.v1.DeploymentStrategy\",\n \"description\": \"The deployment strategy to use to replace existing pods with new ones.\",\n \"x-kubernetes-patch-strategy\": \"retainKeys\"\n },\n ....\n}", "completion": "This yaml 
code Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15245"} {"prompt": "Complete this json code:\n\n{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"patch-demo-ctr-2\",", "completion": " \"image\": \"redis\"\n }\n ]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15246"} {"prompt": "Explain this json code:\n\n{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"patch-demo-ctr-2\",\n \"image\": \"redis\"\n }\n ]\n }\n }\n }\n}", "completion": "This json code Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "language": "json", "type": "api-response_explanation", "context": "Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15247"} {"prompt": "Complete this shell code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml\nkubectl patch deployment patch-demo --patch 'spec:\\n template:\\n spec:\\n containers:\\n - name: patch-demo-ctr-2\\n image: redis'", "completion": "\nkubectl patch deployment patch-demo --patch-file patch-file.json\nkubectl patch deployment patch-demo --patch '{\"spec\": {\"template\": 
{\"spec\": {\"containers\": [{\"name\": \"patch-demo-ctr-2\",\"image\": \"redis\"}]}}}}'", "language": "shell", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15248"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml\nkubectl patch deployment patch-demo --patch 'spec:\\n template:\\n spec:\\n containers:\\n - name: patch-demo-ctr-2\\n image: redis'\n\nkubectl patch deployment patch-demo --patch-file patch-file.json\nkubectl patch deployment patch-demo --patch '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"patch-demo-ctr-2\",\"image\": \"redis\"}]}}}}'", "completion": "This shell code Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15249"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", 
"completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15250"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15251"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This shell code Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15252"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7fb96c846b-22567 1/1 Running 0 47s\nnginx-deployment-7fb96c846b-mlgns 1/1 Running 0 47s", "completion": "This text code Update API Objects in Place 
Using kubectl patch | Now, patch that Deployment with --subresource=[subresource-name] flag:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Now, patch that Deployment with --subresource=[subresource-name] flag:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15253"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":3}}'", "completion": "This shell code Update API Objects in Place Using kubectl patch | The output is:", "language": "shell", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | The output is:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15254"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7fb96c846b-22567 1/1 Running 0 107s", "completion": "nginx-deployment-7fb96c846b-lxfr2 1/1 Running 0 14s\nnginx-deployment-7fb96c846b-mlgns 1/1 Running 0 107s", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15255"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7fb96c846b-22567 1/1 Running 0 107s\nnginx-deployment-7fb96c846b-lxfr2 1/1 Running 0 14s\nnginx-deployment-7fb96c846b-mlgns 1/1 Running 0 107s", "completion": "This text code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15256"} {"prompt": "Complete this yaml code:\n\n...\nspec:\n replicas: 3\n ...", "completion": "status:\n ...\n availableReplicas: 3\n readyReplicas: 3\n replicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. Ot", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15257"} {"prompt": "Explain this yaml code:\n\n...\nspec:\n replicas: 3\n ...\nstatus:\n ...\n availableReplicas: 3\n readyReplicas: 3\n replicas: 3", "completion": "This yaml code Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. Ot", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. 
Ot", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15258"} {"prompt": "Complete this yaml code:\n\napplication/deployment-patch.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15259"} {"prompt": "Explain this yaml code:\n\napplication/deployment-patch.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15260"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", 
"language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15261"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: patch-demo\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: patch-demo-ctr\n image: nginx\n tolerations:\n - effect: NoSchedule\n key: dedicated\n value: test-team", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15262"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-patch.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15263"} {"prompt": "Complete this text code:\n\nspec:\n template:\n spec:", "completion": " containers:\n - name: patch-demo-ctr-2\n image: redis", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": 
"k8s_00602_code_completion_15264"} {"prompt": "Explain this text code:\n\nspec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-2\n image: redis", "completion": "This text code Update API Objects in Place Using kubectl patch | Patch your Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15265"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15266"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: redis\n imagePullPolicy: Always\n name: patch-demo-ctr-2", "completion": " ...\n- image: nginx\n imagePullPolicy: Always\n name: patch-demo-ctr\n ...", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15267"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: redis\n imagePullPolicy: Always\n name: patch-demo-ctr-2\n ...\n- image: nginx\n imagePullPolicy: Always\n name: patch-demo-ctr\n ...", "completion": "This text code Update API Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API 
Objects in Place Using kubectl patch | View the Pods associated with your patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15268"} {"prompt": "Complete this text code:\n\ntype PodSpec struct {\n ...", "completion": " Containers []Container `json:\"containers\" patchStrategy:\"merge\" patchMergeKey:\"name\" ...`\n ...\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15269"} {"prompt": "Explain this text code:\n\ntype PodSpec struct {\n ...\n Containers []Container `json:\"containers\" patchStrategy:\"merge\" patchMergeKey:\"name\" ...`\n ...\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | You can also see the patch strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15270"} {"prompt": "Complete this text code:\n\n\"io.k8s.api.core.v1.PodSpec\": {\n ...,\n \"containers\": {\n \"description\": \"List of containers belonging to the pod. 
....\"", "completion": " },\n \"x-kubernetes-patch-merge-key\": \"name\",\n \"x-kubernetes-patch-strategy\": \"merge\"\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15271"} {"prompt": "Explain this text code:\n\n\"io.k8s.api.core.v1.PodSpec\": {\n ...,\n \"containers\": {\n \"description\": \"List of containers belonging to the pod. ....\"\n },\n \"x-kubernetes-patch-merge-key\": \"name\",\n \"x-kubernetes-patch-strategy\": \"merge\"\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | And you can see the patch strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15272"} {"prompt": "Complete this text code:\n\nspec:\n template:\n spec:", "completion": " tolerations:\n - effect: NoSchedule\n key: disktype\n value: ssd", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | Patch your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15273"} {"prompt": "Explain this text code:\n\nspec:\n template:\n spec:\n tolerations:\n - effect: NoSchedule\n key: disktype\n value: ssd", "completion": "This text code Update API Objects in Place Using kubectl patch | Patch your Deployment:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Patch 
your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15274"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file-tolerations.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15275"} {"prompt": "Complete this text code:\n\ntolerations:\n- effect: NoSchedule", "completion": " key: disktype\n value: ssd", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. So the strategic merge pa", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15276"} {"prompt": "Explain this text code:\n\ntolerations:\n- effect: NoSchedule\n key: disktype\n value: ssd", "completion": "This text code Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. So the strategic merge pa", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Notice that the tolerations list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy key in its field tag. 
So the strategic merge pa", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15277"} {"prompt": "Complete this text code:\n\ntype PodSpec struct {\n ...", "completion": " Tolerations []Toleration `json:\"tolerations,omitempty\" protobuf:\"bytes,22,opt,name=tolerations\"`\n ...\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the e", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15278"} {"prompt": "Explain this text code:\n\ntype PodSpec struct {\n ...\n Tolerations []Toleration `json:\"tolerations,omitempty\" protobuf:\"bytes,22,opt,name=tolerations\"`\n ...\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the e", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. 
And the new list completely replaces the e", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15279"} {"prompt": "Complete this text code:\n\nspec:\n template:\n spec:", "completion": " containers:\n - name: patch-demo-ctr-3\n image: gcr.io/google-samples/hello-app:2.0", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15280"} {"prompt": "Explain this text code:\n\nspec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-3\n image: gcr.io/google-samples/hello-app:2.0", "completion": "This text code Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | In your patch command, set type to merge:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15281"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --type merge --patch-file patch-file-2.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the patched Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the patched Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15282"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - image: gcr.io/google-samples/hello-app:2.0\n ...\n name: patch-demo-ctr-3", "language": "text", "type": "code-example", "context": "Update API Objects in 
Place Using kubectl patch | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15283"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - image: gcr.io/google-samples/hello-app:2.0\n ...\n name: patch-demo-ctr-3", "completion": "This text code Update API Objects in Place Using kubectl patch | List the running Pods:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | List the running Pods:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15284"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npatch-demo-1307768864-69308 1/1 Running 0 1m\npatch-demo-1307768864-c86dc 1/1 Running 0 1m", "completion": "This text code Update API Objects in Place Using kubectl patch | Here's the configuration file for a Deployment that uses the RollingUpdate strategy:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | Here's the configuration file for a Deployment that uses the RollingUpdate strategy:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15285"} {"prompt": "Complete this yaml code:\n\napplication/deployment-retainkeys.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:", "completion": " rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15286"} {"prompt": "Explain this yaml code:\n\napplication/deployment-retainkeys.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:\n rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15287"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:", "completion": " rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15288"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: retainkeys-demo\nspec:\n selector:\n matchLabels:\n app: nginx\n strategy:\n rollingUpdate:\n maxSurge: 30%\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: retainkeys-demo-ctr\n image: nginx", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the deployment:", "language": "yaml", "type": 
"deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15289"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment-retainkeys.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | At this point, the deployment is created and is using the RollingUpdate strategy.", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | At this point, the deployment is created and is using the RollingUpdate strategy.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15290"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-no-retainkeys.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | In the output, you can see that it is not possible to set type as Recreate when a value is defined for spec.strategy.rollingUpdate:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | In the output, you can see that it is not possible to set type as Recreate when a value is defined for spec.strategy.rollingUpdate:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15291"} {"prompt": "Complete this text code:\n\nspec:\n strategy:", "completion": " $retainKeys:\n - type\n type: Recreate", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. 
Thus, the rollingUpdate will be removed during the patch operation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15292"} {"prompt": "Explain this text code:\n\nspec:\n strategy:\n $retainKeys:\n - type\n type: Recreate", "completion": "This text code Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. Thus, the rollingUpdate will be removed during the patch operation.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | With this patch, we indicate that we want to retain only the type key of the strategy object. Thus, the rollingUpdate will be removed during the patch operation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15293"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-retainkeys.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | Examine the content of the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | Examine the content of the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15294"} {"prompt": "Explain this bash code:\n\nkubectl get deployment retainkeys-demo --output yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | The output shows that the strategy object in the Deployment does not contain the rollingUpdate key anymore:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | The output shows that the strategy object in 
the Deployment does not contain the rollingUpdate key anymore:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15295"} {"prompt": "Complete this text code:\n\ntype DeploymentSpec struct {\n ...\n // +patchStrategy=retainKeys", "completion": " Strategy DeploymentStrategy `json:\"strategy,omitempty\" patchStrategy:\"retainKeys\" ...`\n ...\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15296"} {"prompt": "Explain this text code:\n\ntype DeploymentSpec struct {\n ...\n // +patchStrategy=retainKeys\n Strategy DeploymentStrategy `json:\"strategy,omitempty\" patchStrategy:\"retainKeys\" ...`\n ...\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | You can also see the retainKeys strategy in the OpenApi spec:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15297"} {"prompt": "Complete this text code:\n\n\"io.k8s.api.apps.v1.DeploymentSpec\": {\n ...,\n \"strategy\": {\n \"$ref\": \"#/definitions/io.k8s.api.apps.v1.DeploymentStrategy\",", "completion": " \"description\": \"The deployment strategy to use to replace existing pods with new ones.\",\n \"x-kubernetes-patch-strategy\": \"retainKeys\"\n },\n ....\n}", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15298"} {"prompt": "Explain this text code:\n\n\"io.k8s.api.apps.v1.DeploymentSpec\": {\n ...,\n \"strategy\": {\n \"$ref\": \"#/definitions/io.k8s.api.apps.v1.DeploymentStrategy\",\n \"description\": \"The deployment strategy to use to replace existing pods with new ones.\",\n \"x-kubernetes-patch-strategy\": \"retainKeys\"\n },\n ....\n}", "completion": "This text code Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | And you can see the retainKeys strategy in the Kubernetes API documentation.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15299"} {"prompt": "Complete this json code:\n\n{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"patch-demo-ctr-2\",", "completion": " \"image\": \"redis\"\n }\n ]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15300"} {"prompt": "Explain this json code:\n\n{\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"patch-demo-ctr-2\",\n \"image\": \"redis\"\n }\n ]\n }\n }\n }\n}", "completion": "This json code Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "language": "json", "type": "api-response_explanation", "context": "Update API Objects in Place Using kubectl patch | The following commands are equivalent:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15301"} {"prompt": "Complete this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml\nkubectl patch deployment patch-demo --patch 'spec:\\n template:\\n spec:\\n containers:\\n - name: patch-demo-ctr-2\\n image: redis'", "completion": "\nkubectl patch deployment patch-demo --patch-file patch-file.json\nkubectl patch deployment patch-demo --patch '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"patch-demo-ctr-2\",\"image\": \"redis\"}]}}}}'", "language": "bash", "type": "kubectl-command", "context": "Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15302"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment patch-demo --patch-file patch-file.yaml\nkubectl patch deployment patch-demo --patch 'spec:\\n template:\\n spec:\\n containers:\\n - name: patch-demo-ctr-2\\n image: redis'\n\nkubectl patch deployment patch-demo --patch-file patch-file.json\nkubectl patch deployment patch-demo --patch '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"patch-demo-ctr-2\",\"image\": \"redis\"}]}}}}'", "completion": "This bash code Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | The flag --subresource=[subresource-name] is used 
with kubectl commands like get, patch, edit, apply and replace to fetch and update status, scale and resize subresource of the resources you specify.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15303"} {"prompt": "Complete this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15304"} {"prompt": "Explain this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15305"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 
pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15306"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Update API Objects in Place Using kubectl patch | Create the Deployment:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Update API Objects in Place Using kubectl patch | Create the Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15307"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | View the Pods associated with your Deployment:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15308"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":3}}'", "completion": "This bash code Update API Objects in 
Place Using kubectl patch | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "Update API Objects in Place Using kubectl patch | The output is:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15309"} {"prompt": "Complete this text code:\n\n...\nspec:\n replicas: 3\n ...", "completion": "status:\n ...\n availableReplicas: 3\n readyReplicas: 3\n replicas: 3", "language": "text", "type": "code-example", "context": "Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. Ot", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_completion_15310"} {"prompt": "Explain this text code:\n\n...\nspec:\n replicas: 3\n ...\nstatus:\n ...\n availableReplicas: 3\n readyReplicas: 3\n replicas: 3", "completion": "This text code Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. Ot", "language": "text", "type": "code-example_explanation", "context": "Update API Objects in Place Using kubectl patch | In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. 
Ot", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/", "id": "k8s_00602_code_explain_15311"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker0", "completion": "worker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15312"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker0\nworker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15313"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0", "completion": "worker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15314"} {"prompt": "Explain this text 
code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0\nworker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15315"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15316"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15317"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://k8s.io/examples/pods/pod-nginx.yaml", "completion": "This bash code Assign Pods to Nodes | Verify that the pod is running on your chosen node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pods to Nodes | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15318"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Assign Pods to Nodes | You can also schedule a pod to one specific node via setting nodeName.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes | You can also schedule a pod to one specific node via setting nodeName.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15319"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15320"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration 
file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15321"} {"prompt": "Complete this shell code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker0", "completion": "worker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "shell", "type": "code-example", "context": "Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15322"} {"prompt": "Explain this shell code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker0\nworker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This shell code Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15323"} {"prompt": "Complete this shell code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0", "completion": "worker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "shell", "type": "code-example", "context": "Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": 
"k8s_00604_code_completion_15324"} {"prompt": "Explain this shell code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0\nworker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This shell code Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "language": "shell", "type": "code-example_explanation", "context": "Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15325"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15326"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": 
"k8s_00604_code_explain_15327"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml", "completion": "This shell code Assign Pods to Nodes | Verify that the pod is running on your chosen node:", "language": "shell", "type": "code-example_explanation", "context": "Assign Pods to Nodes | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15328"} {"prompt": "Explain this shell code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This shell code Assign Pods to Nodes | You can also schedule a pod to one specific node via setting nodeName.", "language": "shell", "type": "code-example_explanation", "context": "Assign Pods to Nodes | You can also schedule a pod to one specific node via setting nodeName.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15329"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15330"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "language": "yaml", 
"type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15331"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker0", "completion": "worker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15332"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker0\nworker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes | Choose one of your nodes, and add a label to it:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15333"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0", "completion": "worker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "language": "text", "type": "code-example", "context": "Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15334"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nworker0 Ready <none> 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0\nworker1 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker1\nworker2 Ready <none> 1d v1.13.0 ...,kubernetes.io/hostname=worker2", "completion": "This text code Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes | In the preceding output, you can see that the worker0 node has a disktype=ssd label.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15335"} {"prompt": "Complete this yaml code:\n\npods/pod-nginx.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15336"} {"prompt": "Explain this yaml code:\n\npods/pod-nginx.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen 
node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15337"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15338"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n nodeSelector:\n disktype: ssd", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15339"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml", "completion": "This bash code Assign Pods to Nodes | Verify that the pod is running on your chosen node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Pods to Nodes | Verify that the pod is running on your chosen node:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15340"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text 
code Assign Pods to Nodes | You can also schedule a pod to one specific node via setting nodeName.", "language": "text", "type": "code-example_explanation", "context": "Assign Pods to Nodes | You can also schedule a pod to one specific node via setting nodeName.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15341"} {"prompt": "Complete this yaml code:\n\npods/pod-nginx-specific-node.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15342"} {"prompt": "Explain this yaml code:\n\npods/pod-nginx-specific-node.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15343"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "Assign Pods 
to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_completion_15344"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n nodeName: foo-node # schedule pod to specific node\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Pods to Nodes | Use the configuration file to create a pod that will get scheduled on foo-node only.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/", "id": "k8s_00604_code_explain_15345"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15346"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "completion": "This text code Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15347"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15348"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This text code Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15349"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15350"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": "pod-definition_explanation", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15351"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes node1 key1=value1:NoSchedule\nkubectl taint nodes node1 key1=value1:NoExecute\nkubectl taint nodes node1 key2=value2:NoSchedule", "completion": "This bash code Taints and Tolerations | And a pod has two tolerations:", "language": "bash", "type": "kubectl-command_explanation", "context": "Taints and Tolerations | And a pod has two tolerations:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15352"} {"prompt": "Complete this text 
code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"", "completion": " effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "language": "text", "type": "code-example", "context": "Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15353"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "completion": "This text code Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15354"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"", "completion": " value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "language": "text", "type": "code-example", "context": "Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. 
If the taint is removed before that time, the pod will not be evicted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15355"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "completion": "This text code Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted.", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15356"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"", "completion": " operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "language": "text", "type": "code-example", "context": "Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless 
you, or a controller, set those tolerations explicitly", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15358"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15359"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "completion": "This yaml code Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15360"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15361"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15362"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15363"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": "pod-definition_explanation", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15364"} {"prompt": "Explain this shell code:\n\nkubectl taint nodes node1 key1=value1:NoSchedule\nkubectl taint nodes node1 key1=value1:NoExecute\nkubectl taint nodes node1 key2=value2:NoSchedule", "completion": "This shell code Taints and Tolerations | And a pod has two tolerations:", "language": "shell", "type": "code-example_explanation", "context": "Taints and Tolerations | And a pod has two tolerations:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15365"} {"prompt": "Complete this yaml 
code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"", "completion": " effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15366"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "completion": "This yaml code Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15367"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"", "completion": " value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "language": "yaml", "type": "kubernetes-manifest", "context": "Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. 
If the taint is removed before that time, the pod will not be evicted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15368"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "completion": "This yaml code Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15369"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes nodename dedicated=groupName:NoSchedule", "completion": "This bash code Taints and Tolerations | Nodes with Special Hardware: In a cluster where a small subset of nodes have specialized hardware (for example GPUs), it is desirable to keep pods that don't need the specialized hardware off of those", "language": "bash", "type": "kubectl-command_explanation", "context": "Taints and Tolerations | Nodes with Special Hardware: In a cluster where a small subset of nodes have specialized hardware (for example GPUs), it is desirable to keep pods that don't need the specialized hardware off of those", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15370"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes nodename special=true:NoSchedule", "completion": "This bash code Taints and Tolerations | Taint based Evictions: A 
per-pod-configurable eviction behavior when there are node problems, which is described in the next section.", "language": "bash", "type": "kubectl-command_explanation", "context": "Taints and Tolerations | Taint based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15371"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes nodename special=true:PreferNoSchedule", "completion": "This bash code Taints and Tolerations | Taint based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section.", "language": "bash", "type": "kubectl-command_explanation", "context": "Taints and Tolerations | Taint based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15372"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"", "completion": " operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "language": "yaml", "type": "kubernetes-manifest", "context": "Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15373"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"\n operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "completion": "This yaml code Taints and Tolerations | Kubernetes automatically adds a 
toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15374"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15375"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "completion": "This text code Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15376"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15377"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This text code Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15378"} {"prompt": "Complete this yaml code:\n\npods/pod-with-toleration.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15379"} {"prompt": "Explain this yaml code:\n\npods/pod-with-toleration.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": "pod-definition_explanation", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15380"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": 
"https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15381"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": "pod-definition_explanation", "context": "Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15382"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes node1 key1=value1:NoSchedule\nkubectl taint nodes node1 key1=value1:NoExecute\nkubectl taint nodes node1 key2=value2:NoSchedule", "completion": "This bash code Taints and Tolerations | And a pod has two tolerations:", "language": "bash", "type": "kubectl-command_explanation", "context": "Taints and Tolerations | And a pod has two tolerations:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15383"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"", "completion": " effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "language": "text", "type": "code-example", "context": "Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. 
But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15384"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "completion": "This text code Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15385"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"", "completion": " value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "language": "text", "type": "code-example", "context": "Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. 
If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15386"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "completion": "This text code Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15387"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"", "completion": " operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "language": "text", "type": "code-example", "context": "Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_completion_15388"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"\n operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "completion": "This text code Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless 
you, or a controller, set those tolerations explicitly", "language": "text", "type": "code-example_explanation", "context": "Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/", "id": "k8s_00605_code_explain_15389"} {"prompt": "Explain this bash code:\n\nkubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]", "completion": "This bash code kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. 
When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_port-forward/", "id": "k8s_00606_code_explain_15390"} {"prompt": "Complete this text code:\n\n# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod\n kubectl port-forward pod/mypod 5000 6000\n \n # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment\n kubectl port-forward deployment/mydeployment 5000 6000\n \n # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service\n kubectl port-forward service/myservice 8443:https\n \n # Listen on port 8888 locally, forwarding to 5000 in the pod", "completion": " kubectl port-forward pod/mypod 8888:5000\n \n # Listen on port 8888 on all addresses, forwarding to 5000 in the pod\n kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000\n \n # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod\n kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000\n \n # Listen on a random port locally, forwarding to 5000 in the pod\n kubectl port-forward pod/mypod :5000", "language": "text", "type": "code-example", "context": "kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. 
When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_port-forward/", "id": "k8s_00606_code_completion_15391"} {"prompt": "Explain this text code:\n\n# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod\n kubectl port-forward pod/mypod 5000 6000\n \n # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment\n kubectl port-forward deployment/mydeployment 5000 6000\n \n # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service\n kubectl port-forward service/myservice 8443:https\n \n # Listen on port 8888 locally, forwarding to 5000 in the pod\n kubectl port-forward pod/mypod 8888:5000\n \n # Listen on port 8888 on all addresses, forwarding to 5000 in the pod\n kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000\n \n # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod\n kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000\n \n # Listen on a random port locally, forwarding to 5000 in the pod\n kubectl port-forward pod/mypod :5000", "completion": "This text code kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "language": "text", "type": "code-example_explanation", "context": "kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. 
When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_port-forward/", "id": "k8s_00606_code_explain_15392"} {"prompt": "Explain this bash code:\n\nkubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]", "completion": "This bash code kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_port-forward/", "id": "k8s_00606_code_explain_15393"} {"prompt": "Complete this text code:\n\n# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod\n kubectl port-forward pod/mypod 5000 6000\n \n # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment\n kubectl port-forward deployment/mydeployment 5000 6000\n \n # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service\n kubectl port-forward service/myservice 8443:https\n \n # Listen on port 8888 locally, forwarding to 5000 in the pod", "completion": " kubectl port-forward pod/mypod 8888:5000\n \n # Listen on port 8888 on all addresses, forwarding to 5000 in the pod\n kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000\n \n # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod\n kubectl port-forward --address 
localhost,10.19.21.23 pod/mypod 8888:5000\n \n # Listen on a random port locally, forwarding to 5000 in the pod\n kubectl port-forward pod/mypod :5000", "language": "text", "type": "code-example", "context": "kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_port-forward/", "id": "k8s_00606_code_completion_15394"} {"prompt": "Explain this text code:\n\n# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod\n kubectl port-forward pod/mypod 5000 6000\n \n # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment\n kubectl port-forward deployment/mydeployment 5000 6000\n \n # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service\n kubectl port-forward service/myservice 8443:https\n \n # Listen on port 8888 locally, forwarding to 5000 in the pod\n kubectl port-forward pod/mypod 8888:5000\n \n # Listen on port 8888 on all addresses, forwarding to 5000 in the pod\n kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000\n \n # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod\n kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000\n \n # Listen on a random port locally, forwarding to 5000 in the pod\n kubectl port-forward pod/mypod :5000", "completion": "This text code kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. 
When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "language": "text", "type": "code-example_explanation", "context": "kubectl port-forward | Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_port-forward/", "id": "k8s_00606_code_explain_15395"} {"prompt": "Complete this text code:\n\nObject{\n spec: Object.spec{", "completion": " serviceAccountName: \"example\"\n }\n}", "language": "text", "type": "code-example", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15396"} {"prompt": "Explain this text code:\n\nObject{\n spec: Object.spec{\n serviceAccountName: \"example\"\n }\n}", "completion": "This text code MutatingAdmissionPolicyBinding v1alpha1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15397"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},", "completion": " JSONPatch{op: 
\"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To define an object for the patch value, use Object types. For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15398"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},\n JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "completion": "This json code MutatingAdmissionPolicyBinding v1alpha1 | To define an object for the patch value, use Object types. For example:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To define an object for the patch value, use Object types. For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15399"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15400"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "completion": "This json code MutatingAdmissionPolicyBinding v1alpha1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". 
For example:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15401"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicyBinding v1alpha1 | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15402"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "completion": "This json code MutatingAdmissionPolicyBinding v1alpha1 | CEL expressions have access to the types needed to create JSON patches and objects:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15403"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingAdmissionPolicyBinding v1alpha1 | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15404"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingAdmissionPolicyBinding v1alpha1 | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15405"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingAdmissionPolicyBinding v1alpha1 | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15406"} {"prompt": "Explain this text code:\n\napiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]", "completion": "This text code MutatingAdmissionPolicyBinding v1alpha1 | Defaults to \"Equivalent\"", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Defaults to \"Equivalent\"", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15407"} {"prompt": "Complete this text code:\n\nObject{\n spec: Object.spec{", "completion": " serviceAccountName: \"example\"\n }\n}", "language": "text", "type": "code-example", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15408"} {"prompt": "Explain this text code:\n\nObject{\n spec: Object.spec{\n serviceAccountName: \"example\"\n }\n}", "completion": "This text code MutatingAdmissionPolicyBinding v1alpha1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "language": "text", "type": "code-example_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | Apply configurations may not modify atomic structs, maps or arrays due to the risk of accidental deletion of values not included in the apply configuration.", "source": 
"https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15409"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},", "completion": " JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To define an object for the patch value, use Object types. For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15410"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{op: \"test\", path: \"/spec/example\", value: \"Red\"},\n JSONPatch{op: \"replace\", path: \"/spec/example\", value: \"Green\"}\n ]", "completion": "This json code MutatingAdmissionPolicyBinding v1alpha1 | To define an object for the patch value, use Object types. For example:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To define an object for the patch value, use Object types. For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15411"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". 
For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15412"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/spec/selector\",\n value: Object.spec.selector{matchLabels: {\"environment\": \"test\"}}\n }\n ]", "completion": "This json code MutatingAdmissionPolicyBinding v1alpha1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | To use strings containing '/' and '~' as JSONPatch path keys, use \"jsonpatch.escapeKey\". For example:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15413"} {"prompt": "Complete this json code:\n\n[\n JSONPatch{\n op: \"add\",", "completion": " path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "language": "json", "type": "api-response", "context": "MutatingAdmissionPolicyBinding v1alpha1 | CEL expressions have access to the types needed to create JSON patches and objects:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_completion_15414"} {"prompt": "Explain this json code:\n\n[\n JSONPatch{\n op: \"add\",\n path: \"/metadata/labels/\" + jsonpatch.escapeKey(\"example.com/environment\"),\n value: \"test\"\n },\n ]", "completion": "This json code MutatingAdmissionPolicyBinding v1alpha1 | CEL expressions have access to the types needed to create JSON patches and objects:", "language": "json", "type": "api-response_explanation", "context": "MutatingAdmissionPolicyBinding v1alpha1 | CEL expressions have access to the types needed to create JSON patches and 
objects:", "source": "https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/mutating-admission-policy-binding-v1alpha1/", "id": "k8s_00607_code_explain_15415"} {"prompt": "Complete this text code:\n\n\\<quantity> ::= \\<signedNumber>\\<suffix>\n\n (Note that \\<suffix> may be empty, from the \"\" case in \\<decimalSI>.)\n\n\\<digit> ::= 0 | 1 | ... | 9 \\<digits> ::= \\<digit> | \\<digit>\\<digits> \\<number> ::= \\<digits> | \\<digits>.\\<digits> | \\<digits>. | .\\<digits> \\<sign> ::= \"+\" | \"-\" \\<signedNumber> ::= \\<number> | \\<sign>\\<number> \\<suffix> ::= \\<binarySI> | \\<decimalExponent> | \\<decimalSI> \\<binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei\n", "completion": " (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n\\<decimalSI> ::= m | \"\" | k | M | G | T | P | E\n\n (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n\\<decimalExponent> ::= \"e\" \\<signedNumber> | \"E\" \\<signedNumber>", "language": "text", "type": "code-example", "context": "Quantity | No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise wil", "source": "https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/", "id": "k8s_00608_code_completion_15416"} {"prompt": "Explain this text code:\n\n\\<quantity> ::= \\<signedNumber>\\<suffix>\n\n (Note that \\<suffix> may be empty, from the \"\" case in \\<decimalSI>.)\n\n\\<digit> ::= 0 | 1 | ... | 9 \\<digits> ::= \\<digit> | \\<digit>\\<digits> \\<number> ::= \\<digits> | \\<digits>.\\<digits> | \\<digits>. 
| .\\<digits> \\<sign> ::= \"+\" | \"-\" \\<signedNumber> ::= \\<number> | \\<sign>\\<number> \\<suffix> ::= \\<binarySI> | \\<decimalExponent> | \\<decimalSI> \\<binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei\n\n (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n\\<decimalSI> ::= m | \"\" | k | M | G | T | P | E\n\n (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n\\<decimalExponent> ::= \"e\" \\<signedNumber> | \"E\" \\<signedNumber>", "completion": "This text code Quantity | No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise wil", "language": "text", "type": "code-example_explanation", "context": "Quantity | No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise wil", "source": "https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/", "id": "k8s_00608_code_explain_15417"} {"prompt": "Complete this text code:\n\n\\<quantity> ::= \\<signedNumber>\\<suffix>\n\n (Note that \\<suffix> may be empty, from the \"\" case in \\<decimalSI>.)\n\n\\<digit> ::= 0 | 1 | ... | 9 \\<digits> ::= \\<digit> | \\<digit>\\<digits> \\<number> ::= \\<digits> | \\<digits>.\\<digits> | \\<digits>. 
| .\\<digits> \\<sign> ::= \"+\" | \"-\" \\<signedNumber> ::= \\<number> | \\<sign>\\<number> \\<suffix> ::= \\<binarySI> | \\<decimalExponent> | \\<decimalSI> \\<binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei\n", "completion": " (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n\\<decimalSI> ::= m | \"\" | k | M | G | T | P | E\n\n (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n\\<decimalExponent> ::= \"e\" \\<signedNumber> | \"E\" \\<signedNumber>", "language": "text", "type": "code-example", "context": "Quantity | No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise wil", "source": "https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/", "id": "k8s_00608_code_completion_15418"} {"prompt": "Explain this text code:\n\n\\<quantity> ::= \\<signedNumber>\\<suffix>\n\n (Note that \\<suffix> may be empty, from the \"\" case in \\<decimalSI>.)\n\n\\<digit> ::= 0 | 1 | ... | 9 \\<digits> ::= \\<digit> | \\<digit>\\<digits> \\<number> ::= \\<digits> | \\<digits>.\\<digits> | \\<digits>. | .\\<digits> \\<sign> ::= \"+\" | \"-\" \\<signedNumber> ::= \\<number> | \\<sign>\\<number> \\<suffix> ::= \\<binarySI> | \\<decimalExponent> | \\<decimalSI> \\<binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei\n\n (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n\n\\<decimalSI> ::= m | \"\" | k | M | G | T | P | E\n\n (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n\n\\<decimalExponent> ::= \"e\" \\<signedNumber> | \"E\" \\<signedNumber>", "completion": "This text code Quantity | No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. 
Numbers larger or more precise wil", "language": "text", "type": "code-example_explanation", "context": "Quantity | No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise wil", "source": "https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/", "id": "k8s_00608_code_explain_15419"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This text code API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15420"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:", "completion": " - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_completion_15421"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15422"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "API Priority and Fairness |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_completion_15423"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code API Priority and Fairness |", "language": "yaml", "type": "service-definition_explanation", "context": "API Priority and Fairness |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15424"} {"prompt": "Explain this shell code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This shell code API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false 
will disable the API Priority and Fairness feature.", "language": "shell", "type": "code-example_explanation", "context": "API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15425"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_wait_duration_seconds", "completion": "This text code API Priority and Fairness | apiserver_flowcontrol_nominal_limit_seats is a gauge vector holding each priority level's nominal concurrency limit, computed from the API server's total concurrency limit and the priority level's con", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness | apiserver_flowcontrol_nominal_limit_seats is a gauge vector holding each priority level's nominal concurrency limit, computed from the API server's total concurrency limit and the priority level's con", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15428"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_read_vs_write_current_requests", "completion": "This text code API Priority and Fairness | apiserver_flowcontrol_request_concurrency_in_use is a gauge vector holding the instantaneous number of occupied seats, broken down by priority_level and flow_schema.", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness | apiserver_flowcontrol_request_concurrency_in_use is a gauge vector holding the instantaneous number of occupied seats, broken down by priority_level and flow_schema.", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": 
"k8s_00609_code_explain_15429"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_priority_level_request_utilization", "completion": "This text code API Priority and Fairness | apiserver_flowcontrol_priority_level_seat_utilization is a histogram vector of observations, made at the end of each nanosecond, of the utilization of a priority level's concurrency limit, broken down", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness | apiserver_flowcontrol_priority_level_seat_utilization is a histogram vector of observations, made at the end of each nanosecond, of the utilization of a priority level's concurrency limit, broken down", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15430"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_priority_level_seat_utilization", "completion": "This text code API Priority and Fairness | apiserver_flowcontrol_request_queue_length_after_enqueue is a histogram vector of queue lengths for the queues, broken down by priority_level and flow_schema, as sampled by the enqueued requests. Each", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness | apiserver_flowcontrol_request_queue_length_after_enqueue is a histogram vector of queue lengths for the queues, broken down by priority_level and flow_schema, as sampled by the enqueued requests. Each", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15431"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_queue_length_after_enqueue", "completion": "This text code API Priority and Fairness | apiserver_flowcontrol_request_concurrency_limit is the same as apiserver_flowcontrol_nominal_limit_seats. 
Before the introduction of concurrency borrowing between priority levels, this was always equa", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness | apiserver_flowcontrol_request_concurrency_limit is the same as apiserver_flowcontrol_nominal_limit_seats. Before the introduction of concurrency borrowing between priority levels, this was always equa", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15432"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_dispatch_no_accommodation_total", "completion": "This text code API Priority and Fairness | apiserver_flowcontrol_epoch_advance_total is a counter vector of the number of attempts to jump a priority level's progress meter backward to avoid numeric overflow, grouped by priority_level and succ", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness | apiserver_flowcontrol_epoch_advance_total is a counter vector of the number of attempts to jump a priority level's progress meter backward to avoid numeric overflow, grouped by priority_level and succ", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15433"} {"prompt": "Explain this text code:\n\npriority-and-fairness/list-events-default-service-account.yaml", "completion": "This text code API Priority and Fairness |", "language": "text", "type": "code-example_explanation", "context": "API Priority and Fairness |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15434"} {"prompt": "Complete this yaml code:\n\npriority-and-fairness/health-for-strangers.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: 
FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:", "completion": " - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_completion_15438"} {"prompt": "Explain this yaml code:\n\npriority-and-fairness/health-for-strangers.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15439"} {"prompt": "Complete this yaml code:\n\npriority-and-fairness/list-events-default-service-account.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "API Priority and Fairness |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_completion_15442"} {"prompt": "Explain this yaml code:\n\npriority-and-fairness/list-events-default-service-account.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code API Priority and Fairness |", "language": "yaml", "type": "service-definition_explanation", "context": "API Priority and Fairness |", "source": "https://kubernetes.io/docs/concepts/cluster-administration/flow-control/", "id": "k8s_00609_code_explain_15443"} {"prompt": "Explain this bash code:\n\nkubectl get pvc cat-pictures-pvc -o jsonpath='{.spec.volumeName}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | And you can view the PVC before you make changes. Either view the manifest locally, or run kubectl get pvc <name-of-pvc> -o yaml. The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | And you can view the PVC before you make changes. 
Either view the manifest locally, or run kubectl get pvc <name-of-pvc> -o yaml. The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15446"} {"prompt": "Complete this yaml code:\n\n# cat-pictures-pvc.yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n name: cat-pictures-pvc", "completion": "spec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15447"} {"prompt": "Explain this yaml code:\n\n# cat-pictures-pvc.yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n name: cat-pictures-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15448"} {"prompt": "Complete this yaml code:\n\n# cat-pictures-writer-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: cat-pictures-writer\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: cat-pictures-writer\n template:\n metadata:\n labels:\n app: cat-pictures-writer", "completion": " spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 
80\n volumeMounts:\n - name: cat-pictures\n mountPath: /mnt\n volumes:\n - name: cat-pictures\n persistentVolumeClaim:\n claimName: cat-pictures-pvc\n readOnly: false", "language": "yaml", "type": "deployment-definition", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. This ensures your PersistentVolume will not be deleted when you delete the correspond", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15449"} {"prompt": "Explain this yaml code:\n\n# cat-pictures-writer-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: cat-pictures-writer\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: cat-pictures-writer\n template:\n metadata:\n labels:\n app: cat-pictures-writer\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n volumeMounts:\n - name: cat-pictures\n mountPath: /mnt\n volumes:\n - name: cat-pictures\n persistentVolumeClaim:\n claimName: cat-pictures-pvc\n readOnly: false", "completion": "This yaml code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. This ensures your PersistentVolume will not be deleted when you delete the correspond", "language": "yaml", "type": "deployment-definition_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. 
This ensures your PersistentVolume will not be deleted when you delete the correspond", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15450"} {"prompt": "Explain this bash code:\n\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to stop any workloads that are using the PersistentVolumeClaim bound to the PersistentVolume you want to migrate, and then delete the PersistentVolumeClaim. Avoid making any other change", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to stop any workloads that are using the PersistentVolumeClaim bound to the PersistentVolume you want to migrate, and then delete the PersistentVolumeClaim. Avoid making any other change", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15451"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas=0 deployment cat-pictures-writer\nkubectl delete pvc cat-pictures-pvc\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"claimRef\":{\"uid\":\"\"}}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | After that, replace the PersistentVolume's list of valid access modes to be (only) ReadWriteOncePod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | After that, replace the PersistentVolume's list of valid access modes to be (only) ReadWriteOncePod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15452"} {"prompt": "Explain this bash code:\n\nkubectl 
patch pv cat-pictures-pv -p '{\"spec\":{\"accessModes\":[\"ReadWriteOncePod\"]}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to modify your PersistentVolumeClaim to set ReadWriteOncePod as the only access mode. You should also set the PersistentVolumeClaim's spec.volumeName to the name of your PersistentVolume", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to modify your PersistentVolumeClaim to set ReadWriteOncePod as the only access mode. You should also set the PersistentVolumeClaim's spec.volumeName to the name of your PersistentVolume", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15453"} {"prompt": "Complete this text code:\n\n# IMPORTANT: Make sure to edit your PVC in cat-pictures-pvc.yaml before applying. You need to:\n# - Set ReadWriteOncePod as the only access mode\n# - Set spec.volumeName to \"cat-pictures-pv\"", "completion": "\nkubectl apply -f cat-pictures-pvc.yaml\nkubectl apply -f cat-pictures-writer-deployment.yaml", "language": "text", "type": "code-example", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15454"} {"prompt": "Explain this text code:\n\n# IMPORTANT: Make sure to edit your PVC in cat-pictures-pvc.yaml before applying. 
You need to:\n# - Set ReadWriteOncePod as the only access mode\n# - Set spec.volumeName to \"cat-pictures-pv\"\n\nkubectl apply -f cat-pictures-pvc.yaml\nkubectl apply -f cat-pictures-writer-deployment.yaml", "completion": "This text code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "language": "text", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15455"} {"prompt": "Explain this bash code:\n\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Delete\"}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod |", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15456"} {"prompt": "Explain this shell code:\n\nkubectl get pvc cat-pictures-pvc -o jsonpath='{.spec.volumeName}'", "completion": "This shell code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | And you can view the PVC before you make changes. Either view the manifest locally, or run kubectl get pvc <name-of-pvc> -o yaml. The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | And you can view the PVC before you make changes. 
Either view the manifest locally, or run kubectl get pvc <name-of-pvc> -o yaml. The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15457"} {"prompt": "Complete this yaml code:\n\n# cat-pictures-pvc.yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n name: cat-pictures-pvc", "completion": "spec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15458"} {"prompt": "Explain this yaml code:\n\n# cat-pictures-pvc.yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n name: cat-pictures-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15459"} {"prompt": "Complete this yaml code:\n\n# cat-pictures-writer-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: cat-pictures-writer\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: cat-pictures-writer\n template:\n metadata:\n labels:\n app: cat-pictures-writer", "completion": " spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 
80\n volumeMounts:\n - name: cat-pictures\n mountPath: /mnt\n volumes:\n - name: cat-pictures\n persistentVolumeClaim:\n claimName: cat-pictures-pvc\n readOnly: false", "language": "yaml", "type": "deployment-definition", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. This ensures your PersistentVolume will not be deleted when you delete the correspond", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15460"} {"prompt": "Explain this yaml code:\n\n# cat-pictures-writer-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: cat-pictures-writer\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: cat-pictures-writer\n template:\n metadata:\n labels:\n app: cat-pictures-writer\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n volumeMounts:\n - name: cat-pictures\n mountPath: /mnt\n volumes:\n - name: cat-pictures\n persistentVolumeClaim:\n claimName: cat-pictures-pvc\n readOnly: false", "completion": "This yaml code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. This ensures your PersistentVolume will not be deleted when you delete the correspond", "language": "yaml", "type": "deployment-definition_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. 
This ensures your PersistentVolume will not be deleted when you delete the correspond", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15461"} {"prompt": "Explain this shell code:\n\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "completion": "This shell code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to stop any workloads that are using the PersistentVolumeClaim bound to the PersistentVolume you want to migrate, and then delete the PersistentVolumeClaim. Avoid making any other change", "language": "shell", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to stop any workloads that are using the PersistentVolumeClaim bound to the PersistentVolume you want to migrate, and then delete the PersistentVolumeClaim. Avoid making any other change", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15462"} {"prompt": "Explain this shell code:\n\nkubectl scale --replicas=0 deployment cat-pictures-writer\nkubectl delete pvc cat-pictures-pvc\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"claimRef\":{\"uid\":\"\"}}}'", "completion": "This shell code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | After that, replace the PersistentVolume's list of valid access modes to be (only) ReadWriteOncePod:", "language": "shell", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | After that, replace the PersistentVolume's list of valid access modes to be (only) ReadWriteOncePod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15463"} {"prompt": "Explain this shell code:\n\nkubectl 
patch pv cat-pictures-pv -p '{\"spec\":{\"accessModes\":[\"ReadWriteOncePod\"]}}'", "completion": "This shell code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to modify your PersistentVolumeClaim to set ReadWriteOncePod as the only access mode. You should also set the PersistentVolumeClaim's spec.volumeName to the name of your PersistentVolume", "language": "shell", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to modify your PersistentVolumeClaim to set ReadWriteOncePod as the only access mode. You should also set the PersistentVolumeClaim's spec.volumeName to the name of your PersistentVolume", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15464"} {"prompt": "Complete this shell code:\n\n# IMPORTANT: Make sure to edit your PVC in cat-pictures-pvc.yaml before applying. You need to:\n# - Set ReadWriteOncePod as the only access mode\n# - Set spec.volumeName to \"cat-pictures-pv\"", "completion": "\nkubectl apply -f cat-pictures-pvc.yaml\nkubectl apply -f cat-pictures-writer-deployment.yaml", "language": "shell", "type": "code-example", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15465"} {"prompt": "Explain this shell code:\n\n# IMPORTANT: Make sure to edit your PVC in cat-pictures-pvc.yaml before applying. 
You need to:\n# - Set ReadWriteOncePod as the only access mode\n# - Set spec.volumeName to \"cat-pictures-pv\"\n\nkubectl apply -f cat-pictures-pvc.yaml\nkubectl apply -f cat-pictures-writer-deployment.yaml", "completion": "This shell code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "language": "shell", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15466"} {"prompt": "Explain this shell code:\n\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Delete\"}}'", "completion": "This shell code Change the Access Mode of a PersistentVolume to ReadWriteOncePod |", "language": "shell", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15467"} {"prompt": "Explain this bash code:\n\nkubectl get pvc cat-pictures-pvc -o jsonpath='{.spec.volumeName}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | And you can view the PVC before you make changes. Either view the manifest locally, or run kubectl get pvc <name-of-pvc> -o yaml. The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | And you can view the PVC before you make changes. 
Either view the manifest locally, or run kubectl get pvc <name-of-pvc> -o yaml. The output is similar to:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15468"} {"prompt": "Complete this yaml code:\n\n# cat-pictures-pvc.yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n name: cat-pictures-pvc", "completion": "spec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15469"} {"prompt": "Explain this yaml code:\n\n# cat-pictures-pvc.yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n name: cat-pictures-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Here's an example Deployment that relies on that PersistentVolumeClaim:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15470"} {"prompt": "Complete this yaml code:\n\n# cat-pictures-writer-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: cat-pictures-writer\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: cat-pictures-writer\n template:\n metadata:\n labels:\n app: cat-pictures-writer", "completion": " spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 
80\n volumeMounts:\n - name: cat-pictures\n mountPath: /mnt\n volumes:\n - name: cat-pictures\n persistentVolumeClaim:\n claimName: cat-pictures-pvc\n readOnly: false", "language": "yaml", "type": "deployment-definition", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. This ensures your PersistentVolume will not be deleted when you delete the correspond", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15471"} {"prompt": "Explain this yaml code:\n\n# cat-pictures-writer-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: cat-pictures-writer\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: cat-pictures-writer\n template:\n metadata:\n labels:\n app: cat-pictures-writer\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n volumeMounts:\n - name: cat-pictures\n mountPath: /mnt\n volumes:\n - name: cat-pictures\n persistentVolumeClaim:\n claimName: cat-pictures-pvc\n readOnly: false", "completion": "This yaml code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. This ensures your PersistentVolume will not be deleted when you delete the correspond", "language": "yaml", "type": "deployment-definition_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | As a first step, you need to edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it to Retain. 
This ensures your PersistentVolume will not be deleted when you delete the correspond", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15472"} {"prompt": "Explain this bash code:\n\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to stop any workloads that are using the PersistentVolumeClaim bound to the PersistentVolume you want to migrate, and then delete the PersistentVolumeClaim. Avoid making any other change", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to stop any workloads that are using the PersistentVolumeClaim bound to the PersistentVolume you want to migrate, and then delete the PersistentVolumeClaim. Avoid making any other change", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15473"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas=0 deployment cat-pictures-writer\nkubectl delete pvc cat-pictures-pvc\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"claimRef\":{\"uid\":\"\"}}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | After that, replace the PersistentVolume's list of valid access modes to be (only) ReadWriteOncePod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | After that, replace the PersistentVolume's list of valid access modes to be (only) ReadWriteOncePod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15474"} {"prompt": "Explain this bash code:\n\nkubectl 
patch pv cat-pictures-pv -p '{\"spec\":{\"accessModes\":[\"ReadWriteOncePod\"]}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to modify your PersistentVolumeClaim to set ReadWriteOncePod as the only access mode. You should also set the PersistentVolumeClaim's spec.volumeName to the name of your PersistentVolume", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Next you need to modify your PersistentVolumeClaim to set ReadWriteOncePod as the only access mode. You should also set the PersistentVolumeClaim's spec.volumeName to the name of your PersistentVolume", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15475"} {"prompt": "Complete this text code:\n\n# IMPORTANT: Make sure to edit your PVC in cat-pictures-pvc.yaml before applying. You need to:\n# - Set ReadWriteOncePod as the only access mode\n# - Set spec.volumeName to \"cat-pictures-pv\"", "completion": "\nkubectl apply -f cat-pictures-pvc.yaml\nkubectl apply -f cat-pictures-writer-deployment.yaml", "language": "text", "type": "code-example", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_completion_15476"} {"prompt": "Explain this text code:\n\n# IMPORTANT: Make sure to edit your PVC in cat-pictures-pvc.yaml before applying. 
You need to:\n# - Set ReadWriteOncePod as the only access mode\n# - Set spec.volumeName to \"cat-pictures-pv\"\n\nkubectl apply -f cat-pictures-pvc.yaml\nkubectl apply -f cat-pictures-writer-deployment.yaml", "completion": "This text code Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "language": "text", "type": "code-example_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod | Lastly you may edit your PersistentVolume's spec.persistentVolumeReclaimPolicy and set it back to Delete if you previously changed it.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15477"} {"prompt": "Explain this bash code:\n\nkubectl patch pv cat-pictures-pv -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Delete\"}}'", "completion": "This bash code Change the Access Mode of a PersistentVolume to ReadWriteOncePod |", "language": "bash", "type": "kubectl-command_explanation", "context": "Change the Access Mode of a PersistentVolume to ReadWriteOncePod |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/change-pv-access-mode-readwriteoncepod/", "id": "k8s_00610_code_explain_15478"} {"prompt": "Complete this text code:\n\nresizePolicy:\n - resourceName: cpu", "completion": " restartPolicy: NotRequired\n - resourceName: memory\n restartPolicy: RestartContainer", "language": "text", "type": "code-example", "context": "Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15479"} {"prompt": "Explain this text code:\n\nresizePolicy:\n - resourceName: cpu\n 
restartPolicy: NotRequired\n - resourceName: memory\n restartPolicy: RestartContainer", "completion": "This text code Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "language": "text", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15480"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:\n - resourceName: cpu", "completion": " restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Resize CPU and Memory Resources assigned to Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15481"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:\n - resourceName: cpu\n restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Resize CPU and Memory Resources assigned to Containers | Create the pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Resize CPU and Memory Resources assigned to 
Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15482"} {"prompt": "Explain this text code:\n\n# Wait a moment for the pod to be running\nkubectl get pod resize-demo --output=yaml", "completion": "This text code Resize CPU and Memory Resources assigned to Containers | Observe the spec.containers[0].resources and status.containerStatuses[0].resources. They should match the manifest (700m CPU, 200Mi memory). Note the status.containerStatuses[0].restartCount (should b", "language": "text", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Observe the spec.containers[0].resources and status.containerStatuses[0].resources. They should match the manifest (700m CPU, 200Mi memory). Note the status.containerStatuses[0].restartCount (should b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15483"} {"prompt": "Complete this bash code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"800m\"}, \"limits\":{\"cpu\":\"800m\"}}}]}}'\n", "completion": "# Alternative methods:\n# kubectl -n qos-example edit pod resize-demo --subresource resize\n# kubectl -n qos-example apply -f <updated-manifest> --subresource resize --server-side", "language": "bash", "type": "kubectl-command", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15484"} {"prompt": "Explain this bash code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"800m\"}, 
\"limits\":{\"cpu\":\"800m\"}}}]}}'\n\n# Alternative methods:\n# kubectl -n qos-example edit pod resize-demo --subresource resize\n# kubectl -n qos-example apply -f <updated-manifest> --subresource resize --server-side", "completion": "This bash code Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15485"} {"prompt": "Explain this bash code:\n\nkubectl get pod resize-demo --output=yaml --namespace=qos-example", "completion": "This bash code Resize CPU and Memory Resources assigned to Containers | You should see:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | You should see:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15486"} {"prompt": "Explain this bash code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"memory\":\"300Mi\"}, \"limits\":{\"memory\":\"300Mi\"}}}]}}'", "completion": "This bash code Resize CPU and Memory Resources assigned to Containers | Check the pod status shortly after patching:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status shortly after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15487"} {"prompt": "Explain this text code:\n\n# Attempt to patch with an excessively large CPU request\nkubectl patch pod resize-demo --subresource resize 
--patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"1000\"}, \"limits\":{\"cpu\":\"1000\"}}}]}}'", "completion": "This text code Resize CPU and Memory Resources assigned to Containers | Query the Pod's details:", "language": "text", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Query the Pod's details:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15488"} {"prompt": "Complete this yaml code:\n\nresizePolicy:\n - resourceName: cpu", "completion": " restartPolicy: NotRequired\n - resourceName: memory\n restartPolicy: RestartContainer", "language": "yaml", "type": "kubernetes-manifest", "context": "Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15489"} {"prompt": "Explain this yaml code:\n\nresizePolicy:\n - resourceName: cpu\n restartPolicy: NotRequired\n - resourceName: memory\n restartPolicy: RestartContainer", "completion": "This yaml code Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15490"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:\n - 
resourceName: cpu", "completion": " restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Resize CPU and Memory Resources assigned to Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15491"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:\n - resourceName: cpu\n restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Resize CPU and Memory Resources assigned to Containers | Create the pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15492"} {"prompt": "Explain this shell code:\n\n# Wait a moment for the pod to be running\nkubectl get pod resize-demo --output=yaml", "completion": "This shell code Resize CPU and Memory Resources assigned to Containers | Observe the spec.containers[0].resources and status.containerStatuses[0].resources. They should match the manifest (700m CPU, 200Mi memory). Note the status.containerStatuses[0].restartCount (should b", "language": "shell", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Observe the spec.containers[0].resources and status.containerStatuses[0].resources. 
They should match the manifest (700m CPU, 200Mi memory). Note the status.containerStatuses[0].restartCount (should b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15493"} {"prompt": "Complete this shell code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"800m\"}, \"limits\":{\"cpu\":\"800m\"}}}]}}'\n", "completion": "# Alternative methods:\n# kubectl -n qos-example edit pod resize-demo --subresource resize\n# kubectl -n qos-example apply -f <updated-manifest> --subresource resize --server-side", "language": "shell", "type": "code-example", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15494"} {"prompt": "Explain this shell code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"800m\"}, \"limits\":{\"cpu\":\"800m\"}}}]}}'\n\n# Alternative methods:\n# kubectl -n qos-example edit pod resize-demo --subresource resize\n# kubectl -n qos-example apply -f <updated-manifest> --subresource resize --server-side", "completion": "This shell code Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "language": "shell", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15495"} {"prompt": "Explain this shell code:\n\nkubectl get pod resize-demo --output=yaml --namespace=qos-example", "completion": "This shell code Resize 
CPU and Memory Resources assigned to Containers | You should see:", "language": "shell", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | You should see:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15496"} {"prompt": "Explain this shell code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"memory\":\"300Mi\"}, \"limits\":{\"memory\":\"300Mi\"}}}]}}'", "completion": "This shell code Resize CPU and Memory Resources assigned to Containers | Check the pod status shortly after patching:", "language": "shell", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status shortly after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15497"} {"prompt": "Explain this shell code:\n\n# Attempt to patch with an excessively large CPU request\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"1000\"}, \"limits\":{\"cpu\":\"1000\"}}}]}}'", "completion": "This shell code Resize CPU and Memory Resources assigned to Containers | Query the Pod's details:", "language": "shell", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Query the Pod's details:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15498"} {"prompt": "Explain this text code:\n\nNode didn't have enough capacity: cpu, requested: 800000, capacity: ...", "completion": "This text code Resize CPU and Memory Resources assigned to Containers | To fix this, you would need to patch the pod again with 
feasible resource values.", "language": "text", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | To fix this, you would need to patch the pod again with feasible resource values.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15499"} {"prompt": "Complete this text code:\n\nresizePolicy:\n - resourceName: cpu", "completion": " restartPolicy: NotRequired\n - resourceName: memory\n restartPolicy: RestartContainer", "language": "text", "type": "code-example", "context": "Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15500"} {"prompt": "Explain this text code:\n\nresizePolicy:\n - resourceName: cpu\n restartPolicy: NotRequired\n - resourceName: memory\n restartPolicy: RestartContainer", "completion": "This text code Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "language": "text", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | If resizePolicy[*].restartPolicy is not specified for a resource, it defaults to NotRequired.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15501"} {"prompt": "Complete this yaml code:\n\npods/resource/pod-resize.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:", "completion": " - resourceName: cpu\n restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n 
limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Resize CPU and Memory Resources assigned to Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15502"} {"prompt": "Explain this yaml code:\n\npods/resource/pod-resize.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:\n - resourceName: cpu\n restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Resize CPU and Memory Resources assigned to Containers | Create the pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15503"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:\n - resourceName: cpu", "completion": " restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Resize CPU and Memory Resources assigned to Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15504"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: resize-demo\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.8\n resizePolicy:\n - resourceName: cpu\n restartPolicy: NotRequired # Default, but explicit here\n - resourceName: memory\n restartPolicy: RestartContainer\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Resize CPU and Memory Resources assigned to Containers | Create the pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Create the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15505"} {"prompt": "Explain this text code:\n\n# Wait a moment for the pod to be running\nkubectl get pod resize-demo --output=yaml", "completion": "This text code Resize CPU and Memory Resources assigned to Containers | Observe the spec.containers[0].resources and status.containerStatuses[0].resources. They should match the manifest (700m CPU, 200Mi memory). Note the status.containerStatuses[0].restartCount (should b", "language": "text", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Observe the spec.containers[0].resources and status.containerStatuses[0].resources. They should match the manifest (700m CPU, 200Mi memory). 
Note the status.containerStatuses[0].restartCount (should b", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15506"} {"prompt": "Complete this bash code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"800m\"}, \"limits\":{\"cpu\":\"800m\"}}}]}}'\n", "completion": "# Alternative methods:\n# kubectl -n qos-example edit pod resize-demo --subresource resize\n# kubectl -n qos-example apply -f <updated-manifest> --subresource resize --server-side", "language": "bash", "type": "kubectl-command", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_completion_15507"} {"prompt": "Explain this bash code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"800m\"}, \"limits\":{\"cpu\":\"800m\"}}}]}}'\n\n# Alternative methods:\n# kubectl -n qos-example edit pod resize-demo --subresource resize\n# kubectl -n qos-example apply -f <updated-manifest> --subresource resize --server-side", "completion": "This bash code Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status again after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15508"} {"prompt": "Explain this bash code:\n\nkubectl get pod resize-demo --output=yaml --namespace=qos-example", "completion": "This bash code Resize CPU and Memory Resources assigned to Containers | You 
should see:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | You should see:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15509"} {"prompt": "Explain this bash code:\n\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"memory\":\"300Mi\"}, \"limits\":{\"memory\":\"300Mi\"}}}]}}'", "completion": "This bash code Resize CPU and Memory Resources assigned to Containers | Check the pod status shortly after patching:", "language": "bash", "type": "kubectl-command_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Check the pod status shortly after patching:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15510"} {"prompt": "Explain this text code:\n\n# Attempt to patch with an excessively large CPU request\nkubectl patch pod resize-demo --subresource resize --patch \\\n '{\"spec\":{\"containers\":[{\"name\":\"pause\", \"resources\":{\"requests\":{\"cpu\":\"1000\"}, \"limits\":{\"cpu\":\"1000\"}}}]}}'", "completion": "This text code Resize CPU and Memory Resources assigned to Containers | Query the Pod's details:", "language": "text", "type": "code-example_explanation", "context": "Resize CPU and Memory Resources assigned to Containers | Query the Pod's details:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/", "id": "k8s_00611_code_explain_15511"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "clusterDNS:\n- 10.96.0.10", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring each kubelet in your cluster using kubeadm | For more details on the 
KubeletConfiguration have a look at this section.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_completion_15512"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nclusterDNS:\n- 10.96.0.10", "completion": "This yaml code Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15513"} {"prompt": "Explain this text code:\n\nKUBELET_KUBEADM_ARGS=\"--flag1=value1 --flag2=value2 ...\"", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver.", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15514"} {"prompt": "Explain this text code:\n\nsystemctl daemon-reload && systemctl restart kubelet", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | If the reload and restart are successful, the normal kubeadm init workflow continues.", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | If the reload and restart are successful, the 
normal kubeadm init workflow continues.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15515"} {"prompt": "Explain this text code:\n\nsystemctl daemon-reload && systemctl restart kubelet", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | After the kubelet loads the new configuration, kubeadm writes the /etc/kubernetes/bootstrap-kubelet.conf KubeConfig file, which contains a CA certificate and Bootstrap Token. These are used by the kub", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | After the kubelet loads the new configuration, kubeadm writes the /etc/kubernetes/bootstrap-kubelet.conf KubeConfig file, which contains a CA certificate and Bootstrap Token. These are used by the kub", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15516"} {"prompt": "Complete this json code:\n\n[Service]\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf\"\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml\"\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generate at runtime, populating\n# the KUBELET_KUBEADM_ARGS variable dynamically\nEnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env", "completion": "# This is a file that the user can use for overrides of the kubelet args as a last resort. 
Preferably,\n# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.\n# KUBELET_EXTRA_ARGS should be sourced from this file.\nEnvironmentFile=-/etc/default/kubelet\nExecStart=\nExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS", "language": "json", "type": "api-response", "context": "Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_completion_15517"} {"prompt": "Explain this json code:\n\n[Service]\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf\"\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml\"\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generate at runtime, populating\n# the KUBELET_KUBEADM_ARGS variable dynamically\nEnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env\n# This is a file that the user can use for overrides of the kubelet args as a last resort. 
Preferably,\n# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.\n# KUBELET_EXTRA_ARGS should be sourced from this file.\nEnvironmentFile=-/etc/default/kubelet\nExecStart=\nExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS", "completion": "This json code Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "language": "json", "type": "api-response_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15518"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "clusterDNS:\n- 10.96.0.10", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_completion_15519"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nclusterDNS:\n- 10.96.0.10", "completion": "This yaml code Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "source": 
"https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15520"} {"prompt": "Explain this text code:\n\nkubeadm config print init-defaults --component-configs KubeletConfiguration", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | It is also possible to apply instance-specific patches over the base KubeletConfiguration. Have a look at Customizing the kubelet for more details.", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | It is also possible to apply instance-specific patches over the base KubeletConfiguration. Have a look at Customizing the kubelet for more details.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15521"} {"prompt": "Explain this bash code:\n\nKUBELET_KUBEADM_ARGS=\"--flag1=value1 --flag2=value2 ...\"", "completion": "This bash code Configuring each kubelet in your cluster using kubeadm | In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver.", "language": "bash", "type": "shell-command_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15522"} {"prompt": "Explain this bash code:\n\nsystemctl daemon-reload && systemctl restart kubelet", "completion": "This bash code Configuring each kubelet in your cluster using kubeadm | If the reload and restart are successful, the normal kubeadm init workflow continues.", "language": "bash", "type": "shell-command_explanation", "context": "Configuring each kubelet in your cluster using 
kubeadm | If the reload and restart are successful, the normal kubeadm init workflow continues.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15523"} {"prompt": "Explain this bash code:\n\nsystemctl daemon-reload && systemctl restart kubelet", "completion": "This bash code Configuring each kubelet in your cluster using kubeadm | After the kubelet loads the new configuration, kubeadm writes the /etc/kubernetes/bootstrap-kubelet.conf KubeConfig file, which contains a CA certificate and Bootstrap Token. These are used by the kub", "language": "bash", "type": "shell-command_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | After the kubelet loads the new configuration, kubeadm writes the /etc/kubernetes/bootstrap-kubelet.conf KubeConfig file, which contains a CA certificate and Bootstrap Token. These are used by the kub", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15524"} {"prompt": "Explain this text code:\n\n/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | If you want to override that further, you can make a directory /etc/systemd/system/kubelet.service.d/ (not /usr/lib/systemd/system/kubelet.service.d/) and put your own customizations into a file there", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | If you want to override that further, you can make a directory /etc/systemd/system/kubelet.service.d/ (not /usr/lib/systemd/system/kubelet.service.d/) and put your own customizations into a file there", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15525"} {"prompt": "Explain this text 
code:\n\n/etc/systemd/system/kubelet.service.d/local-overrides.conf", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | Here is what you are likely to find in /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf:", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | Here is what you are likely to find in /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15526"} {"prompt": "Explain this text code:\n\n/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15527"} {"prompt": "Complete this none code:\n\n[Service]\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf\"\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml\"\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generate at runtime, populating\n# the KUBELET_KUBEADM_ARGS variable dynamically\nEnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env", "completion": "# This is a file that the user can use for overrides of the kubelet args as a last resort. 
Preferably,\n# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.\n# KUBELET_EXTRA_ARGS should be sourced from this file.\nEnvironmentFile=-/etc/default/kubelet\nExecStart=\nExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS", "language": "none", "type": "code-example", "context": "Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_completion_15528"} {"prompt": "Explain this none code:\n\n[Service]\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf\"\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml\"\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generate at runtime, populating\n# the KUBELET_KUBEADM_ARGS variable dynamically\nEnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env\n# This is a file that the user can use for overrides of the kubelet args as a last resort. 
Preferably,\n# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.\n# KUBELET_EXTRA_ARGS should be sourced from this file.\nEnvironmentFile=-/etc/default/kubelet\nExecStart=\nExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS", "completion": "This none code Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "language": "none", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | This file specifies the default locations for all of the files managed by kubeadm for the kubelet.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15529"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "clusterDNS:\n- 10.96.0.10", "language": "yaml", "type": "kubernetes-manifest", "context": "Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_completion_15530"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nclusterDNS:\n- 10.96.0.10", "completion": "This yaml code Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | For more details on the KubeletConfiguration have a look at this section.", "source": 
"https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15531"} {"prompt": "Explain this text code:\n\nKUBELET_KUBEADM_ARGS=\"--flag1=value1 --flag2=value2 ...\"", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver.", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15532"} {"prompt": "Explain this text code:\n\nsystemctl daemon-reload && systemctl restart kubelet", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | If the reload and restart are successful, the normal kubeadm init workflow continues.", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | If the reload and restart are successful, the normal kubeadm init workflow continues.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15533"} {"prompt": "Explain this text code:\n\nsystemctl daemon-reload && systemctl restart kubelet", "completion": "This text code Configuring each kubelet in your cluster using kubeadm | After the kubelet loads the new configuration, kubeadm writes the /etc/kubernetes/bootstrap-kubelet.conf KubeConfig file, which contains a CA certificate and Bootstrap Token. 
These are used by the kub", "language": "text", "type": "code-example_explanation", "context": "Configuring each kubelet in your cluster using kubeadm | After the kubelet loads the new configuration, kubeadm writes the /etc/kubernetes/bootstrap-kubelet.conf KubeConfig file, which contains a CA certificate and Bootstrap Token. These are used by the kub", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/", "id": "k8s_00615_code_explain_15534"} {"prompt": "Explain this text code:\n\nmain_menu: true\ntitle: Page Title\nlinkTitle: Title used in links", "completion": "This text code Content organization | The documentation side-bar menu is built from the current section tree starting below docs/.", "language": "text", "type": "code-example_explanation", "context": "Content organization | The documentation side-bar menu is built from the current section tree starting below docs/.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15535"} {"prompt": "Complete this text code:\n\nen/docs/home/contribute/includes\n├── example1.md", "completion": "├── example2.md\n├── index.md\n└── podtemplate.json", "language": "text", "type": "code-example", "context": "Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_completion_15536"} {"prompt": "Explain this text code:\n\nen/docs/home/contribute/includes\n├── example1.md\n├── example2.md\n├── index.md\n└── podtemplate.json", "completion": "This text code Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. 
It is only used in other pages.", "language": "text", "type": "code-example_explanation", "context": "Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15537"} {"prompt": "Complete this text code:\n\nen/includes\n├── default-storage-class-prereqs.md\n├── index.md\n├── partner-script.js", "completion": "├── partner-style.css\n├── task-tutorial-prereqs.md\n├── user-guide-content-moved.md\n└── user-guide-migration-notice.md", "language": "text", "type": "code-example", "context": "Content organization | Some important notes to the files in the bundles:", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_completion_15538"} {"prompt": "Explain this text code:\n\nen/includes\n├── default-storage-class-prereqs.md\n├── index.md\n├── partner-script.js\n├── partner-style.css\n├── task-tutorial-prereqs.md\n├── user-guide-content-moved.md\n└── user-guide-migration-notice.md", "completion": "This text code Content organization | Some important notes to the files in the bundles:", "language": "text", "type": "code-example_explanation", "context": "Content organization | Some important notes to the files in the bundles:", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15539"} {"prompt": "Explain this yaml code:\n\nmain_menu: true\ntitle: Page Title\nlinkTitle: Title used in links", "completion": "This yaml code Content organization | The documentation side-bar menu is built from the current section tree starting below docs/.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Content organization | The documentation side-bar menu is built from the current section tree starting below docs/.", 
"source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15540"} {"prompt": "Complete this bash code:\n\nen/docs/home/contribute/includes\n├── example1.md", "completion": "├── example2.md\n├── index.md\n└── podtemplate.json", "language": "bash", "type": "shell-command", "context": "Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_completion_15541"} {"prompt": "Explain this bash code:\n\nen/docs/home/contribute/includes\n├── example1.md\n├── example2.md\n├── index.md\n└── podtemplate.json", "completion": "This bash code Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.", "language": "bash", "type": "shell-command_explanation", "context": "Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. 
It is only used in other pages.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15542"} {"prompt": "Complete this bash code:\n\nen/includes\n├── default-storage-class-prereqs.md\n├── index.md\n├── partner-script.js", "completion": "├── partner-style.css\n├── task-tutorial-prereqs.md\n├── user-guide-content-moved.md\n└── user-guide-migration-notice.md", "language": "bash", "type": "shell-command", "context": "Content organization | Some important notes to the files in the bundles:", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_completion_15543"} {"prompt": "Explain this bash code:\n\nen/includes\n├── default-storage-class-prereqs.md\n├── index.md\n├── partner-script.js\n├── partner-style.css\n├── task-tutorial-prereqs.md\n├── user-guide-content-moved.md\n└── user-guide-migration-notice.md", "completion": "This bash code Content organization | Some important notes to the files in the bundles:", "language": "bash", "type": "shell-command_explanation", "context": "Content organization | Some important notes to the files in the bundles:", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15544"} {"prompt": "Explain this text code:\n\nmain_menu: true\ntitle: Page Title\nlinkTitle: Title used in links", "completion": "This text code Content organization | The documentation side-bar menu is built from the current section tree starting below docs/.", "language": "text", "type": "code-example_explanation", "context": "Content organization | The documentation side-bar menu is built from the current section tree starting below docs/.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15545"} {"prompt": "Complete this text code:\n\nen/docs/home/contribute/includes\n├── example1.md", "completion": "├── example2.md\n├── index.md\n└── 
podtemplate.json", "language": "text", "type": "code-example", "context": "Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_completion_15546"} {"prompt": "Explain this text code:\n\nen/docs/home/contribute/includes\n├── example1.md\n├── example2.md\n├── index.md\n└── podtemplate.json", "completion": "This text code Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.", "language": "text", "type": "code-example_explanation", "context": "Content organization | Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15547"} {"prompt": "Complete this text code:\n\nen/includes\n├── default-storage-class-prereqs.md\n├── index.md\n├── partner-script.js", "completion": "├── partner-style.css\n├── task-tutorial-prereqs.md\n├── user-guide-content-moved.md\n└── user-guide-migration-notice.md", "language": "text", "type": "code-example", "context": "Content organization | Some important notes to the files in the bundles:", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_completion_15548"} {"prompt": "Explain this text code:\n\nen/includes\n├── default-storage-class-prereqs.md\n├── index.md\n├── partner-script.js\n├── partner-style.css\n├── task-tutorial-prereqs.md\n├── user-guide-content-moved.md\n└── user-guide-migration-notice.md", "completion": "This text code Content organization | Some important notes to the files in 
the bundles:", "language": "text", "type": "code-example_explanation", "context": "Content organization | Some important notes to the files in the bundles:", "source": "https://kubernetes.io/docs/contribute/style/content-organization/", "id": "k8s_00616_code_explain_15549"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15550"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15551"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code 1.2 - Objects In Kubernetes | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2 - Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15552"} {"prompt": "Explain this bash code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This bash code 1.2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15553"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15554"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15555"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15556"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15557"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15558"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15559"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15560"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code 1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. 
Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15561"} {"prompt": "Complete this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)", "completion": "partition\n!partition", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15562"} {"prompt": "Explain this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)\npartition\n!partition", "completion": "This text code 1.2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. 
So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15563"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This bash code 1.2.3 - Labels and Selectors | or using set-based requirements:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15564"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This bash code 1.2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15565"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This bash code 1.2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15566"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This bash code 1.2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15567"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15568"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This text code 1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15569"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15570"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This text code 1.2.3 - Labels and Selectors | and", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15571"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: replica", "language": "text", "type": "code-example", "context": "1.2.3 - Labels 
and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15572"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This text code 1.2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15573"} {"prompt": "Explain this bash code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This bash code 1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15574"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>", "completion": "guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15575"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>\nguestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "completion": "This text code 1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15576"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nguestbook-redis-replica-2q2yf 1/1 Running 0 3m\nguestbook-redis-replica-qgazl 1/1 Running 0 3m", "completion": "This text code 1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15577"} {"prompt": "Explain this text code:\n\npod/my-nginx-2035384211-j5fhi labeled\npod/my-nginx-2035384211-u2c7e labeled\npod/my-nginx-2035384211-u3t6x labeled", "completion": "This text code 1.2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". To see the pods you labeled, run:", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". 
To see the pods you labeled, run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15578"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe", "completion": "my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15579"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe\nmy-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "completion": "This text code 1.2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15580"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "1.2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15581"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "completion": "This text code 
1.2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "1.2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15582"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code 1.2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15583"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code 1.2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. 
This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15584"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15585"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code 1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15586"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15587"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "language": "text", "type": "code-example_explanation", "context": "1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15588"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. 
Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15589"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15590"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This bash code 1.2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. 
For example", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15591"} {"prompt": "Explain this text code:\n\nError from server (BadRequest): Unable to find \"ingresses\" that match label selector \"\", field selector \"foo.bar=baz\": \"foo.bar\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"", "completion": "This text code 1.2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "language": "text", "type": "code-example_explanation", "context": "1.2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15592"} {"prompt": "Explain this bash code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 1.2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15593"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This bash code 1.2.6 - Field Selectors | You can use field selectors across multiple resource types. 
This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15594"} {"prompt": "Explain this bash code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 1.2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15595"} {"prompt": "Complete this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:", "completion": " app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15596"} {"prompt": "Explain this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "completion": "This yaml code 1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15597"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "deployment-definition", "context": "1.2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15598"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | The Service is used to expose the application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15599"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. 
The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15600"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15601"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "deployment-definition", "context": "1.2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15602"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended 
Labels | The Service is used to expose WordPress:", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15603"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15604"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15605"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: 
Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15606"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15607"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15608"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n 
app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15609"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }", "completion": " },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "language": "json", "type": "api-response", "context": "1.3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15610"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n },\n {\n \"name\": \"apps\",\n 
\"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "completion": "This json code 1.3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "language": "json", "type": "api-response_explanation", "context": "1.3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15611"} {"prompt": "Complete this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "json", "type": "api-response", "context": "1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15612"} {"prompt": "Explain this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This json code 1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "json", "type": "api-response_explanation", "context": "1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15613"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. 
If the node is hea", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15614"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code 2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "language": "json", "type": "api-response_explanation", "context": "2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15615"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This bash code 2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15616"} {"prompt": "Complete this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s", "completion": "apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "language": "text", "type": "code-example", "context": "2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15617"} {"prompt": "Explain this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s\napiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "completion": "This text code 2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "text", "type": "code-example_explanation", "context": "2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15618"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This bash code 2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15619"} {"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15620"} {"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: 
apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code 2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15621"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15622"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n 
- nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code 2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15623"} {"prompt": "Complete this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "text", "type": "code-example", "context": "2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer 
(the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15624"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This text code 2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "text", "type": "code-example_explanation", "context": "2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15625"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },", "completion": " \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "language": "json", "type": "api-response", "context": "3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. 
For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15626"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },\n \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "completion": "This json code 3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "language": "json", "type": "api-response_explanation", "context": "3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15627"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },", "completion": " \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "3.1 - Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15628"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },\n \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "completion": "This json code 3.1 - Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "language": "json", "type": "api-response_explanation", "context": "3.1 - 
Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15629"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\", "completion": " --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "language": "bash", "type": "kubectl-command", "context": "3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15630"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\\n --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "completion": "This bash code 3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "language": "bash", "type": "kubectl-command_explanation", "context": "3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. 
Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15631"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo", "completion": " image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "language": "yaml", "type": "pod-definition", "context": "3.1 - Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15632"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo\n image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "completion": "This yaml code 3.1 - Images | This needs to be done for each Pod that is using a private registry.", "language": "yaml", "type": "pod-definition_explanation", "context": "3.1 - Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15633"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the service is running on>\nFOO_SERVICE_PORT=<the port the service is running on>", "completion": "This text code 3.2 - Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "language": "text", "type": "code-example_explanation", "context": "3.2 - Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_15634"} {"prompt": "Complete this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: myconfiguration", "language": "yaml", "type": "kubernetes-manifest", "context": "3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15635"} {"prompt": "Explain this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: myconfiguration", "completion": "This yaml code 3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15636"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "language": "yaml", "type": "pod-definition", "context": "3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. 
If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15637"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "completion": "This yaml code 3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "language": "yaml", "type": "pod-definition_explanation", "context": "3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15638"} {"prompt": "Explain this json code:\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.${HANDLER_NAME}]", "completion": "This json code 3.3 - Runtime Class | See containerd's config documentation for more details:", "language": "json", "type": "api-response_explanation", "context": "3.3 - Runtime Class | See containerd's config documentation for more details:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15639"} {"prompt": "Explain this json code:\n\n[crio.runtime.runtimes.${HANDLER_NAME}]\n runtime_path = \"${PATH_TO_BINARY}\"", "completion": "This json code 3.3 - Runtime Class | See CRI-O's config documentation for more details.", "language": "json", "type": "api-response_explanation", "context": "3.3 - Runtime Class | See CRI-O's config documentation for more details.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15640"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason 
Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling image \"nginx\"", "completion": " Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "language": "text", "type": "code-example", "context": "3.4 - Container Lifecycle Hooks | A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. 
In Kubernetes, a Pod repre", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15641"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling image \"nginx\"\n Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "completion": "This text code 3.4 - Container Lifecycle Hooks | A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. In Kubernetes, a Pod repre", "language": "text", "type": "code-example_explanation", "context": "3.4 - Container Lifecycle Hooks | A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. 
In Kubernetes, a Pod repre", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15642"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15643"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.1 - Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15644"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This bash code 4.1 - Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.1 - Pods | Pods are generally not created directly and are created using workload resources. 
See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15645"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15646"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "completion": "This yaml code 4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. 
If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15647"} {"prompt": "Explain this text code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\n alessandras-namespace alessandras-pod 0/1 CrashLoopBackOff 200 2d9h", "completion": "This text code 4.1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "language": "text", "type": "code-example_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15648"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:", "completion": " - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15649"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:\n - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code 4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15650"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:", "completion": " - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15651"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:\n - name: init-once # This init container will only try once. If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code 4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. 
This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15652"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes", "completion": " image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15653"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "completion": "This yaml code 4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15654"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15655"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code 4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15656"} {"prompt": "Complete this yaml 
code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15657"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code 4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15658"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | The Pod 
conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15659"} {"prompt": "Explain this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "completion": "This yaml code 4.1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15660"} {"prompt": "Complete this text code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "text", "type": "code-example", "context": "4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15661"} {"prompt": "Explain this text code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This text code 4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "language": "text", "type": "code-example_explanation", "context": "4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15662"} {"prompt": "Explain this text code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This text code 4.1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15663"} {"prompt": "Explain this text code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'", "completion": "This text code 4.1.2 - Init Containers | Wait for some time before starting the app container with a command like", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | Wait for some time before starting the app container with a 
command like", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15664"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "4.1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15665"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code 4.1.2 - Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15666"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 0/1 Init:0/2 0 6m", "completion": "This text code 4.1.2 - Init Containers | or for more details:", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | or for more details:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15667"} {"prompt": "Complete this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False", "completion": "[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} 
spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "language": "text", "type": "code-example", "context": "4.1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15668"} {"prompt": "Explain this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "completion": "This text code 4.1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15669"} {"prompt": "Explain this bash code:\n\nkubectl logs myapp-pod -c init-myservice # Inspect the first init container\nkubectl logs myapp-pod -c init-mydb # Inspect the second init container", "completion": "This bash code 4.1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15670"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "4.1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15671"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "completion": "This yaml code 4.1.2 - Init Containers | To create the mydb and myservice services:", "language": "yaml", "type": "service-definition_explanation", "context": "4.1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15672"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 1/1 Running 0 
9m", "completion": "This text code 4.1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15673"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15674"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15675"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15676"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15677"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15678"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15679"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", 
"type": "pod-definition", "context": "4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15680"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15681"} {"prompt": "Complete this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)", "completion": "# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "language": "text", "type": "code-example", "context": "4.1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15682"} {"prompt": "Explain this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)\n# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "completion": "This text code 4.1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "language": "text", "type": "code-example_explanation", "context": "4.1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count 
for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15683"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15684"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code 4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. 
Run", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15685"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15686"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15687"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This bash code 4.2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15688"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 0/3 0 0 1s", "completion": "This text code 4.2.1 - Deployments | When you inspect the Deployments in your cluster, the following 
fields are displayed:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15689"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 4.2.1 - Deployments | Run the kubectl get deployments again a few seconds later. The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Run the kubectl get deployments again a few seconds later. The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15690"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 18s", "completion": "This text code 4.2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15691"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-75675f5897 3 3 3 18s", "completion": "This text code 4.2.1 - Deployments | ReplicaSet output shows the following fields:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | ReplicaSet output shows the following fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15692"} {"prompt": "Complete this 
text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15693"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "This text code 4.2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15694"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 4.2.1 - Deployments | or use the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | or use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15695"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 4.2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take 
place and nginx:1.16.1 indicates the new image and its tag.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15696"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...", "completion": "This text code 4.2.1 - Deployments | or", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15697"} {"prompt": "Explain this text code:\n\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 4.2.1 - Deployments | Get more details on your updated Deployment:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get more details on your updated Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15698"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 36s", "completion": "This text code 4.2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15699"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY 
AGE\nnginx-deployment-1564180365 3 3 3 6s\nnginx-deployment-2035384211 0 0 0 36s", "completion": "This text code 4.2.1 - Deployments | Running get pods should now show only the new Pods:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Running get pods should now show only the new Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15700"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s", "completion": "nginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15701"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s\nnginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "completion": "This text code 4.2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15702"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 
0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. 
When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15703"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "completion": "This text code 4.2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. 
When you updated the Deployment, it created a ne", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15704"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15705"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 1 out of 3 new replicas have been updated...", "completion": "This text code 4.2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. 
For more information on stuck rollouts, read more here.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15706"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s", "completion": "nginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15707"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s\nnginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "completion": "This text code 4.2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15708"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s", "completion": "nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15709"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s\nnginx-deployment-1564180365-jbqqo 1/1 Running 0 
25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "completion": "This text code 4.2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15710"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 
{deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15711"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set 
nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "completion": "This text code 4.2.1 - Deployments | To fix this, you need to roll back to a previous revision of Deployment that is stable.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | To fix this, you need to roll back to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15712"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15713"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE", "completion": "1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. 
You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15714"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE\n1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This text code 4.2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15715"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15716"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:", "completion": " Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Follow the steps given below to roll back the Deployment from the current 
version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15717"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "completion": "This text code 4.2.1 - Deployments | Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15718"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15719"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 30m", "completion": "This text code 4.2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15720"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 
2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "language": "text", "type": 
"code-example", "context": "4.2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15721"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled 
up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "completion": "This text code 4.2.1 - Deployments | You can scale a Deployment by using the following command:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15722"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15723"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15724"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 10 10 10 10 50s", "completion": "This text code 4.2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15725"} 
{"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15726"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 5 5 0 9s\nnginx-deployment-618515232 8 8 8 1m", "completion": "This text code 4.2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. 
If you", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15727"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 15 18 7 8 7m", "completion": "This text code 4.2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15728"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 7 7 0 7m\nnginx-deployment-618515232 11 11 11 7m", "completion": "This text code 4.2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deplo", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. 
When you're ready to apply those changes, you resume rollouts for the Deplo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15729"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx 3 3 3 3 1m", "completion": "This text code 4.2.1 - Deployments | Get the rollout status:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the rollout status:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15730"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 1m", "completion": "This text code 4.2.1 - Deployments | Pause by running the following command:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Pause by running the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15731"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15732"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15733"} {"prompt": "Explain this text code:\n\ndeployments \"nginx\"\nREVISION CHANGE-CAUSE\n1 <none>", "completion": "This text code 4.2.1 - Deployments | Get the rollout status to verify that the existing 
ReplicaSet has not changed:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the rollout status to verify that the existing ReplicaSet has not changed:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15734"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 2m", "completion": "This text code 4.2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15735"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15736"} {"prompt": "Explain this text code:\n\ndeployment.apps/nginx-deployment resource requirements updated", "completion": "This text code 4.2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_15737"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s", "completion": "nginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15738"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s\nnginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "completion": "This text code 4.2.1 - Deployments | Get the status of the latest rollout:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15739"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 28s", "completion": "This text code 4.2.1 - Deployments | A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | A Deployment enters various states during its lifecycle. 
It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15740"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 of 3 updated replicas are available...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 4.2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15741"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15742"} {"prompt": "Complete this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15743"} {"prompt": "Explain this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "completion": "This text code 4.2.1 - Deployments | If you run kubectl get deployment 
nginx-deployment -o yaml, the Deployment status is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15744"} {"prompt": "Complete this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.", "completion": " reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15745"} {"prompt": "Explain this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has 
minimum availability.\n reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "completion": "This text code 4.2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15746"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. 
If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15747"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "completion": "This text code 4.2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15748"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason", "completion": " ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15749"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "completion": "This text code 4.2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. 
type: Progressing with", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15750"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\nerror: deployment \"nginx\" exceeded its progress deadline", "completion": "This text code 4.2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15751"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15752"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n 
spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code 4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15753"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15754"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code 4.2.1 - Deployments | apiVersion: apps/v1 
kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15755"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15756"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code 4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the 
Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15757"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15758"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster 
will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15759"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15760"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nfrontend 3 3 3 6s", "completion": "This text code 4.2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15761"} {"prompt": "Complete this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:", "completion": " Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15762"} {"prompt": "Explain this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:\n Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "completion": "This text code 4.2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15763"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m", "completion": "frontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. 
To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15764"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m\nfrontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "completion": "This text code 4.2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15765"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15766"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code 4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15767"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15768"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15769"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | The new Pods will be acquired 
by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15770"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m", "completion": "frontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15771"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m\nfrontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "completion": "This text code 4.2.2 - ReplicaSet | If you create the Pods first:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15772"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | And then create the ReplicaSet however:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15773"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | 
You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15774"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s", "completion": "pod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15775"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s\npod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "completion": "This text code 4.2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15776"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl 
delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15777"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15778"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15779"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15780"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15781"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", 
"completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15782"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15783"} {"prompt": "Explain this bash code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This bash code 4.2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. 
While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15784"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1", "completion": " minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "4.2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15785"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1\n minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: 
registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 4.2.3 - StatefulSets | In the above example:", "language": "yaml", "type": "service-definition_explanation", "context": "4.2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15786"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:", "completion": " name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... other spec fields ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15787"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... 
other spec fields ...", "completion": "This yaml code 4.2.3 - StatefulSets | You can revert to a previous configuration using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15788"} {"prompt": "Complete this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "text", "type": "code-example", "context": "4.2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15789"} {"prompt": "Explain this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This text code 4.2.3 - StatefulSets | This will:", "language": "text", "type": "code-example_explanation", "context": "4.2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15790"} {"prompt": "Complete this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "text", "type": "code-example", "context": "4.2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15791"} {"prompt": "Explain this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "completion": 
"This text code 4.2.3 - StatefulSets | Regularly check revisions with:", "language": "text", "type": "code-example_explanation", "context": "4.2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15792"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:", "completion": " persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15793"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:\n persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "completion": "This yaml code 4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. 
This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15794"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15795"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: 
node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15796"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code 4.2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15797"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15798"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code 4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": "code-example_explanation", "context": "4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15799"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15800"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n 
command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 4.2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15801"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This bash code 4.2.5 - Jobs | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15802"} {"prompt": "Complete this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "text", "type": "code-example", "context": "4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: 
annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15803"} {"prompt": "Explain this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This text code 4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15804"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... 
\n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15805"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... 
\n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This yaml code 4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15806"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This text code 4.2.5 - Jobs | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15807"} {"prompt": "Explain this text 
code:\n\n3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390
4780275901", "completion": "This text code 4.2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15808"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15809"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15810"} {"prompt": "Explain this bash code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This bash code 4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15811"} {"prompt": "Complete this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "text", "type": "code-example", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15812"} {"prompt": "Explain this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This text code 4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15813"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15814"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15815"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15816"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15817"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15818"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15819"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15820"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15821"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15822"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: 
Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code 4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15823"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This bash code 4.2.5 - Jobs | Resume a suspended Job:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15824"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This bash code 4.2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15825"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long 
the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15826"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code 4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15827"} {"prompt": "Complete this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------", "completion": " Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "language": "text", "type": "code-example", "context": "4.2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. 
In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15828"} {"prompt": "Explain this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "completion": "This text code 4.2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15829"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15830"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15831"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. 
Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15832"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15833"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15834"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code 4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15835"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15836"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code 4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15837"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: 
busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15838"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15839"} {"prompt": "Complete this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)", "completion": "# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "language": "text", "type": "code-example", "context": "4.2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15840"} {"prompt": "Explain this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ 
│ ┌───────────── month (1 - 12)\n# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "completion": "This text code 4.2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "language": "text", "type": "code-example_explanation", "context": "4.2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15841"} {"prompt": "Explain this text code:\n\nCannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.", "completion": "This text code 4.2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "language": "text", "type": "code-example_explanation", "context": "4.2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15842"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_completion_15843"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15844"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This bash code 4.2.8 - ReplicationController | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15845"} {"prompt": "Complete this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:", "completion": " Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", 
"language": "text", "type": "code-example", "context": "4.2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15846"} {"prompt": "Explain this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "completion": "This text code 4.2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "language": "text", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. 
A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15847"} {"prompt": "Explain this text code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This text code 4.2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "text", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15848"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This text code 4.2.8 - ReplicationController | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15849"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15850"} {"prompt": 
"Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15851"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15852"} {"prompt": "Explain this text code:\n\nservice/my-nginx-svc created\ndeployment.apps/my-nginx created", "completion": "This text code 4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15853"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4.4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15854"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4.4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15855"} {"prompt": "Explain this bash code:\n\nkubectl delete -f 
https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15856"} {"prompt": "Explain this text code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This text code 4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15857"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This bash code 4.4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15858"} {"prompt": "Explain this text code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This text code 4.4 - Managing Workloads | Because kubectl outputs 
resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15859"} {"prompt": "Explain this bash code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This bash code 4.4 - Managing Workloads | The output might be similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15860"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s", "completion": "This text code 4.4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). Then you grep o", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). 
Then you grep o", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15861"} {"prompt": "Complete this text code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml", "completion": "├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "language": "text", "type": "code-example", "context": "4.4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15862"} {"prompt": "Explain this text code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml\n├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "completion": "This text code 4.4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. 
If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15863"} {"prompt": "Explain this text code:\n\nerror: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)", "completion": "This text code 4.4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15864"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This bash code 4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15865"} {"prompt": "Explain this text code:\n\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This text code 4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The --recursive argument works with any operation that 
accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15866"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This bash code 4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15867"} {"prompt": "Complete this text code:\n\nnamespace/development created\nnamespace/staging created", "completion": "configmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "language": "text", "type": "code-example", "context": "4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15868"} {"prompt": "Explain this text code:\n\nnamespace/development created\nnamespace/staging created\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This text code 4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15869"} {"prompt": "Explain this bash 
code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This bash code 4.4 - Managing Workloads | Ensure that there is 1 replica:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15870"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas 1 deployments/my-nginx", "completion": "This bash code 4.4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15871"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/my-nginx --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This bash code 4.4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15872"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This bash code 4.4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. 
It ensures that only a certain number of old replicas may be down while they are bein", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15873"} {"prompt": "Complete this bash code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "bash", "type": "kubectl-command", "context": "4.4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15874"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This bash code 4.4 - Managing Workloads | or", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15875"} {"prompt": "Complete this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "bash", "type": "kubectl-command", "context": "4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15876"} {"prompt": "Explain this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This bash code 4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15877"} {"prompt": "Complete this text code:\n\nname: frontend\nreplicas: 3\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "language": "text", "type": "code-example", "context": "4.4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15878"} {"prompt": "Explain this text code:\n\nname: frontend\nreplicas: 3\n...\nlabels:\n app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "completion": "This text code 4.4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. 
canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15879"} {"prompt": "Complete this text code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "language": "text", "type": "code-example", "context": "4.4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15880"} {"prompt": "Explain this text code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:\n app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "completion": "This text code 4.4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. 
omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15881"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This bash code 4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15882"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15883"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This yaml code 4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15884"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmy-nginx-2035384211-j5fhi 1/1 Running 0 30m", "completion": "This text code 4.4 - Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "language": "text", "type": "code-example_explanation", "context": "4.4 
- Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15885"} {"prompt": "Explain this text code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This text code 4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15886"} {"prompt": "Explain this text code:\n\nhorizontalpodautoscaler.autoscaling/my-nginx autoscaled", "completion": "This text code 4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15887"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4.4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15888"} {"prompt": "Complete this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n", "completion": "kubectl apply 
-f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "bash", "type": "kubectl-command", "context": "4.4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15889"} {"prompt": "Explain this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This bash code 4.4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15890"} {"prompt": "Explain this bash code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This bash code 4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15891"} {"prompt": "Explain this text code:\n\ndeployment.apps/my-nginx deleted\ndeployment.apps/my-nginx replaced", "completion": "This text code 4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15892"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15893"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15894"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15895"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code 5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "language": "yaml", "type": "pod-definition_explanation", "context": "5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15896"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15897"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15898"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. 
You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15899"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code 5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15900"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15901"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code 5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15902"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15903"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code 5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15904"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15905"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code 5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15906"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15907"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15908"} {"prompt": "Complete this text 
code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15909"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15910"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15911"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15912"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15913"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15914"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15915"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - 
Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15916"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15917"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This text code 5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15918"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This text code 5.1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15919"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This text code 5.1 - 
Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15920"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example", "context": "5.1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15921"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This text code 5.1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. 
You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15922"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15923"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code 5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15924"} {"prompt": "Complete this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "text", "type": "code-example", "context": "5.1 - Service | When you have a Pod 
that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15925"} {"prompt": "Explain this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This text code 5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15926"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Service is a top-level resource in the Kubernetes REST API. 
You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15927"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code 5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15928"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15929"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15930"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15931"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n 
defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15932"} {"prompt": "Complete this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:", "completion": " Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "language": "text", "type": "code-example", "context": "5.2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15933"} {"prompt": "Explain this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:\n Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "completion": "This text code 5.2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. 
There are three supported path types:", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15934"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15935"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15936"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15937"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15938"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: 
external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. You must also s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15939"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code 5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15940"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15941"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code 5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15942"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15943"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15944"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15945"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15946"} {"prompt": "Explain this text code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\ntest-ingress external-lb * 203.0.113.123 80 59s", "completion": "This text code 5.2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15947"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: 
foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15948"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15949"} {"prompt": "Complete this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "5.2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15950"} {"prompt": "Explain this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "completion": "This text code 5.2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. When it has done so, you can see the address", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15951"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15952"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_15953"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15954"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15955"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15956"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code 5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15957"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15958"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15959"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "5.2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15960"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "completion": "This text code 5.2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | This pops up an editor with the existing configuration in YAML format. 
Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15961"} {"prompt": "Complete this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "text", "type": "code-example", "context": "5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15962"} {"prompt": "Explain this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This text code 5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15963"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com", "completion": " /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n 
nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "5.2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15964"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "completion": "This text code 5.2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15965"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. 
Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15966"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code 5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15967"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15968"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code 5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15969"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15970"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - 
\"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code 5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15971"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15972"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code 5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15973"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15974"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code 5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. 
Since there is only one match specified,only requests", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15975"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http", "completion": " protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "language": "yaml", "type": "kubernetes-manifest", "context": "5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15976"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "completion": "This yaml code 5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. 
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15977"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15978"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15979"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15980"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15981"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15982"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15983"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15984"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15985"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15986"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15987"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15988"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15989"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15990"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15991"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15992"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15993"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is 
between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15994"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15995"} {"prompt": "Explain this bash code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This bash code 5.6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "5.6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. 
For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15996"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15997"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code 5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_15998"} {"prompt": "Complete this text code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_15999"} {"prompt": "Explain this text code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16000"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16001"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16002"} {"prompt": "Explain this text code:\n\nnameserver 10.32.0.10\nsearch <namespace>.svc.cluster.local svc.cluster.local cluster.local\noptions ndots:5", "completion": "This text code 5.7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16003"} {"prompt": "Explain this text code:\n\n<pod-IPv4-address>.<namespace>.pod.<cluster-domain>", "completion": "This text code 5.7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16004"} {"prompt": "Explain this text code:\n\n<pod-ipv4-address>.<service-name>.<my-namespace>.svc.<cluster-domain.example>", "completion": "This text code 5.7 - DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | For example, if a Pod in the cafe 
namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16005"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16006"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: 
busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16007"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28", "completion": " command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16008"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code 5.7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | Pod's DNS Config allows users more 
control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16009"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"", "completion": " dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16010"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code 5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16011"} {"prompt": "Explain this text code:\n\nnameserver 192.0.2.1\nsearch ns1.svc.cluster-domain.example my.dns.search.suffix\noptions ndots:2 edns0", "completion": "This text code 5.7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", 
"language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16012"} {"prompt": "Explain this bash code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This bash code 5.7 - DNS for Services and Pods | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "5.7 - DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16013"} {"prompt": "Explain this text code:\n\nnameserver 2001:db8:30::a\nsearch default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example\noptions ndots:5", "completion": "This text code 5.7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16014"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16015"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16016"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16017"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16018"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16019"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16020"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16021"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect 
an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16022"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16023"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control 
plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16024"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16025"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16026"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16027"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16028"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP", "completion": " port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. 
In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16029"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "completion": "This yaml code 5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16030"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10", "completion": " ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "language": "yaml", "type": "service-definition", "context": "5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. 
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16031"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "completion": "This yaml code 5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "language": "yaml", "type": "service-definition_explanation", "context": "5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16032"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:", "completion": " app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "language": "yaml", "type": "service-definition", "context": "5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. 
When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16033"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "completion": "This yaml code 5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "language": "yaml", "type": "service-definition_explanation", "context": "5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16034"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. 
Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16035"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code 6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16036"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16037"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16038"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16039"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code 6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16040"} {"prompt": "Explain this text code:\n\n!has(object.spec.volumes) || !object.spec.volumes.exists(v, has(v.gitRepo))", "completion": "This text code 6.1 - Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "language": "text", "type": "code-example_explanation", "context": "6.1 - Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16041"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: 
git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16042"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code 6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16043"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux", "completion": " containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs 
within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16044"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16045"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows", "completion": " containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16046"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows\n containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16047"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa", "completion": " name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16048"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa\n name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "completion": "This yaml code 6.1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An image volume source represents an OCI object (a container 
image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16049"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16050"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code 6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16051"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete", "completion": " storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - 
example-node", "language": "yaml", "type": "kubernetes-manifest", "context": "6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16052"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code 6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16053"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16054"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code 6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16055"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: 
\"<fs-type>\"", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16056"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"<fs-type>\"", "completion": "This yaml code 6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16057"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | Use the 
subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16058"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code 6.1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16059"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16060"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code 6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16061"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16062"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 6.1 - Volumes | When this property is 
recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16063"} {"prompt": "Complete this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "bash", "type": "kubectl-command", "context": "6.2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16064"} {"prompt": "Explain this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This bash code 6.2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "language": "bash", "type": "kubectl-command_explanation", "context": "6.2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes 
kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16065"} {"prompt": "Complete this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "language": "bash", "type": "kubectl-command", "context": "6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16066"} {"prompt": "Explain this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "completion": "This bash code 6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "bash", "type": "kubectl-command_explanation", "context": "6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16067"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16068"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code 6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume 
that is being recycled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16069"} {"prompt": "Complete this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "language": "bash", "type": "kubectl-command", "context": "6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16070"} {"prompt": "Explain this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "completion": "This bash code 6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16071"} {"prompt": "Complete this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "language": "text", "type": "code-example", "context": "6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16072"} {"prompt": "Explain this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: 
storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "completion": "This text code 6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16073"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16074"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code 6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16075"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16076"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code 6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16077"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": "parameters:\n resturl: 
\"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16078"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code 6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16079"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16080"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code 6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16081"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16082"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n 
matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code 6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16083"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16084"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code 6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16085"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16086"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16087"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 
10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16088"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16089"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16090"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16091"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16092"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n 
resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16093"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16094"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Kubernetes supports custom volume populators. 
To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16095"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16096"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16097"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16098"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code 6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16099"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce", 
"completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16100"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": "This yaml code 6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16101"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: 
my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16102"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16103"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16104"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16105"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16106"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16107"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16108"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16109"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16110"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: 
coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16111"} {"prompt": "Complete this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "text", "type": "code-example", "context": "6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16112"} {"prompt": "Explain this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT 
AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This text code 6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "language": "text", "type": "code-example_explanation", "context": "6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while, non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16113"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16114"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code 6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16115"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16116"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16117"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16118"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16119"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16120"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16121"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16122"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16123"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16124"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16125"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: 
efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16126"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16127"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16128"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16129"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16130"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code 6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16131"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16132"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code 6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is op", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16133"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16134"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16135"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This bash code 6.5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "bash", "type": "kubectl-command_explanation", "context": "6.5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16136"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16137"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16138"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16139"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is 
actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16140"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16141"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16142"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n 
provisioned-throughput: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16143"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver\ndriverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "completion": "This yaml code 6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. 
The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16144"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: silver\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16145"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: silver\n …", "completion": "This yaml code 6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16146"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16147"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold\ndriverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "completion": "This yaml code 6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold 
and apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16148"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: gold\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16149"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: gold\n …", "completion": "This yaml code 6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16150"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16151"} {"prompt": "Explain this yaml 
code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16152"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16153"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16154"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16155"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16156"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. 
This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16157"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code 6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16158"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16159"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code 6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. 
Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16160"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16161"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code 6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16162"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16163"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16164"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n - snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16165"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n - snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 
- Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16166"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:", "completion": " name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16167"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16168"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass", "completion": " annotations:\n 
snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16169"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\n annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16170"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "language": "yaml", "type": "kubernetes-manifest", "context": "6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16171"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: cloning\n 
resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "completion": "This yaml code 6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16172"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16173"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code 6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/_print/", 
"id": "k8s_00617_code_explain_16174"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the Service is running on>\nFOO_SERVICE_PORT=<the port the Service is running on>", "completion": "This text code 7.1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "text", "type": "code-example_explanation", "context": "7.1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16175"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16176"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n 
color.bad=yellow\n allow.textmode=true", "completion": "This yaml code 7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16177"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16178"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16179"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16180"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code 7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16181"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16182"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml code 7.2 - ConfigMaps | The following command 
will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16183"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16184"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16185"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16186"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code 7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16187"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16188"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code 7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16189"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16190"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16191"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This bash code 7.3 - Secrets | The output looks like:", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16192"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nempty-secret Opaque 0 2m6s", "completion": "This text code 7.3 - Secrets | The DATA column shows the number of data items stored in the Secret. In this case, 0 means you have created an empty Secret.", "language": "text", "type": "code-example_explanation", "context": "7.3 - Secrets | The DATA column shows the number of data items stored in the Secret. 
In this case, 0 means you have created an empty Secret.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16193"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16194"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16195"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16196"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16197"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "bash", "type": "kubectl-command", "context": "7.3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16198"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This bash code 7.3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | This command creates a Secret of type 
kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16199"} {"prompt": "Explain this bash code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This bash code 7.3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16200"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16201"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code 7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16202"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16203"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16204"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16205"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16206"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16207"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16208"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This bash code 7.3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16209"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16210"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n 
usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16211"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16212"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": 
"This yaml code 7.3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16213"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16214"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16215"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16216"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code 7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16217"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16218"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16219"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16220"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16221"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16222"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16223"} {"prompt": "Explain this text code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16224"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16225"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16226"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"<extender-endpoint>\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16227"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"<extender-endpoint>\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code 7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16228"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16229"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16230"} {"prompt": "Explain this bash code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This bash code 7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "bash", "type": "kubectl-command_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16231"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message", "completion": " ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16232"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "completion": "This text code 7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16233"} {"prompt": "Complete this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... lines removed for clarity ...]\nNon-terminated Pods: (5 in total)", "completion": " Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16234"} {"prompt": "Explain this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... 
lines removed for clarity ...]\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "completion": "This text code 7.5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16235"} {"prompt": "Complete this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700", "completion": " Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 
2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16236"} {"prompt": "Explain this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700\n Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started 
container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "completion": "This text code 7.5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16237"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz", "completion": " name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "7.6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16238"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz\n name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "completion": "This yaml code 7.6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.6 - 
Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16239"} {"prompt": "Complete this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "language": "text", "type": "code-example", "context": "8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16240"} {"prompt": "Explain this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n\n# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "completion": "This text code 8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "text", "type": "code-example_explanation", "context": 
"8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16241"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "language": "yaml", "type": "service-definition", "context": "8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16242"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "completion": "This yaml code 8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "language": "yaml", "type": "service-definition_explanation", "context": "8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16243"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",", "completion": " \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "language": "json", "type": "api-response", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, 
the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16244"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",\n \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "completion": "This json code 8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "language": "json", "type": "api-response_explanation", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16245"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",", "completion": " \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16246"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "completion": "This json code 8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "language": "json", "type": "api-response_explanation", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16247"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16248"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code 8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16249"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16250"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code 9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16251"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | then that Pod will 
not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16252"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16253"} {"prompt": "Explain this text code:\n\nPod \"example-conflict-with-limitrange-cpu\" is invalid: spec.containers[0].resources.requests: Invalid value: \"700m\": must be less than or equal to cpu limit", "completion": "This text code 9.1 - Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "language": "text", "type": "code-example_explanation", "context": "9.1 - Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16254"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16255"} {"prompt": "Explain this 
yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16256"} {"prompt": "Complete this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16257"} {"prompt": "Explain this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This text code 9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16258"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: 
\"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16259"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16260"} {"prompt": "Explain this text code:\n\nresourcequota/pods-high created\nresourcequota/pods-medium created\nresourcequota/pods-low created", "completion": "This text code 9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": 
"code-example_explanation", "context": "9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16261"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16262"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 9.2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Create a pod with priority \"high\". 
Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16263"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16264"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16265"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope 
to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16266"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 9.2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. 
Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16267"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16268"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16269"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: 
ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16270"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16271"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n requests.storage: 
\"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16272"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16273"} {"prompt": "Explain this text code:\n\nresourcequota/pvcs-gold created\nresourcequota/pvcs-silver created\nresourcequota/pvcs-copper created", "completion": "This text code 
9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16274"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16275"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 9.2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". 
Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16276"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16277"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16278"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. 
Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16279"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 9.2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16280"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This bash code 9.2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16281"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource 
Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16282"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 9.2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16283"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This bash code 9.2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, 
pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16284"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16285"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 9.2 - Resource Quotas | Print the manifest of the PVC using the following command:", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16286"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " volumeAttributesClassName: copper\nstatus:\n 
accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16287"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code 9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16288"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | When allocating compute resources, 
each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16289"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 9.2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. 
The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16290"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16291"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This yaml code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16292"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard 
namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16293"}
{"prompt": "Complete this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16294"}
{"prompt": "Explain this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This yaml code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16295"}
{"prompt": "Explain this bash code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16296"}
{"prompt": "Explain this text code:\n\nNAME AGE\ncompute-resources 30s\nobject-counts 32s", "completion": "This text code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16297"}
{"prompt": "Explain this bash code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16298"}
{"prompt": "Complete this text code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "limits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16299"}
{"prompt": "Explain this text code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nlimits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "completion": "This text code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16300"}
{"prompt": "Explain this bash code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16301"}
{"prompt": "Complete this text code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10", "completion": "persistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16302"}
{"prompt": "Explain this text code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10\npersistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "completion": "This text code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16303"}
{"prompt": "Explain this bash code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16304"}
{"prompt": "Explain this bash code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This bash code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16305"}
{"prompt": "Complete this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "count/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16306"}
{"prompt": "Explain this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\ncount/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "completion": "This text code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16307"}
{"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16308"}
{"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "completion": "This yaml code 9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16309"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16310"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16311"}
{"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This bash code 9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16312"}
{"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16313"}
{"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16314"}
{"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16315"}
{"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16316"}
{"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16317"}
{"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16318"}
{"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16319"}
{"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16320"}
{"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16321"}
{"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16322"}
{"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16323"}
{"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16324"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:", "completion": " - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16325"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16326"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16327"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16328"}
{"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity", "completion": " args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "language": "yaml", "type": "kubernetes-manifest", "context": "10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16329"}
{"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity\n args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16330"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16331"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16332"}
{"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:", "completion": " - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16333"}
{"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16334"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool", "completion": " podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16335"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16336"}
{"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store", "completion": " spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16337"}
{"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16338"}
{"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:", "completion": " matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16339"}
{"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16340"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16341"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16342"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16343"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define.
You", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16344"} {"prompt": "Complete this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16345"} {"prompt": "Explain this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "completion": "This yaml code 10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16346"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - 
name: busybox-ctr\n image: busybox:1.28\n stdin: true", "completion": " tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "language": "yaml", "type": "pod-definition", "context": "10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16347"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true\n tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "completion": "This yaml code 10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "language": "yaml", "type": "pod-definition_explanation", "context": "10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16348"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.overhead}'", "completion": "This bash code 10.3 - Pod Overhead | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.3 - Pod Overhead | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16349"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'", "completion": "This bash code 10.3 - Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.3 - Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16350"} {"prompt": "Explain this text code:\n\nmap[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]", "completion": "This text code 10.3 - Pod Overhead | Check this against what is observed by the node:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | Check this against what is observed by the node:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16351"} {"prompt": "Explain this text code:\n\nNamespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m", "completion": "This text code 10.3 - Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. 
This is an a", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16352"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nPOD_ID=\"$(sudo crictl pods --name test-pod -q)\"", "completion": "This text code 10.3 - Pod Overhead | From this, you can determine the cgroup path for the Pod:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | From this, you can determine the cgroup path for the Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16353"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nsudo crictl inspectp -o=json $POD_ID | grep cgroupsPath", "completion": "This text code 10.3 - Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16354"} {"prompt": "Explain this text code:\n\n\"cgroupsPath\": \"/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a\"", "completion": "This text code 10.3 - Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. 
Verify the Pod level cgroup setting for memory:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. Verify the Pod level cgroup setting for memory:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16355"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled.\n# Also, change the name of the cgroup to match the cgroup allocated for your pod.\n cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes", "completion": "This text code 10.3 - Pod Overhead | This is 320 MiB, as expected:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | This is 320 MiB, as expected:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16356"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16357"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16358"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ntest-pod 0/1 SchedulingGated 0 7s", "completion": "This text code 10.4 - Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "language": "text", "type": "code-example_explanation", "context": "10.4 - Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16359"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code 10.4 - Pod Scheduling Readiness | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.4 - Pod Scheduling Readiness | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16360"} {"prompt": "Explain this json code:\n\n[{\"name\":\"example.com/foo\"},{\"name\":\"example.com/bar\"}]", "completion": "This json code 10.4 - Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "language": "json", "type": "api-response_explanation", "context": "10.4 - Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16361"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod", "completion": "spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16362"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16363"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code 10.4 - Pod Scheduling Readiness | The output is expected to be empty. And you can check its latest status by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.4 - Pod Scheduling Readiness | The output is expected to be empty. 
And you can check its latest status by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16364"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\ntest-pod 1/1 Running 0 15s 10.0.0.4 node-2", "completion": "This text code 10.4 - Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "language": "text", "type": "code-example_explanation", "context": "10.4 - Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16365"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:", "completion": " - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16366"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread 
constraint\n topologySpreadConstraints:\n - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16367"} {"prompt": "Complete this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "language": "text", "type": "code-example", "context": "10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. 
You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16368"} {"prompt": "Explain this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "completion": "This text code 10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16369"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nnode1 Ready <none> 4m26s v1.16.0 node=node1,zone=zoneA", "completion": "node2 Ready <none> 3m58s v1.16.0 node=node2,zone=zoneA\nnode3 Ready <none> 3m17s v1.16.0 node=node3,zone=zoneB\nnode4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB", "language": "text", "type": "code-example", "context": "10.5 - Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16370"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nnode1 Ready <none> 4m26s v1.16.0 node=node1,zone=zoneA\nnode2 Ready <none> 3m58s v1.16.0 node=node2,zone=zoneA\nnode3 Ready <none> 3m17s v1.16.0 node=node3,zone=zoneB\nnode4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB", "completion": "This text code 10.5 - Pod Topology Spread Constraints | Then the cluster is logically 
viewed as below:", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16371"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16372"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). 
The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16373"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16374"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16375"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:", "completion": " foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16376"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16377"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:", "completion": " - name: PodTopologySpread\n args:\n 
defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16378"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16379"} {"prompt": "Complete this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"", "completion": " whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "language": "text", "type": "code-example", "context": "10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16380"} {"prompt": "Explain this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"\n whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "completion": "This text code 10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16381"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler", "completion": " pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16382"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16383"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16384"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "completion": "This text code 10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16385"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16386"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This text code 10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16387"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "10.6 - Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16388"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code 10.6 - Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.6 - Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16389"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes node1 key1=value1:NoSchedule\nkubectl taint nodes node1 key1=value1:NoExecute\nkubectl taint nodes node1 key2=value2:NoSchedule", "completion": "This bash code 10.6 - Taints and Tolerations | And a pod has two tolerations:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.6 - Taints and Tolerations | And a pod has two tolerations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16390"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"", "completion": " effect: 
\"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16391"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "completion": "This text code 10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16392"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"", "completion": " value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. 
If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16393"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "completion": "This text code 10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16394"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"", "completion": " operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16395"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"\n operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "completion": "This text code 10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "language": 
"text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16396"} {"prompt": "Explain this go code:\n\nfunc ScoreNode(_ *v1.pod, n *v1.Node) (int, error) {\n return getBlinkingLightCount(n)\n}", "completion": "This go code 10.7 - Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "language": "go", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16397"} {"prompt": "Complete this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)", "completion": " }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "language": "go", "type": "code-example", "context": "10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16398"} {"prompt": "Explain this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)\n }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "completion": "This go code 10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the 
scheduling cycle is aborted.", "language": "go", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16399"} {"prompt": "Complete this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool", "completion": "}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "language": "text", "type": "code-example", "context": "10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16400"} {"prompt": "Explain this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool\n}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "completion": "This text code 10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "language": "text", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. 
If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16401"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com", "completion": " selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16402"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. Your workload", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16403"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1", "completion": " # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16404"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1\n # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. 
The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16405"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats", "completion": "spec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "language": "yaml", "type": "pod-definition", "context": "10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16406"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats\nspec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16407"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:", "completion": " devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16408"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16409"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:", "completion": " selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16410"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16411"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:", "completion": " memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16412"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:\n memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16413"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:", "completion": " - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16414"} {"prompt": "Explain this 
yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:\n - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16415"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:", "completion": " requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16416"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic 
Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16417"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:", "completion": " devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16418"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:\n devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. 
If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16419"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.", "completion": " # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16420"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.\n # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16421"} {"prompt": "Complete this yaml code:\n\napiVersion: 
resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1", "completion": " devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16422"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1\n devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16423"} {"prompt": "Explain this bash code:\n\nkubectl get pods -n kube-system | grep kube-scheduler", "completion": "This bash code 10.9 - Scheduler Performance Tuning | to verify that the kube-scheduler component is healthy.", "language": "bash", "type": "kubectl-command_explanation", "context": "10.9 - Scheduler Performance Tuning | to verify that 
the kube-scheduler component is healthy.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16424"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider", "completion": "\n...\n\npercentageOfNodesToScore: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16425"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider\n\n...\n\npercentageOfNodesToScore: 50", "completion": "This yaml code 10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. 
There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16426"} {"prompt": "Explain this text code:\n\nZone 1: Node 1, Node 2, Node 3, Node 4\nZone 2: Node 5, Node 6", "completion": "This text code 10.9 - Scheduler Performance Tuning | The Scheduler evaluates feasibility of the nodes in this order:", "language": "text", "type": "code-example_explanation", "context": "10.9 - Scheduler Performance Tuning | The Scheduler evaluates feasibility of the nodes in this order:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16427"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16428"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code 10.10 - Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin 
Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16429"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3", "completion": " - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16430"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "completion": "This yaml code 10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_16431"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 0\n - utilization: 100\n score: 10", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16432"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10", "completion": "This text code 10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. 
To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16433"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 10\n - utilization: 100\n score: 0", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16434"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 10\n - utilization: 100\n score: 0", "completion": "This text code 10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16435"} {"prompt": "Complete this text code:\n\nresources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16436"} {"prompt": "Explain this text code:\n\nresources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1", "completion": "This text code 10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16437"} {"prompt": "Complete this text code:\n\nresources:\n - name: intel.com/foo\n weight: 5", "completion": " - name: cpu\n weight: 3\n - name: memory\n 
weight: 1", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16438"} {"prompt": "Explain this text code:\n\nresources:\n - name: intel.com/foo\n weight: 5\n - name: cpu\n weight: 3\n - name: memory\n weight: 1", "completion": "This text code 10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16439"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8", "completion": "\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16440"} {"prompt": "Explain this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8\n\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "completion": "This text code 10.10 - Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16441"} {"prompt": "Complete this text code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - ((4-3)*100/4)\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 7 # floor(75/10)\n\nmemory = 
resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available", "completion": " = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16442"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - ((4-3)*100/4)\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 7 # floor(75/10)\n\nmemory = resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available\n = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "completion": "This text code 10.10 - Resource Bin Packing | Node 2 spec:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16443"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 8\n memory: 1GB\n cpu: 8", "completion": "Used:\n intel.com/foo: 2\n memory: 512MB\n cpu: 6", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16444"} {"prompt": "Explain this text code:\n\nAvailable:\n intel.com/foo: 8\n memory: 1GB\n cpu: 8\nUsed:\n 
intel.com/foo: 2\n memory: 512MB\n cpu: 6", "completion": "This text code 10.10 - Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16445"} {"prompt": "Complete this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8)\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75", "completion": " = rawScoringFunction(75)\n = 7\n\ncpu = resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling o", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16446"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8)\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75\n = rawScoringFunction(75)\n = 7\n\ncpu = resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "completion": "This text code 10.10 - Resource Bin Packing | Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. 
If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling o", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling o", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16447"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16448"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "completion": "This yaml code 10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled will stay in the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. 
A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16449"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting", "completion": "value: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16450"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting\nvalue: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "completion": "This yaml code 10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. 
The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16451"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "language": "yaml", "type": "pod-definition", "context": "10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16452"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "completion": "This yaml code 10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "language": "yaml", "type": "pod-definition_explanation", "context": "10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. 
As a result, the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16453"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"", "completion": " imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16454"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"\n imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "completion": "This yaml code 10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16455"} {"prompt": "Explain this text code:\n\n--eviction-hard=memory.available<500Mi\n--system-reserved=memory=1.5Gi", "completion": "This text code 
10.12 - Node-pressure Eviction | In this configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "language": "text", "type": "code-example_explanation", "context": "10.12 - Node-pressure Eviction | In this configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16456"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16457"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code 10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "language": "json", "type": "api-response_explanation", "context": "10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16458"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": 
\"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16459"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code 10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "language": "json", "type": "api-response_explanation", "context": "10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16460"} {"prompt": "Explain this text code:\n\ncurl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json", "completion": "This text code 10.13 - API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "language": "text", "type": "code-example_explanation", "context": "10.13 - API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16461"} {"prompt": "Explain this text code:\n\nReason: Terminated\nMessage: Pod was terminated in response to imminent node shutdown.", "completion": "This text code 11.1 
- Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "language": "text", "type": "code-example_explanation", "context": "11.1 - Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16462"} {"prompt": "Complete this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16463"} {"prompt": "Explain this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This text code 11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "text", "type": "code-example_explanation", "context": "11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will 
get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16464"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% ", "completion": "node2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "language": "text", "type": "code-example", "context": "11.2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16465"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% \nnode2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "completion": "This text code 11.2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16466"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi", "completion": "kube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "language": "text", "type": "code-example", "context": "11.2 - Swap memory management | A new node status field is now added, 
node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16467"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi\nkube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "completion": "This text code 11.2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16468"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}<unknown>{{end}}{{\"\\n\"}}{{end}}'", "completion": "This bash code 11.2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16469"} {"prompt": "Explain this text code:\n\nnode1: 21474836480\nnode2: 42949664768\nnode3: <unknown>", "completion": "This text code 11.2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. 
It can be utilized to discover which nodes are provisioned with swap.", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. It can be utilized to discover which nodes are provisioned with swap.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16470"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This bash code 11.2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16471"} {"prompt": "Explain this text code:\n\nk8s-worker1: true\nk8s-worker2: true\nk8s-worker3: false", "completion": "This text code 11.2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16472"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16473"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16474"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This bash code 11.8 - Logging Architecture | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.8 - Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16475"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This text code 11.8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. 
If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16476"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16477"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16478"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This bash code 11.8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16479"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", "completion": "This bash code 11.8 - Logging Architecture | 
See the kubectl logs documentation for more details.", "language": "bash", "type": "kubectl-command_explanation", "context": "11.8 - Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16480"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16481"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16482"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16483"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: 
/var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16484"} {"prompt": "Complete this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "text", "type": "code-example", "context": "11.8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16485"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This text code 11.8 - Logging Architecture | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16486"} {"prompt": "Complete this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "text", "type": "code-example", "context": "11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. 
If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16487"} {"prompt": "Explain this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This text code 11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16488"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "language": "yaml", "type": "kubernetes-manifest", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16489"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "completion": "This yaml code 11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16490"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", "type": "pod-definition", "context": "11.8 - 
Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16491"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 11.8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16492"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16493"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code 11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16494"} {"prompt": "Explain this text code:\n\n# HELP some_counter this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | After deprecation", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | After deprecation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16495"} {"prompt": "Explain this text code:\n\n# HELP some_counter (Deprecated since 1.15.0) this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the Show hidden metrics section.", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. 
To use a hidden metric, please refer to the Show hidden metrics section.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16496"} {"prompt": "Complete this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}", "completion": "cloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "language": "text", "type": "code-example", "context": "11.10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16497"} {"prompt": "Explain this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. 
These metrics can be used to build capacity planning dashboards, assess current", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16498"} {"prompt": "Complete this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total", "completion": "container_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "language": "text", "type": "code-example", "context": "11.10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16499"} {"prompt": "Explain this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total\ncontainer_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. 
The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16500"} {"prompt": "Explain this text code:\n\n--allow-metric-labels <metric_name>,<label_name>='<allow_value1>, <allow_value2>...', <metric_name2>,<label_name>='<allow_value1>, <allow_value2>...', ...", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | Here is an example:", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16501"} {"prompt": "Explain this text code:\n\n--allow-metric-labels number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. 
You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16502"} {"prompt": "Explain this text code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16503"} {"prompt": "Explain this text code:\n\ncount(kube_pod_status_ready{condition=\"false\"}) by (namespace, pod)", "completion": "This text code 11.11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "language": "text", "type": "code-example_explanation", "context": "11.11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16504"} {"prompt": "Complete this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0", "completion": " for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in 
Terminating state.", "language": "text", "type": "code-example", "context": "11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16505"} {"prompt": "Explain this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0\n for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "completion": "This text code 11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "language": "text", "type": "code-example_explanation", "context": "11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16506"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/pods/metrics-server-v0.3.1-57c75779f-9p8wg: (1.512ms) 200 [pod_nanny/v0.0.0 (linux/amd64) kubernetes/$Format 10.56.1.19:51756]", "completion": "This text code 11.12 - System Logs | The message string may contain line breaks:", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | The message string may contain line breaks:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16507"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:79] This is a message\nwhich has a line break.", "completion": "This text code 11.12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. 
When parsing log files, you must also handle unstructured log messages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16508"} {"prompt": "Explain this text code:\n\n<klog header> \"<message>\" <key1>=\"<value1>\" <key2>=\"<value2>\" ...", "completion": "This text code 11.12 - System Logs | Example:", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16509"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 controller_utils.go:116] \"Pod status updated\" pod=\"kube-system/kubedns\" status=\"ready\"", "completion": "This text code 11.12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16510"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:116] \"Example\" data=\"This is text with a line break\\nand \\\"quotation marks\\\".\" someInt=1 someFloat=0.1 someStruct={StringField: First line,\nsecond line.}", "completion": "This text code 11.12 - System Logs | Contextual logging builds on top of structured logging. It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Contextual logging builds on top of structured logging. 
It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16511"} {"prompt": "Complete this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . --feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16512"} {"prompt": "Explain this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16513"} {"prompt": "Complete this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "11.12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16514"} {"prompt": "Explain this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 11.12 - System Logs | JSON output does not support many standard klog flags. 
For list of unsupported klog flags, see the Command line tool reference.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16515"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "11.12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16516"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code 11.12 - System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "11.12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16517"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This text code 11.12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16518"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>\"", "completion": "This bash code 11.12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "bash", "type": "kubectl-command_explanation", "context": "11.12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16519"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This text code 11.12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16520"} {"prompt": "Complete this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "text", "type": "code-example", "context": "11.13 - Traces For Kubernetes System 
Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16521"} {"prompt": "Explain this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This text code 11.13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "text", "type": "code-example_explanation", "context": "11.13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16522"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16523"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code 11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16524"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16525"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n 
samplingRatePerMillion: 100", "completion": "This yaml code 11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16526"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This text code 11.15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16527"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:", "completion": " - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16528"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16529"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16530"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16531"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service 
should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver", "completion": " name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are 
healthy:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16532"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver\n name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() 
;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "language": "yaml", "type": "deployment-definition_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16533"} {"prompt": "Explain this bash code:\n\nkubectl apply -f win-webserver.yaml\nkubectl get pods -o wide -w", "completion": "This bash code 12.2 - Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "language": "bash", "type": "kubectl-command_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16534"} {"prompt": "Complete this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:", "completion": " - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. 
If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16535"} {"prompt": "Explain this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:\n - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "completion": "This text code 12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "language": "text", "type": "code-example_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16536"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:", "completion": " kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "language": "yaml", "type": "kubernetes-manifest", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16537"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: 
windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:\n kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml as a cluster administrator", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16538"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:", "completion": " cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "language": "yaml", "type": "deployment-definition", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. 
As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16539"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:\n cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "language": "yaml", "type": "deployment-definition_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. 
As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16540"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "13.1.1 - Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16541"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "completion": "This json code 13.1.1 - Network Plugins | Experimental Feature", "language": "json", "type": "api-response_explanation", "context": "13.1.1 - Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16542"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n 
\"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16543"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "completion": "This json code 13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "language": "json", "type": "api-response_explanation", "context": "13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16544"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "language": "yaml", "type": "pod-definition", "context": "13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16545"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "completion": "This yaml code 13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "language": "yaml", "type": "pod-definition_explanation", "context": "13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16546"} {"prompt": "Explain this text code:\n\nservice Registration {\n\trpc Register(RegisterRequest) returns (Empty) {}\n}", "completion": "This text code 13.1.2 - Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. During the registration, the device plugin needs to send:", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. 
During the registration, the device plugin needs to send:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16547"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8", "completion": " resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "language": "yaml", "type": "pod-definition", "context": "13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16548"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8\n resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "completion": "This yaml code 13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "language": "yaml", "type": "pod-definition_explanation", "context": "13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16549"} {"prompt": "Complete this text code:\n\nservice DevicePlugin {\n // GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) 
returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}", "completion": "\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "language": "text", "type": "code-example", "context": "13.1.2 - Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16550"} {"prompt": "Explain this text code:\n\nservice DevicePlugin {\n // GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}\n\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "completion": "This text code 13.1.2 - Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16551"} {"prompt": "Complete this text code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {", "completion": " rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "language": "text", "type": "code-example", "context": "13.1.2 - Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16552"} {"prompt": "Explain this text code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {\n rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc GetAllocatableResources(AllocatableResourcesRequest) returns 
(AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "completion": "This text code 13.1.2 - Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16553"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code 13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. 
If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16554"} {"prompt": "Complete this text code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n", "completion": "// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo topology = 3;\n}\n\n// DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies a CDI device information\nmessage CDIDevice 
{\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "language": "text", "type": "code-example", "context": "13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16555"} {"prompt": "Explain this text code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\n// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo topology = 3;\n}\n\n// DynamicResource contains information about the devices 
assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies a CDI device information\nmessage CDIDevice {\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "completion": "This text code 13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16556"} {"prompt": "Complete this text code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;", "completion": " repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "language": "text", "type": "code-example", "context": "13.1.2 - Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. 
The NUMA cells are identified using a opaque integer ID, which value is consistent to what devic", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16557"} {"prompt": "Explain this text code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;\n repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "completion": "This text code 13.1.2 - Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using a opaque integer ID, which value is consistent to what devic", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using a opaque integer ID, which value is consistent to what devic", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16558"} {"prompt": "Complete this text code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {", "completion": " string pod_name = 1;\n string pod_namespace = 2;\n}", "language": "text", "type": "code-example", "context": "13.1.2 - Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16559"} {"prompt": "Explain this text code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {\n string pod_name = 1;\n string pod_namespace = 2;\n}", "completion": "This text code 13.1.2 - Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "language": "text", "type": 
"code-example_explanation", "context": "13.1.2 - Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16560"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code 13.1.2 - Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16561"} {"prompt": "Complete this text code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}", "completion": "\nmessage NUMANode {\n int64 ID = 1;\n}", "language": "text", "type": "code-example", "context": "13.1.2 - Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16562"} {"prompt": "Explain this text code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\nmessage NUMANode {\n int64 ID = 1;\n}", "completion": "This text code 13.1.2 - Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. 
The", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16563"} {"prompt": "Explain this text code:\n\npluginapi.Device{ID: \"25102017\", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}}", "completion": "This text code 13.1.2 - Device Plugins | Here are some examples of device plugin implementations:", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | Here are some examples of device plugin implementations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16564"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object", "completion": " properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16565"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: 
CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code 13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16566"} {"prompt": "Explain this bash code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This bash code 13.2.1 - Custom Resources | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "13.2.1 - Custom Resources | The output should be:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16567"} {"prompt": "Explain this text code:\n\nNAME COLOR SIZE\nexample1 blue S\nexample2 blue M", "completion": "This text code 13.2.1 - Custom Resources | The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs. The additional APIs can either be ready-made solutions such as a metric", "language": "text", "type": "code-example_explanation", "context": "13.2.1 - Custom Resources | The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs. 
The additional APIs can either be ready-made solutions such as a metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16568"} {"prompt": "Explain this bash code:\n\nkubectl get SampleDB # find configured databases\n\nkubectl edit SampleDB/example-database # manually change some settings", "completion": "This bash code 13.3 - Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "language": "bash", "type": "kubectl-command_explanation", "context": "13.3 - Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16569"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16570"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16571"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This shell code 1.2 - Objects In Kubernetes | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "1.2 - Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16572"} {"prompt": "Explain this sh code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This sh code 1.2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "language": "sh", "type": "code-example_explanation", "context": "1.2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16573"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16574"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16575"} {"prompt": "Complete this json code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "json", "type": "api-response", "context": "1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16576"} {"prompt": "Explain this json code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This json code 1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "json", "type": "api-response_explanation", "context": "1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16577"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16578"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16579"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16580"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code 1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. 
Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16581"} {"prompt": "Complete this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)", "completion": "partition\n!partition", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16582"} {"prompt": "Explain this text code:\n\nenvironment in (production, qa)\ntier notin (frontend, backend)\npartition\n!partition", "completion": "This text code 1.2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Similarly the comma separator acts as an AND operator. 
So filtering resources with a partition key (no matter the value) and with environment different than qa can be achieved using partition,environm", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16583"} {"prompt": "Explain this text code:\n\npartition in (customerA, customerB),environment!=qa", "completion": "This text code 1.2.3 - Labels and Selectors | For list and watch operations, you can specify label selectors to filter the sets of objects returned; you specify the filter using a query parameter. (To learn in detail about watches in Kubernetes,", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | For list and watch operations, you can specify label selectors to filter the sets of objects returned; you specify the filter using a query parameter. (To learn in detail about watches in Kubernetes,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16584"} {"prompt": "Explain this text code:\n\n?labelSelector=environment%3Dproduction,tier%3Dfrontend", "completion": "This text code 1.2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16585"} {"prompt": "Explain this text code:\n\n?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29", "completion": "This text code 1.2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. 
For example, targeting apiserver with kubectl and using equality-based one may write:", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based one may write:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16586"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This shell code 1.2.3 - Labels and Selectors | or using set-based requirements:", "language": "shell", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16587"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This shell code 1.2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "shell", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16588"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This shell code 1.2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "language": "shell", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16589"} {"prompt": "Explain this shell code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This shell code 1.2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "shell", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16590"} {"prompt": "Complete this yaml code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16591"} {"prompt": "Explain this yaml code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This yaml code 1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16592"} {"prompt": "Complete this yaml code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16593"} {"prompt": "Explain this yaml code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This yaml code 1.2.3 - Labels and Selectors | and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16594"} {"prompt": "Complete this yaml code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: replica", "language": "yaml", "type": "kubernetes-manifest", 
"context": "1.2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16595"} {"prompt": "Explain this yaml code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This yaml code 1.2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16596"} {"prompt": "Explain this shell code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This shell code 1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "shell", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16597"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>", "completion": "guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "language": "none", "type": "code-example", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16598"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE APP TIER ROLE\nguestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend <none>\nguestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend <none>\nguestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master\nguestbook-redis-replica-2q2yf 1/1 Running 0 1m guestbook backend replica\nguestbook-redis-replica-qgazl 1/1 Running 0 1m guestbook backend replica\nmy-nginx-divi2 1/1 Running 0 29m nginx <none> <none>\nmy-nginx-o0ef1 1/1 Running 0 29m nginx <none> <none>", "completion": "This none code 1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "language": "none", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16599"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nguestbook-redis-replica-2q2yf 1/1 Running 0 3m\nguestbook-redis-replica-qgazl 1/1 Running 0 3m", "completion": "This none code 1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "none", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16600"} {"prompt": "Explain this none code:\n\npod/my-nginx-2035384211-j5fhi labeled\npod/my-nginx-2035384211-u2c7e labeled\npod/my-nginx-2035384211-u3t6x labeled", "completion": "This none code 1.2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". To see the pods you labeled, run:", "language": "none", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | This first filters all pods with the label \"app=nginx\", and then labels them with the \"tier=fe\". 
To see the pods you labeled, run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16601"} {"prompt": "Complete this none code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe", "completion": "my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "language": "none", "type": "code-example", "context": "1.2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16602"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE TIER\nmy-nginx-2035384211-j5fhi 1/1 Running 0 23m fe\nmy-nginx-2035384211-u2c7e 1/1 Running 0 23m fe\nmy-nginx-2035384211-u3t6x 1/1 Running 0 23m fe", "completion": "This none code 1.2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "language": "none", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | This outputs all \"app=nginx\" pods, with an additional label column of pods' tier (specified with -L or --label-columns).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16603"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "1.2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16604"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "completion": "This text code 
1.2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "1.2.4 - Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16605"} {"prompt": "Explain this shell code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This shell code 1.2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "shell", "type": "code-example_explanation", "context": "1.2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16606"} {"prompt": "Explain this shell code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This shell code 1.2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "shell", "type": "code-example_explanation", "context": "1.2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. 
This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16607"} {"prompt": "Complete this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "shell", "type": "code-example", "context": "1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16608"} {"prompt": "Explain this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This shell code 1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "shell", "type": "code-example_explanation", "context": "1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16609"} {"prompt": "Complete this json code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "json", "type": "api-response", "context": "1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16610"} {"prompt": "Explain this json code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This json code 1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "language": "json", "type": "api-response_explanation", "context": "1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16611"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. 
Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16612"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16613"} {"prompt": "Explain this shell code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This shell code 1.2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "shell", "type": "code-example_explanation", "context": "1.2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. 
For example", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16614"} {"prompt": "Explain this text code:\n\nError from server (BadRequest): Unable to find \"ingresses\" that match label selector \"\", field selector \"foo.bar=baz\": \"foo.bar\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"", "completion": "This text code 1.2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "language": "text", "type": "code-example_explanation", "context": "1.2.6 - Field Selectors | All custom resource types support the metadata.name and metadata.namespace fields.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16615"} {"prompt": "Explain this shell code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This shell code 1.2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "shell", "type": "code-example_explanation", "context": "1.2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16616"} {"prompt": "Explain this shell code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This shell code 1.2.6 - Field Selectors | You can use field selectors across multiple resource types. 
This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "shell", "type": "code-example_explanation", "context": "1.2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16617"} {"prompt": "Explain this shell code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This shell code 1.2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "language": "shell", "type": "code-example_explanation", "context": "1.2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16618"} {"prompt": "Explain this bash code:\n\nkubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace", "completion": "This bash code 1.2.8 - Owners and Dependents | When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the resource. Finalizers prevent accidental deletion of resources your c", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.8 - Owners and Dependents | When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the resource. 
Finalizers prevent accidental deletion of resources your c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16619"} {"prompt": "Complete this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:", "completion": " app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16620"} {"prompt": "Explain this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "completion": "This yaml code 1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16621"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "deployment-definition", "context": "1.2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16622"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | The Service is used to expose the application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16623"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. 
The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16624"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16625"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "deployment-definition", "context": "1.2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16626"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended 
Labels | The Service is used to expose WordPress:", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16627"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16628"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16629"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: 
Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16630"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16631"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16632"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n 
app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16633"} {"prompt": "Explain this text code:\n\nAccept: application/json;v=v2;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList", "completion": "This text code 1.3 - The Kubernetes API | Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an unaggregated discovery document.", "language": "text", "type": "code-example_explanation", "context": "1.3 - The Kubernetes API | Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an unaggregated discovery document.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16634"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }", "completion": " },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "language": "json", "type": "api-response", "context": "1.3 - The Kubernetes API | Additional requests are 
needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16635"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIGroupList\",\n \"apiVersion\": \"v1\",\n \"groups\": [\n {\n \"name\": \"apiregistration.k8s.io\",\n \"versions\": [\n {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apiregistration.k8s.io/v1\",\n \"version\": \"v1\"\n }\n },\n {\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n },\n ...\n}", "completion": "This json code 1.3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "language": "json", "type": "api-response_explanation", "context": "1.3 - The Kubernetes API | Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16636"} {"prompt": "Explain this text code:\n\napplication/com.github.proto-openapi.spec.v2@v1.0+protobuf", "completion": "This text code 1.3 - The Kubernetes API | Kubernetes supports publishing a description of its APIs as OpenAPI v3.", "language": "text", "type": "code-example_explanation", "context": "1.3 - The Kubernetes API | Kubernetes supports publishing a description of its APIs as OpenAPI v3.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_16637"} {"prompt": "Complete this yaml code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16638"} {"prompt": "Explain this yaml code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This yaml code 1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16639"} {"prompt": "Explain this text code:\n\napplication/com.github.proto-openapi.spec.v3@v1.0+protobuf", "completion": "This text code 1.3 - The Kubernetes API | A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.", "language": "text", "type": "code-example_explanation", "context": "1.3 - The Kubernetes API | A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16640"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16641"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code 2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "language": "json", "type": "api-response_explanation", "context": "2.1 - Nodes | Kubernetes creates a Node object internally (the representation). 
Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16642"} {"prompt": "Explain this shell code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This shell code 2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "shell", "type": "code-example_explanation", "context": "2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16643"} {"prompt": "Complete this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s", "completion": "apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "language": "text", "type": "code-example", "context": "2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16644"} {"prompt": "Explain this text code:\n\nNAME HOLDER AGE\napiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s\napiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s\napiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s", "completion": "This text code 2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "text", "type": "code-example_explanation", "context": "2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16645"} {"prompt": "Explain this shell code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This shell code 2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "shell", "type": "code-example_explanation", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16646"} {"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16647"} {"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: 
apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code 2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16648"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16649"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n 
- nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code 2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16650"} {"prompt": "Explain this bash code:\n\nkubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace", "completion": "This bash code 2.9 - Garbage Collection | Kubernetes checks for and deletes objects that no longer have owner references, like the pods left behind when you delete a ReplicaSet. When you delete an object, you can control whether Kubernetes de", "language": "bash", "type": "kubectl-command_explanation", "context": "2.9 - Garbage Collection | Kubernetes checks for and deletes objects that no longer have owner references, like the pods left behind when you delete a ReplicaSet. 
When you delete an object, you can control whether Kubernetes de", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16651"} {"prompt": "Complete this shell code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "shell", "type": "code-example", "context": "2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16652"} {"prompt": "Explain this shell code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this 
feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This shell code 2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "shell", "type": "code-example_explanation", "context": "2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16653"} {"prompt": "Explain this text code:\n\nsha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07", "completion": "This text code 3.1 - Images | Some image name examples that Kubernetes can use are:", "language": "text", "type": "code-example_explanation", "context": "3.1 - Images | Some image name examples that Kubernetes can use are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16654"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07", "completion": "This text code 3.1 - Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all containers in", "language": "text", "type": "code-example_explanation", "context": "3.1 - Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all 
containers in", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16655"} {"prompt": "Explain this text code:\n\nregistry.k8s.io/pause:3.5@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07", "completion": "This text code 3.1 - Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all containers in", "language": "text", "type": "code-example_explanation", "context": "3.1 - Images | When you first create a Deployment, StatefulSet, Pod, or other object that includes a PodTemplate, and a pull policy was not explicitly specified, then by default the pull policy of all containers in", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16656"} {"prompt": "Explain this text code:\n\nimage@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2", "completion": "This text code 3.1 - Images | When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely i", "language": "text", "type": "code-example_explanation", "context": "3.1 - Images | When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16657"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },", "completion": " \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "language": "json", "type": "api-response", "context": "3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. 
For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16658"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },\n \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "completion": "This json code 3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "language": "json", "type": "api-response_explanation", "context": "3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16659"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },", "completion": " \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "3.1 - Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16660"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },\n \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "completion": "This json code 3.1 - Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "language": "json", "type": "api-response_explanation", "context": "3.1 - 
Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16661"} {"prompt": "Complete this shell code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\", "completion": " --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "language": "shell", "type": "code-example", "context": "3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16662"} {"prompt": "Explain this shell code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\\n --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "completion": "This shell code 3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "language": "shell", "type": "code-example_explanation", "context": "3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. 
Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16663"} {"prompt": "Complete this shell code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo", "completion": " image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "language": "shell", "type": "code-example", "context": "3.1 - Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16664"} {"prompt": "Explain this shell code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo\n image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "completion": "This shell code 3.1 - Images | This needs to be done for each Pod that is using a private registry.", "language": "shell", "type": "code-example_explanation", "context": "3.1 - Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16665"} {"prompt": "Explain this shell code:\n\nFOO_SERVICE_HOST=<the host the service is running on>\nFOO_SERVICE_PORT=<the port the service is running on>", "completion": "This shell code 3.2 - Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "language": "shell", "type": "code-example_explanation", "context": "3.2 - Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "source": "https://kubernetes.io/docs/concepts/_print/", 
"id": "k8s_00617_code_explain_16666"} {"prompt": "Complete this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: myconfiguration", "language": "yaml", "type": "kubernetes-manifest", "context": "3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16667"} {"prompt": "Explain this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: myconfiguration", "completion": "This yaml code 3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16668"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "language": "yaml", "type": "pod-definition", "context": "3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. 
If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16669"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "completion": "This yaml code 3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "language": "yaml", "type": "pod-definition_explanation", "context": "3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16670"} {"prompt": "Explain this json code:\n\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.${HANDLER_NAME}]", "completion": "This json code 3.3 - Runtime Class | See containerd's config documentation for more details:", "language": "json", "type": "api-response_explanation", "context": "3.3 - Runtime Class | See containerd's config documentation for more details:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16671"} {"prompt": "Explain this json code:\n\n[crio.runtime.runtimes.${HANDLER_NAME}]\n runtime_path = \"${PATH_TO_BINARY}\"", "completion": "This json code 3.3 - Runtime Class | See CRI-O's config documentation for more details.", "language": "json", "type": "api-response_explanation", "context": "3.3 - Runtime Class | See CRI-O's config documentation for more details.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16672"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason 
Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling image \"nginx\"", "completion": " Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "language": "text", "type": "code-example", "context": "3.4 - Container Lifecycle Hooks | A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. 
In Kubernetes, a Pod repre", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16673"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...\n Normal Pulled 6s kubelet Successfully pulled image \"nginx\" in 229.604315ms\n Normal Pulling 4s (x2 over 6s) kubelet Pulling image \"nginx\"\n Normal Created 4s (x2 over 5s) kubelet Created container lifecycle-demo-container\n Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo-container\n Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container \"lifecycle-demo-container\" in Pod \"lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)\" failed - error: command 'badcommand' exited with 126: , message: \"OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \\\"badcommand\\\": executable file not found in $PATH: unknown\\r\\n\"\n Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook\n Normal Pulled 4s kubelet Successfully pulled image \"nginx\" in 215.66395ms\n Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container", "completion": "This text code 3.4 - Container Lifecycle Hooks | A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. In Kubernetes, a Pod repre", "language": "text", "type": "code-example_explanation", "context": "3.4 - Container Lifecycle Hooks | A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. 
In Kubernetes, a Pod repre", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16674"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16675"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.1 - Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16676"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This shell code 4.1 - Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "shell", "type": "code-example_explanation", "context": "4.1 - Pods | Pods are generally not created directly and are created using workload resources. 
See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16677"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16678"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "completion": "This yaml code 4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. 
If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16679"} {"prompt": "Explain this text code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\n alessandras-namespace alessandras-pod 0/1 CrashLoopBackOff 200 2d9h", "completion": "This text code 4.1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "language": "text", "type": "code-example_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16680"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:", "completion": " - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16681"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:\n - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code 4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16682"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:", "completion": " - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16683"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:\n - name: init-once # This init container will only try once. If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code 4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. 
This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16684"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes", "completion": " image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16685"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "completion": "This yaml code 4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16686"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16687"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code 4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16688"} {"prompt": "Complete this yaml 
code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16689"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code 4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16690"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | The Pod 
conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16691"} {"prompt": "Explain this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "completion": "This yaml code 4.1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16692"} {"prompt": "Complete this yaml code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16693"} {"prompt": "Explain this yaml code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This yaml code 4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the default.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16694"} {"prompt": "Explain this shell code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This shell code 4.1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "shell", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16695"} {"prompt": "Explain this shell code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'", "completion": "This shell code 4.1.2 - Init Containers | Wait for some time before starting the app container with a command like", "language": "shell", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | Wait for some time before starting the app 
container with a command like", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16696"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "4.1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16697"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code 4.1.2 - Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16698"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 0/1 Init:0/2 0 6m", "completion": "This text code 4.1.2 - Init Containers | or for more details:", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | or for more details:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16699"} {"prompt": "Complete this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False", "completion": "[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} 
spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "language": "text", "type": "code-example", "context": "4.1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16700"} {"prompt": "Explain this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "completion": "This text code 4.1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16701"} {"prompt": "Explain this shell code:\n\nkubectl logs myapp-pod -c init-myservice # Inspect the first init container\nkubectl logs myapp-pod -c init-mydb # Inspect the second init container", "completion": "This shell code 4.1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "language": "shell", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16702"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "4.1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16703"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "completion": "This yaml code 4.1.2 - Init Containers | To create the mydb and myservice services:", "language": "yaml", "type": "service-definition_explanation", "context": "4.1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16704"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 1/1 Running 0 
9m", "completion": "This text code 4.1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16705"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16706"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16707"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16708"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16709"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16710"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16711"} {"prompt": "Explain this text code:\n\nbusybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code 4.1.6 - Pod Hostname | When both setHostnameAsFQDN: true and the subdomain field is set in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for 
that Pod's namespace. In this case, both hostname and hostname --fqdn return the Pod's FQDN.", "language": "text", "type": "code-example_explanation", "context": "4.1.6 - Pod Hostname | When both setHostnameAsFQDN: true and the subdomain field is set in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname --fqdn return the Pod's FQDN.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16712"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16713"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16714"} {"prompt": "Complete this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)", "completion": "# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "language": "text", "type": "code-example", "context": "4.1.8 - User Namespaces | 
Starting with Kubernetes v1.33, the ID count for each Pod can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16715"} {"prompt": "Explain this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)\n# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "completion": "This text code 4.1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each Pod can be set in KubeletConfiguration.", "language": "text", "type": "code-example_explanation", "context": "4.1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each Pod can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16716"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16717"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code 4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. 
Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16718"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.runAsNonRoot", "completion": "This text code 4.1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "4.1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16719"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsNonRoot", "completion": "This text code 4.1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "4.1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. 
In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16720"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsUser", "completion": "This text code 4.1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "4.1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16721"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16722"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | In this example:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16723"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This shell code 4.2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16724"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 0/3 0 0 1s", "completion": "This text code 4.2.1 - Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16725"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 4.2.1 - Deployments | Run the kubectl get deployments again a few seconds later. The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Run the kubectl get deployments again a few seconds later. 
The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16726"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 18s", "completion": "This text code 4.2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16727"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-75675f5897 3 3 3 18s", "completion": "This text code 4.2.1 - Deployments | ReplicaSet output shows the following fields:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | ReplicaSet output shows the following fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16728"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16729"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s 
app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "This text code 4.2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16730"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code 4.2.1 - Deployments | or use the following command:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | or use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16731"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code 4.2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16732"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...", "completion": "This text code 4.2.1 - Deployments | or", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | or", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16733"} {"prompt": "Explain this text code:\n\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 4.2.1 - Deployments | Get more details on your updated Deployment:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get more details on your updated Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16734"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 36s", "completion": "This text code 4.2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16735"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 6s\nnginx-deployment-2035384211 0 0 0 36s", "completion": "This text code 4.2.1 - Deployments | Running get pods should now show only the new Pods:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Running get pods should now show only the new Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16736"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s", "completion": "nginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "language": "text", "type": 
"code-example", "context": "4.2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16737"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s\nnginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "completion": "This text code 4.2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16738"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set 
nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16739"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up 
replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "completion": "This text code 4.2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16740"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16741"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 1 out of 3 new replicas have been updated...", "completion": "This text code 4.2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. 
For more information on stuck rollouts, read more here.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16742"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s", "completion": "nginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16743"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s\nnginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "completion": "This text code 4.2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16744"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s", "completion": "nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_completion_16745"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s\nnginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "completion": "This text code 4.2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16746"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal 
ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16747"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 
{deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "completion": "This text code 4.2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16748"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16749"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE", "completion": "1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation 
kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16750"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE\n1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This text code 4.2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16751"} {"prompt": "Explain this bash code:\n\nkubectl annotate deployment/nginx-deployment kubernetes.io/change-cause=\"image updated to 1.16.1\"", "completion": "This bash code 4.2.1 - Deployments | To see the details of each revision, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | To see the details of each revision, run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16752"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16753"} {"prompt": "Complete this text 
code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:", "completion": " Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16754"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "completion": "This text code 4.2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16755"} {"prompt": "Explain this shell code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16756"} 
{"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 30m", "completion": "This text code 4.2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16757"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m 
deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16758"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller 
Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "completion": "This text code 4.2.1 - Deployments | You can scale a Deployment by using the following command:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16759"} {"prompt": "Explain this shell code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16760"} {"prompt": "Explain this shell code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_16761"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 10 10 10 10 50s", "completion": "This text code 4.2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16762"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16763"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 5 5 0 9s\nnginx-deployment-618515232 8 8 8 1m", "completion": "This text code 4.2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. 
If you", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16764"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 15 18 7 8 7m", "completion": "This text code 4.2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16765"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 7 7 0 7m\nnginx-deployment-618515232 11 11 11 7m", "completion": "This text code 4.2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deplo", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. 
When you're ready to apply those changes, you resume rollouts for the Deplo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16766"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx 3 3 3 3 1m", "completion": "This text code 4.2.1 - Deployments | Get the rollout status:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the rollout status:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16767"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 1m", "completion": "This text code 4.2.1 - Deployments | Pause by running the following command:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Pause by running the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16768"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16769"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16770"} {"prompt": "Explain this text code:\n\ndeployments \"nginx\"\nREVISION CHANGE-CAUSE\n1 <none>", "completion": "This text code 4.2.1 - Deployments | Get the rollout status to verify that the existing 
ReplicaSet has not changed:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the rollout status to verify that the existing ReplicaSet has not changed:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16771"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 2m", "completion": "This text code 4.2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16772"} {"prompt": "Explain this shell code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16773"} {"prompt": "Explain this text code:\n\ndeployment.apps/nginx-deployment resource requirements updated", "completion": "This text code 4.2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_16774"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s", "completion": "nginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16775"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s\nnginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "completion": "This text code 4.2.1 - Deployments | Get the status of the latest rollout:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16776"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 28s", "completion": "This text code 4.2.1 - Deployments | A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | A Deployment enters various states during its lifecycle. 
It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16777"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 of 3 updated replicas are available...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 4.2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16778"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This shell code 4.2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16779"} {"prompt": "Complete this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16780"} {"prompt": "Explain this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "completion": "This text code 4.2.1 - Deployments | If you run kubectl get deployment 
nginx-deployment -o yaml, the Deployment status is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16781"} {"prompt": "Complete this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.", "completion": " reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16782"} {"prompt": "Explain this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has 
minimum availability.\n reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "completion": "This text code 4.2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16783"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. 
If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16784"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "completion": "This text code 4.2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16785"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason", "completion": " ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "language": "text", "type": "code-example", "context": "4.2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16786"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "completion": "This text code 4.2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. 
type: Progressing with", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16787"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\nerror: deployment \"nginx\" exceeded its progress deadline", "completion": "This text code 4.2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "language": "text", "type": "code-example_explanation", "context": "4.2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16788"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16789"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n 
spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code 4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16790"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16791"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code 4.2.1 - Deployments | apiVersion: apps/v1 
kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16792"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16793"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code 4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the 
Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16794"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16795"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster 
will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16796"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This shell code 4.2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16797"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nfrontend 3 3 3 6s", "completion": "This text code 4.2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16798"} {"prompt": "Complete this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:", "completion": " Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16799"} {"prompt": "Explain this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:\n Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "completion": "This text code 4.2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16800"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m", "completion": "frontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. 
To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16801"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m\nfrontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "completion": "This text code 4.2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16802"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16803"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code 4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16804"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16805"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16806"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This shell code 4.2.2 - ReplicaSet | The new Pods will be 
acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16807"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m", "completion": "frontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16808"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m\nfrontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "completion": "This text code 4.2.2 - ReplicaSet | If you create the Pods first:", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16809"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This shell code 4.2.2 - ReplicaSet | And then create the ReplicaSet however:", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16810"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This shell code 4.2.2 - 
ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16811"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s", "completion": "pod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "language": "text", "type": "code-example", "context": "4.2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16812"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s\npod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "completion": "This text code 4.2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "language": "text", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16813"} {"prompt": "Complete this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "shell", "type": "code-example", "context": "4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods 
using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16814"} {"prompt": "Explain this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code 4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16815"} {"prompt": "Complete this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "shell", "type": "code-example", "context": "4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16816"} {"prompt": "Explain this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code 4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16817"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16818"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", 
"completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16819"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This shell code 4.2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16820"} {"prompt": "Explain this shell code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This shell code 4.2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "shell", "type": "code-example_explanation", "context": "4.2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. 
While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16821"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1", "completion": " minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "4.2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16822"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1\n minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: 
registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 4.2.3 - StatefulSets | In the above example:", "language": "yaml", "type": "service-definition_explanation", "context": "4.2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16823"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:", "completion": " name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... other spec fields ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16824"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... 
other spec fields ...", "completion": "This yaml code 4.2.3 - StatefulSets | You can revert to a previous configuration using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16825"} {"prompt": "Complete this bash code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "bash", "type": "shell-command", "context": "4.2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16826"} {"prompt": "Explain this bash code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This bash code 4.2.3 - StatefulSets | This will:", "language": "bash", "type": "shell-command_explanation", "context": "4.2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16827"} {"prompt": "Complete this bash code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "bash", "type": "shell-command", "context": "4.2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16828"} {"prompt": "Explain this bash code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", 
"completion": "This bash code 4.2.3 - StatefulSets | Regularly check revisions with:", "language": "bash", "type": "shell-command_explanation", "context": "4.2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16829"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:", "completion": " persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16830"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:\n persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "completion": "This yaml code 4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. 
This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16831"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16832"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: 
node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16833"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code 4.2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16834"} {"prompt": "Complete this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16835"} {"prompt": "Explain this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This yaml code 4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16836"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16837"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: 
perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 4.2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16838"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This shell code 4.2.5 - Jobs | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16839"} {"prompt": "Complete this bash code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "bash", "type": "shell-command", "context": "4.2.5 - Jobs | apiVersion: batch/v1 kind: Job 
metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16840"} {"prompt": "Explain this bash code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This bash code 4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "bash", "type": "shell-command_explanation", "context": "4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... 
creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16841"} {"prompt": "Complete this bash code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "bash", "type": "shell-command", "context": "4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16842"} {"prompt": "Explain this bash code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... 
\n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This bash code 4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "bash", "type": "shell-command_explanation", "context": "4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16843"} {"prompt": "Explain this shell code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This shell code 4.2.5 - Jobs | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16844"} {"prompt": "Explain this text 
code:\n\n3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390
4780275901", "completion": "This text code 4.2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16845"} {"prompt": "Explain this text code:\n\n/controllers/job-backoff-limit-per-index-example.yaml", "completion": "This text code 4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16846"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | 
In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16847"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16848"} {"prompt": "Explain this sh code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This sh code 4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "sh", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16849"} {"prompt": "Complete this yaml code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16850"} {"prompt": "Explain this yaml code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This yaml code 4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16851"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16852"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16853"} {"prompt": "Explain this text code:\n\nspec.podFailurePolicy.rules[*].onExitCodes.containerName", "completion": "This text code 4.2.5 - Jobs | When you use the podFailurePolicy, and the Job fails due to the pod matching the rule with the FailJob action, then the Job controller triggers the Job termination process by adding the FailureTarget", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | When you use the podFailurePolicy, and the Job fails due to the pod matching the rule with the FailJob action, then the Job controller triggers the Job termination process by adding the FailureTarget", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16854"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16855"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16856"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16857"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16858"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16859"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16860"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16861"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: 
Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code 4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16862"} {"prompt": "Explain this shell code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This shell code 4.2.5 - Jobs | Resume a suspended Job:", "language": "shell", "type": "code-example_explanation", "context": "4.2.5 - Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16863"} {"prompt": "Explain this shell code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This shell code 4.2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "shell", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16864"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long 
the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16865"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code 4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16866"} {"prompt": "Complete this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------", "completion": " Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "language": "text", "type": "code-example", "context": "4.2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. 
In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16867"} {"prompt": "Explain this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "completion": "This text code 4.2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16868"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16869"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16870"} {"prompt": "Explain this text code:\n\nbatch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002", "completion": "This text code 4.2.5 - Jobs | You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically.", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16871"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs 
| The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16872"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16873"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16874"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code 4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16875"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16876"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code 4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16877"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: 
busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16878"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16879"} {"prompt": "Complete this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)", "completion": "# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "language": "text", "type": "code-example", "context": "4.2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16880"} {"prompt": "Explain this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ 
│ ┌───────────── month (1 - 12)\n# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "completion": "This text code 4.2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "language": "text", "type": "code-example_explanation", "context": "4.2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16881"} {"prompt": "Explain this text code:\n\nCannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.", "completion": "This text code 4.2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "language": "text", "type": "code-example_explanation", "context": "4.2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16882"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_completion_16883"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16884"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This shell code 4.2.8 - ReplicationController | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16885"} {"prompt": "Complete this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:", "completion": " Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", 
"language": "text", "type": "code-example", "context": "4.2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16886"} {"prompt": "Explain this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "completion": "This text code 4.2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "language": "text", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. 
A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16887"} {"prompt": "Explain this shell code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This shell code 4.2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16888"} {"prompt": "Explain this shell code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This shell code 4.2.8 - ReplicationController | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16889"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16890"} 
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16891"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This shell code 4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16892"} {"prompt": "Explain this none code:\n\nservice/my-nginx-svc created\ndeployment.apps/my-nginx created", "completion": "This none code 4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16893"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code 4.4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16894"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code 4.4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16895"} {"prompt": "Explain this shell code:\n\nkubectl delete -f 
https://k8s.io/examples/application/nginx-app.yaml", "completion": "This shell code 4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16896"} {"prompt": "Explain this none code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This none code 4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16897"} {"prompt": "Explain this shell code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This shell code 4.4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16898"} {"prompt": "Explain this none code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This none code 4.4 - Managing Workloads | Because kubectl outputs 
resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16899"} {"prompt": "Explain this shell code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This shell code 4.4 - Managing Workloads | The output might be similar to:", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16900"} {"prompt": "Explain this none code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s", "completion": "This none code 4.4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). Then you grep o", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). 
Then you grep o", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16901"} {"prompt": "Complete this none code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml", "completion": "├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "language": "none", "type": "code-example", "context": "4.4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16902"} {"prompt": "Explain this none code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml\n├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "completion": "This none code 4.4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. 
If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16903"} {"prompt": "Explain this none code:\n\nerror: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)", "completion": "This none code 4.4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16904"} {"prompt": "Explain this shell code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This shell code 4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16905"} {"prompt": "Explain this none code:\n\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This none code 4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The --recursive argument works with any operation that 
accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16906"} {"prompt": "Explain this shell code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This shell code 4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16907"} {"prompt": "Complete this none code:\n\nnamespace/development created\nnamespace/staging created", "completion": "configmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "language": "none", "type": "code-example", "context": "4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16908"} {"prompt": "Explain this none code:\n\nnamespace/development created\nnamespace/staging created\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This none code 4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16909"} {"prompt": "Explain this shell 
code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This shell code 4.4 - Managing Workloads | Ensure that there is 1 replica:", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16910"} {"prompt": "Explain this shell code:\n\nkubectl patch deployments/my-nginx --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\": 1}}'", "completion": "This shell code 4.4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16911"} {"prompt": "Explain this shell code:\n\nkubectl patch deployments/my-nginx --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This shell code 4.4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16912"} {"prompt": "Explain this shell code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This shell code 4.4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene.
It ensures that only a certain number of old replicas may be down while they are bein", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16913"} {"prompt": "Complete this shell code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "shell", "type": "code-example", "context": "4.4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16914"} {"prompt": "Explain this shell code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This shell code 4.4 - Managing Workloads | or", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16915"} {"prompt": "Complete this shell code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "shell", "type": "code-example", "context": "4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16916"} {"prompt": "Explain this shell code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This shell code 4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16917"} {"prompt": "Complete this none code:\n\nname: frontend\nreplicas: 3\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "language": "none", "type": "code-example", "context": "4.4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16918"} {"prompt": "Explain this none code:\n\nname: frontend\nreplicas: 3\n...\nlabels:\n app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "completion": "This none code 4.4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. 
canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16919"} {"prompt": "Complete this none code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "language": "none", "type": "code-example", "context": "4.4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16920"} {"prompt": "Explain this none code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:\n app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "completion": "This none code 4.4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. 
omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16921"} {"prompt": "Explain this shell code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This shell code 4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16922"} {"prompt": "Complete this shell code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "shell", "type": "code-example", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16923"} {"prompt": "Explain this shell code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This shell code 4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16924"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nmy-nginx-2035384211-j5fhi 1/1 Running 0 30m", "completion": "This none code 4.4 - Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "language": "none", "type": "code-example_explanation", "context": "4.4 - 
Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16925"} {"prompt": "Explain this shell code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This shell code 4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16926"} {"prompt": "Explain this none code:\n\nhorizontalpodautoscaler.autoscaling/my-nginx autoscaled", "completion": "This none code 4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16927"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code 4.4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16928"} {"prompt": "Complete this shell code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n", "completion": "kubectl 
apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "shell", "type": "code-example", "context": "4.4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16929"} {"prompt": "Explain this shell code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This shell code 4.4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16930"} {"prompt": "Explain this shell code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This shell code 4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "language": "shell", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16931"} {"prompt": "Explain this none code:\n\ndeployment.apps/my-nginx deleted\ndeployment.apps/my-nginx replaced", "completion": "This none code 4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "language": "none", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16932"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16933"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16934"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16935"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code 5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "language": "yaml", "type": "pod-definition_explanation", "context": "5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16936"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16937"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16938"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. 
You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16939"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code 5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16940"} {"prompt": "Explain this bash code:\n\nkubectl port-forward service/<service-name> forwardedPort:servicePort", "completion": "This bash code 5.1 - Service | An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more information, see the ExternalName section.", "language": "bash", "type": "kubectl-command_explanation", "context": "5.1 - Service | An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. 
For more information, see the ExternalName section.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16941"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16942"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code 5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16943"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16944"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code 5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16945"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16946"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code 5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16947"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16948"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This yaml code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16949"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16950"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This yaml code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16951"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16952"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This yaml code 5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16953"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16954"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This yaml code 5.1 - Service | 
metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16955"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16956"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This yaml code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16957"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16958"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n 
annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This yaml code 5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16959"} {"prompt": "Explain this yaml code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This yaml code 5.1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16960"} {"prompt": "Explain this yaml code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This yaml code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16961"} {"prompt": "Complete this yaml code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | Services of type ExternalName map a Service to a DNS name, not to a 
typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16962"} {"prompt": "Explain this yaml code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This yaml code 5.1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16963"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16964"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code 5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": 
"yaml", "type": "service-definition_explanation", "context": "5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16965"} {"prompt": "Complete this shell code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "shell", "type": "code-example", "context": "5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16966"} {"prompt": "Explain this shell code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This shell code 5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "shell", "type": "code-example_explanation", "context": "5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16967"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16968"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code 5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16969"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16970"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16971"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16972"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n 
defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16973"} {"prompt": "Complete this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:", "completion": " Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "language": "text", "type": "code-example", "context": "5.2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16974"} {"prompt": "Explain this text code:\n\nName: ingress-resource-backend\nNamespace: default\nAddress:\nDefault backend: APIGroup: k8s.example.com, Kind: StorageBucket, Name: static-assets\nRules:\n Host Path Backends\n ---- ---- --------\n *\n /icons APIGroup: k8s.example.com, Kind: StorageBucket, Name: icon-assets\nAnnotations: <none>\nEvents: <none>", "completion": "This text code 5.2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. 
There are three supported path types:", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16975"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16976"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16977"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16978"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16979"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: 
external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. You must also s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16980"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code 5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable] If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespace-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16981"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16982"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code 5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16983"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16984"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16985"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16986"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16987"} {"prompt": "Explain this text code:\n\nNAME CLASS HOSTS ADDRESS PORTS AGE\ntest-ingress external-lb * 203.0.113.123 80 59s", "completion": "This text code 5.2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | Where 203.0.113.123 is the IP allocated by the Ingress controller to satisfy this Ingress.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16988"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: 
foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16989"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16990"} {"prompt": "Complete this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "5.2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16991"} {"prompt": "Explain this text code:\n\nName: simple-fanout-example\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:4200 (10.8.0.90:4200)\n /bar service2:8080 (10.8.0.91:8080)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 22s loadbalancer-controller default/test", "completion": "This text code 5.2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. When it has done so, you can see the address", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | The Ingress controller provisions an implementation-specific load balancer that satisfies the Ingress, as long as the Services (service1, service2) exist. 
When it has done so, you can see the address", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16992"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16993"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_16994"} {"prompt": "Explain this text code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml", "completion": "This text code 5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16995"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16996"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16997"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_16998"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code 5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_16999"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17000"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17001"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------", "completion": " foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "5.2 - Ingress | This pops up an editor with the existing configuration in YAML format. 
Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17002"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 35s loadbalancer-controller default/test", "completion": "This text code 5.2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17003"} {"prompt": "Complete this yaml code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17004"} {"prompt": "Explain this yaml code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: 
Prefix\n..", "completion": "This yaml code 5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17005"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com", "completion": " /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "language": "text", "type": "code-example", "context": "5.2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17006"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: default\nAddress: 178.91.123.132\nDefault backend: default-http-backend:80 (10.8.2.3:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n foo.bar.com\n /foo service1:80 (10.8.0.90:80)\n bar.baz.com\n /foo service2:80 (10.8.0.91:80)\nAnnotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ADD 45s loadbalancer-controller default/test", "completion": "This text code 5.2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "language": "text", "type": 
"code-example_explanation", "context": "5.2 - Ingress | You can achieve the same outcome by invoking kubectl replace -f on a modified Ingress YAML file.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17007"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17008"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code 5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. 
Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17009"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17010"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code 5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17011"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17012"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code 5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17013"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: 
GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17014"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code 5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17015"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to 
forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17016"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code 5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified, only requests", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17017"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http", "completion": " protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "language": "yaml", "type": "kubernetes-manifest", "context": "5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. 
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17018"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "completion": "This yaml code 5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17019"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17020"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17021"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17022"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17023"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17024"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 5.6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network 
Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17025"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-ingress.yaml", "completion": "This text code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17026"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17027"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17028"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-allow-all-ingress.yaml", "completion": "This text code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17029"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17030"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17031"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-egress.yaml", "completion": "This text code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17032"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17033"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17034"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-allow-all-egress.yaml", "completion": "This text code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17035"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17036"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17037"} {"prompt": "Explain this text code:\n\nservice/networking/network-policy-default-deny-all.yaml", "completion": "This text code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17038"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17039"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17040"} {"prompt": "Explain 
this text code:\n\nservice/networking/networkpolicy-multiport-egress.yaml", "completion": "This text code 5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and 32768.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and 32768.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17041"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and 32768.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17042"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, 
provided that the target port is between the range 32000 and 32768.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and 32768.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17043"} {"prompt": "Explain this shell code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This shell code 5.6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "shell", "type": "code-example_explanation", "context": "5.6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17044"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17045"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: 
namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code 5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17046"} {"prompt": "Complete this yaml code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17047"} {"prompt": "Explain this yaml code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17048"} {"prompt": "Complete this yaml code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17049"} {"prompt": "Explain this yaml code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This yaml code 5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17050"} {"prompt": "Explain this text code:\n\nnameserver 10.32.0.10\nsearch <namespace>.svc.cluster.local svc.cluster.local cluster.local\noptions ndots:5", "completion": "This text code 5.7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17051"} {"prompt": "Explain this text code:\n\n_port-name._port-protocol.my-svc.my-namespace.svc.cluster-domain.example", "completion": "This text code 5.7 - DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | Kube-DNS versions, prior to the 
implementation of the DNS specification, had the following DNS resolution:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17052"} {"prompt": "Explain this text code:\n\nhostname.my-svc.my-namespace.svc.cluster-domain.example", "completion": "This text code 5.7 - DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17053"} {"prompt": "Explain this text code:\n\n<pod-IPv4-address>.<namespace>.pod.<cluster-domain>", "completion": "This text code 5.7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17054"} {"prompt": "Explain this text code:\n\n<pod-ipv4-address>.<service-name>.<my-namespace>.svc.<cluster-domain.example>", "completion": "This text code 5.7 - DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an 
endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17055"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17056"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: 
busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example\"", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17057"} {"prompt": "Explain this text code:\n\n\"busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example\"", "completion": "This text code 5.7 - DNS for Services and Pods | An EndpointSlice can specify the DNS hostname for any endpoint addresses, along with its IP.", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | An EndpointSlice can specify the DNS hostname for any endpoint addresses, along with its IP.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17058"} {"prompt": "Explain this text code:\n\nbusybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code 5.7 - DNS for Services and Pods | When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. 
For example, if you have a Pod with the fully qualified domain name busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17059"} {"prompt": "Explain this text code:\n\nbusybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code 5.7 - DNS for Services and Pods | When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname --fqdn return the Pod's FQDN.", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname --fqdn return the Pod's FQDN.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17060"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28", "completion": " command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17061"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code 5.7 - DNS for Services and Pods | Pod's DNS Config allows users more 
control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17062"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"", "completion": " dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17063"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code 5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17064"} {"prompt": "Explain this text code:\n\nnameserver 192.0.2.1\nsearch ns1.svc.cluster-domain.example 
my.dns.search.suffix\noptions ndots:2 edns0", "completion": "This text code 5.7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17065"} {"prompt": "Explain this shell code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This shell code 5.7 - DNS for Services and Pods | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17066"} {"prompt": "Explain this text code:\n\nnameserver 2001:db8:30::a\nsearch default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example\noptions ndots:5", "completion": "This text code 5.7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "language": "text", "type": "code-example_explanation", "context": "5.7 - DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. 
This limit applies to the node's resolver", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17067"} {"prompt": "Explain this text code:\n\n--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6", "completion": "This text code 5.8 - IPv4/IPv6 dual-stack | An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range)", "language": "text", "type": "code-example_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17068"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17069"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the service.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17070"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.clusterIPs", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17071"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.clusterIPs", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.clusterIPs", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17072"} {"prompt": "Explain this text code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml", "completion": "This text code 5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "text", "type": "code-example_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17073"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17074"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17075"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17076"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect 
an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17077"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17078"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control 
plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17079"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17080"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17081"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17082"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17083"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP", "completion": " port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. 
In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17084"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "completion": "This yaml code 5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17085"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10", "completion": " ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "language": "yaml", "type": "service-definition", "context": "5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. 
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17086"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "completion": "This yaml code 5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "language": "yaml", "type": "service-definition_explanation", "context": "5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17087"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:", "completion": " app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "language": "yaml", "type": "service-definition", "context": "5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. 
When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17088"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "completion": "This yaml code 5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "language": "yaml", "type": "service-definition_explanation", "context": "5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17089"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. 
Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17090"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code 6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17091"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17092"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17093"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17094"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code 6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17095"} {"prompt": "Explain this cel code:\n\n!has(object.spec.volumes) || !object.spec.volumes.exists(v, has(v.gitRepo))", "completion": "This cel code 6.1 - Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "language": "cel", "type": "code-example_explanation", "context": "6.1 - Volumes | You can use this deprecated storage plugin in your cluster if you explicitly enable the GitRepoVolumeDriver feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17096"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n 
gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17097"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code 6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17098"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux", "completion": " containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the 
hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17099"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17100"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows", "completion": " containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17101"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows\n containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17102"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa", "completion": " name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17103"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa\n name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "completion": "This yaml code 6.1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An image volume source represents an OCI object (a container 
image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17104"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17105"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code 6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17106"} {"prompt": "Explain this text code:\n\nspec.containers[*].volumeMounts.[subPath,subPathExpr]", "completion": "This text code 6.1 - Volumes | The following fields are available for the image type:", "language": "text", "type": "code-example_explanation", "context": "6.1 - Volumes | The following fields are available for the image type:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17107"} {"prompt": "Complete this 
yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete", "completion": " storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "language": "yaml", "type": "kubernetes-manifest", "context": "6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17108"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code 6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17109"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17110"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code 6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17111"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: 
\"<fs-type>\"", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17112"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"<fs-type>\"", "completion": "This yaml code 6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17113"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | Use the 
subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17114"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code 6.1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17115"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17116"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code 6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17117"} {"prompt": "Explain this text code:\n\n.spec.containers[].volumeMounts[].recursiveReadOnly", "completion": "This text code 6.1 - Volumes | The allowed values are:", "language": "text", "type": "code-example_explanation", "context": "6.1 - Volumes | The allowed values are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17118"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17119"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n 
volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17120"} {"prompt": "Explain this text code:\n\n.status.containerStatuses[].volumeMounts[].recursiveReadOnly", "completion": "This text code 6.1 - Volumes | The following container runtimes are known to support recursive read-only mounts.", "language": "text", "type": "code-example_explanation", "context": "6.1 - Volumes | The following container runtimes are known to support recursive read-only mounts.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17121"} {"prompt": "Complete this shell code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "shell", "type": "code-example", "context": "6.2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17122"} {"prompt": "Explain this shell code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This shell code 6.2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "language": "shell", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17123"} {"prompt": "Complete this shell code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "language": "shell", "type": "code-example", "context": "6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17124"} {"prompt": "Explain this shell code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "completion": "This shell code 6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "shell", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17125"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17126"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code 6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume 
that is being recycled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17127"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code 6.2 - Persistent Volumes | The finalizer kubernetes.io/pv-controller(introduced in v1.31) is added to dynamically provisioned in-tree plugin volumes and skipped for statically provisioned in-tree plugin volumes.", "language": "text", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | The finalizer kubernetes.io/pv-controller(introduced in v1.31) is added to dynamically provisioned in-tree plugin volumes and skipped for statically provisioned in-tree plugin volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17128"} {"prompt": "Complete this shell code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "language": "shell", "type": "code-example", "context": "6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17129"} {"prompt": "Explain this shell code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "completion": "This shell code 6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "shell", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17130"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code 6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17131"} {"prompt": "Complete this shell code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "language": "shell", "type": "code-example", "context": "6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17132"} {"prompt": "Explain this shell code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "completion": "This shell code 6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "shell", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17133"} {"prompt": "Explain this text code:\n\nexternal-provisioner.volume.kubernetes.io/finalizer", "completion": "This text code 6.2 - Persistent Volumes | The finalizers ensure that the PV object is removed only after the volume is deleted from the storage backend provided the reclaim policy of the PV is Delete. 
This also ensures that the volume is dele", "language": "text", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | The finalizers ensure that the PV object is removed only after the volume is deleted from the storage backend provided the reclaim policy of the PV is Delete. This also ensures that the volume is dele", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17134"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17135"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code 6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17136"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17137"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code 6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17138"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": "parameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. 
This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17139"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code 6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17140"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17141"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code 6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17142"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17143"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n 
matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code 6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17144"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17145"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code 6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17146"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17147"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17148"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 
10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17149"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17150"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17151"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17152"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17153"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n 
resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17154"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17155"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Kubernetes supports custom volume populators. 
To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17156"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17157"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17158"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17159"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code 6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17160"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce", 
"completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17161"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": "This yaml code 6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17162"} {"prompt": "Explain this text code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml", "completion": "This text code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "text", "type": "code-example_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17163"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17164"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17165"} {"prompt": "Explain this text code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml", "completion": "This text code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "text", "type": "code-example_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17166"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17167"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17168"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17169"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17170"} {"prompt": "Explain this text code:\n\n--runtime-config=certificates.k8s.io/v1beta1/clustertrustbundles=true", "completion": "This text code 6.3 - Projected Volumes | The clusterTrustBundle projected volume source injects the contents of one or more ClusterTrustBundle objects as an automatically-updating file in the container filesystem.", "language": "text", "type": "code-example_explanation", "context": "6.3 - Projected Volumes | The clusterTrustBundle projected volume source injects the contents of one or more ClusterTrustBundle objects as an automatically-updating file in the container filesystem.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17171"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The podCertificate projected volume source securely provisions a private key and X.509 certificate chain for a pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17172"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 6.3 - Projected Volumes | The podCertificate projected volume source securely provisions a private key and X.509 certificate chain for a pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The podCertificate projected volume source securely provisions a private key and X.509 certificate chain for a pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17173"} {"prompt": "Explain this text code:\n\n--runtime-config=certificates.k8s.io/v1alpha1/podcertificaterequests=true", "completion": "This text code 6.3 - Projected Volumes | The podCertificate projected volume source securely provisions a private key and X.509 certificate chain for a pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "language": "text", "type": "code-example_explanation", "context": "6.3 - Projected Volumes | The podCertificate projected volume source securely provisions a private key and X.509 certificate chain for a pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17174"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17175"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: 
debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17176"} {"prompt": "Complete this powershell code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "powershell", "type": "code-example", "context": "6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17177"} {"prompt": "Explain this powershell code:\n\nPS C:\\> Get-Acl 
C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This powershell code 6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "language": "powershell", "type": "code-example_explanation", "context": "6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17178"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17179"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code 6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17180"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17181"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17182"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17183"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17184"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17185"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17186"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17187"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17188"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17189"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17190"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: 
efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17191"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17192"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17193"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17194"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17195"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code 6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17196"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17197"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code 6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. 
This field is op", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17198"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17199"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17200"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This shell code 6.5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "shell", "type": "code-example_explanation", "context": "6.5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17201"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17202"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17203"} {"prompt": "Explain this text code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml", "completion": "This text code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "text", "type": "code-example_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17204"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17205"} {"prompt": "Explain this yaml code:\n\napiVersion: 
storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17206"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17207"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a 
StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17208"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17209"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver\ndriverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "completion": "This yaml code 6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. 
The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17210"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: silver\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17211"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: silver\n …", "completion": "This yaml code 6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17212"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17213"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold\ndriverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "completion": "This yaml code 6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold 
and apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17214"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: gold\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17215"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: gold\n …", "completion": "This yaml code 6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17216"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17217"} {"prompt": "Explain this yaml 
code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17218"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17219"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17220"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17221"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17222"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. 
This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17223"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code 6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17224"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17225"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code 6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. 
Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17226"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17227"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code 6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17228"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17229"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17230"} {"prompt": "Explain this text code:\n\nsnapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"", "completion": "This text code 6.8 - Volume Snapshots | For pre-provisioned snapshots, spec.sourceVolumeMode needs to be populated by the cluster administrator.", "language": "text", "type": "code-example_explanation", "context": "6.8 - Volume Snapshots | For pre-provisioned snapshots, spec.sourceVolumeMode needs to be populated by the cluster administrator.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17231"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17232"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n 
snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17233"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:", "completion": " name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17234"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17235"} {"prompt": "Explain this text code:\n\nsnapshot.storage.kubernetes.io/is-default-class: \"true\"", "completion": "This text code 6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "language": "text", "type": "code-example_explanation", "context": "6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17236"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass", "completion": " annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17237"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\n annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "language": "yaml", 
"type": "kubernetes-manifest_explanation", "context": "6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17238"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "language": "yaml", "type": "kubernetes-manifest", "context": "6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17239"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "completion": "This yaml code 6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17240"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "6.12 - Node-specific Volume Limits | 
Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17241"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code 6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSecon", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17242"} {"prompt": "Explain this shell code:\n\nFOO_SERVICE_HOST=<the host the Service is running on>\nFOO_SERVICE_PORT=<the port the Service is running on>", "completion": "This shell code 7.1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "shell", "type": "code-example_explanation", "context": "7.1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. 
DNS does not have this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17243"} {"prompt": "Explain this json code:\n\n{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }", "completion": "This json code 7.1 - Configuration Best Practices | A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a Deployment.", "language": "json", "type": "api-response_explanation", "context": "7.1 - Configuration Best Practices | A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a Deployment.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17244"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17245"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n 
color.good=purple\n color.bad=yellow\n allow.textmode=true", "completion": "This yaml code 7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17246"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17247"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17248"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17249"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code 7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17250"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17251"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml code 7.2 - ConfigMaps | The following command 
will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17252"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17253"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17254"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17255"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code 7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17256"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17257"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code 7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17258"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17259"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17260"} {"prompt": "Explain this shell code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This shell code 7.3 - Secrets | The output looks like:", "language": "shell", "type": "code-example_explanation", "context": "7.3 - Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17261"} {"prompt": "Explain this text code:\n\nNAME TYPE DATA AGE\nempty-secret Opaque 0 2m6s", "completion": "This text code 7.3 - Secrets | The DATA column shows the number of data items stored in the Secret. In this case, 0 means you have created an empty Secret.", "language": "text", "type": "code-example_explanation", "context": "7.3 - Secrets | The DATA column shows the number of data items stored in the Secret. 
In this case, 0 means you have created an empty Secret.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17262"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17263"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17264"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17265"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17266"} {"prompt": "Complete this shell code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "shell", "type": "code-example", "context": "7.3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17267"} {"prompt": "Explain this shell code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This shell code 7.3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "shell", "type": "code-example_explanation", "context": "7.3 - Secrets | This command creates a Secret of type 
kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17268"} {"prompt": "Explain this shell code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This shell code 7.3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "shell", "type": "code-example_explanation", "context": "7.3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17269"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17270"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code 7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17271"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17272"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17273"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17274"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17275"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17276"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17277"} {"prompt": "Explain this shell code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This shell code 7.3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "shell", "type": "code-example_explanation", "context": "7.3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17278"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17279"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n 
usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17280"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17281"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": 
"This yaml code 7.3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17282"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17283"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17284"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17285"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code 7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17286"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.limits.hugepages-<size>", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resou", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. 
For a particular resource, a Pod resou", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17287"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.requests.hugepages-<size>", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resou", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resou", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17288"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17289"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17290"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17291"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17292"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.limits.ephemeral-storage", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17293"} {"prompt": "Explain this text code:\n\nspec.containers[].resources.requests.ephemeral-storage", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Limits and requests for ephemeral-storage are measured in byte quantities. 
You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17294"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17295"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17296"} {"prompt": "Explain this text code:\n\nLocalStorageCapacityIsolationFSQuotaMonitoring=true", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Ensure the UserNamespacesSupport feature gate is enabled, and that the kernel, CRI implementation and OCI runtime support user namespaces.", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Ensure the UserNamespacesSupport feature gate is enabled, and that the kernel, CRI implementation and OCI runtime support user namespaces.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17297"} {"prompt": "Explain this bash code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This bash code 7.5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "language": "bash", "type": "shell-command_explanation", "context": "7.5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17298"} {"prompt": "Complete this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "shell", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17299"} {"prompt": "Explain this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This shell code 7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "shell", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17300"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"<extender-endpoint>\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17301"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"<extender-endpoint>\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code 7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17302"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17303"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17304"} {"prompt": "Explain this shell code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This shell code 7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "shell", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17305"} {"prompt": "Complete this text code:\n\nEvents:\n Type Reason Age From Message", "completion": " ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17306"} {"prompt": "Explain this text code:\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning FailedScheduling 23s default-scheduler 0/42 nodes available: insufficient cpu", "completion": "This text code 7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17307"} {"prompt": "Complete this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... lines removed for clarity ...]\nNon-terminated Pods: (5 in total)", "completion": " Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17308"} {"prompt": "Explain this text code:\n\nName: e2e-test-node-pool-4lw4\n[ ... lines removed for clarity ...]\nCapacity:\n cpu: 2\n memory: 7679792Ki\n pods: 110\nAllocatable:\n cpu: 1800m\n memory: 7474992Ki\n pods: 110\n[ ... 
lines removed for clarity ...]\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits\n --------- ---- ------------ ---------- --------------- -------------\n kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)\n kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)\n kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)\n kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)\n kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests CPU Limits Memory Requests Memory Limits\n ------------ ---------- --------------- -------------\n 680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)", "completion": "This text code 7.5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17309"} {"prompt": "Complete this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700", "completion": " Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 
2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17310"} {"prompt": "Explain this text code:\n\nName: simmemleak-hra99\nNamespace: default\nImage(s): saadali/simmemleak\nNode: kubernetes-node-tf0f/10.240.216.66\nLabels: name=simmemleak\nStatus: Running\nReason:\nMessage:\nIP: 10.244.2.75\nContainers:\n simmemleak:\n Image: saadali/simmemleak:latest\n Limits:\n cpu: 100m\n memory: 50Mi\n State: Running\n Started: Tue, 07 Jul 2019 12:54:41 -0700\n Last State: Terminated\n Reason: OOMKilled\n Exit Code: 137\n Started: Fri, 07 Jul 2019 12:54:30 -0700\n Finished: Fri, 07 Jul 2019 12:54:33 -0700\n Ready: False\n Restart Count: 5\nConditions:\n Type Status\n Ready False\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f\n Normal Pulled 41s kubelet Container image \"saadali/simmemleak:latest\" already present on machine\n Normal Created 41s kubelet Created container simmemleak\n Normal Started 40s kubelet Started 
container simmemleak\n Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod", "completion": "This text code 7.5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17311"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz", "completion": " name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "7.6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17312"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz\n name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "completion": "This yaml code 7.6 - Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.6 - 
Organizing Cluster Access Using kubeconfig Files | This page outlines the differences in how resources are managed between Linux and Windows.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17313"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17314"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17315"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17316"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.privileged", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17317"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.add", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": 
"text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17318"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.add", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17319"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.add", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17320"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.postStart.tcpSocket.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17321"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.preStop.tcpSocket.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17322"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.postStart.httpGet.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security 
Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17323"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].livenessProbe.tcpSocket.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17324"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].readinessProbe.tcpSocket.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17325"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.postStart.tcpSocket.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17326"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.preStop.tcpSocket.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17327"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.postStart.httpGet.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17328"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.preStop.httpGet.host", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17329"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.appArmorProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17330"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.appArmorProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17331"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.appArmorProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17332"} {"prompt": "Explain this text code:\n\nmetadata.annotations[\"container.apparmor.security.beta.kubernetes.io/*\"]", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17333"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17334"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17335"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17336"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.user", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17337"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.user", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_17338"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.user", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17339"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.role", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17340"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.role", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17341"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.role", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17342"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.procMount", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17343"} {"prompt": "Explain 
this text code:\n\nspec.containers[*].securityContext.seccompProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17344"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seccompProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17345"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seccompProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17346"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17347"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17348"} {"prompt": "Explain this text 
code:\n\nspec.ephemeralContainers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17349"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.runAsNonRoot", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17350"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsNonRoot", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17351"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsUser", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17352"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seccompProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17353"} {"prompt": "Explain this text 
code:\n\nspec.initContainers[*].securityContext.seccompProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17354"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seccompProfile.type", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17355"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.drop", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17356"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.drop", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17357"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.drop", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17358"} {"prompt": "Explain this text 
code:\n\nspec.containers[*].securityContext.capabilities.add", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17359"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.add", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17360"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.add", "completion": "This text code 8.2 - Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "8.2 - Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17361"} {"prompt": "Complete this yaml code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "language": "yaml", "type": "kubernetes-manifest", "context": "8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17362"} {"prompt": "Explain this yaml code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n\n# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "completion": "This yaml code 8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17363"} {"prompt": "Explain this text code:\n\nsystem:serviceaccount:kube-system:replicaset-controller", "completion": "This text code 8.3 - Pod Security Admission | Updates to the following pod fields are exempt from policy checks, meaning that if a pod update request only changes these fields, it will not be denied even if the pod is in violation of the current", "language": "text", "type": "code-example_explanation", "context": "8.3 - Pod Security Admission | Updates to the following pod fields are exempt from policy checks, meaning that if a pod update request only changes these fields, it will not be denied even if the pod is in violation of the current", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17364"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", 
"completion": " annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "language": "yaml", "type": "service-definition", "context": "8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17365"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "completion": "This yaml code 8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "language": "yaml", "type": "service-definition_explanation", "context": "8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17366"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",", "completion": " \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "language": "json", "type": "api-response", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17367"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n 
\"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",\n \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "completion": "This json code 8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "language": "json", "type": "api-response_explanation", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17368"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",", "completion": " \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17369"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "completion": "This json code 8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "language": "json", "type": "api-response_explanation", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17370"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17371"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code 8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17372"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml", "completion": "This text code 9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "text", "type": "code-example_explanation", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17373"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17374"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code 9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.1 - Limit Ranges | along with a Pod that 
declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17375"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml", "completion": "This text code 9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "text", "type": "code-example_explanation", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17376"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17377"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17378"} {"prompt": "Explain this text code:\n\nPod \"example-conflict-with-limitrange-cpu\" is invalid: spec.containers[0].resources.requests: Invalid value: \"700m\": must be less than or equal to cpu limit", "completion": "This text code 9.1 - 
Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "language": "text", "type": "code-example_explanation", "context": "9.1 - Limit Ranges | If you set both request and limit, then that new Pod will be scheduled successfully even with the same LimitRange in place:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17379"} {"prompt": "Explain this text code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml", "completion": "This text code 9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "text", "type": "code-example_explanation", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17380"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17381"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17382"} {"prompt": "Explain this text code:\n\n<storage-class-name>.storageclass.storage.k8s.io/requests.storage", "completion": "This text code 9.2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17383"} {"prompt": "Explain this text code:\n\n<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims", "completion": "This text code 9.2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | For example, if you want to quota storage with gold StorageClass separate from a bronze StorageClass, you can define a quota as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17384"} {"prompt": "Explain this text code:\n\ngold.storageclass.storage.k8s.io/requests.storage: 500Gi", "completion": "This text code 9.2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. 
This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17385"} {"prompt": "Explain this text code:\n\nbronze.storageclass.storage.k8s.io/requests.storage: 100Gi", "completion": "This text code 9.2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17386"} {"prompt": "Complete this yaml code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17387"} {"prompt": "Explain this yaml code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This yaml code 9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17388"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: 
v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17389"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17390"} {"prompt": "Explain this text code:\n\nresourcequota/pods-high created\nresourcequota/pods-medium created\nresourcequota/pods-low created", "completion": "This text code 9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17391"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17392"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 1k\nmemory 0 200Gi\npods 0 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 9.2 - Resource Quotas | Create a pod with priority \"high\". Save the following YAML to a file high-priority-pod.yaml.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Create a pod with priority \"high\". 
Save the following YAML to a file high-priority-pod.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17393"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17394"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17395"} {"prompt": "Complete this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope 
to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17396"} {"prompt": "Explain this text code:\n\nName: pods-high\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 500m 1k\nmemory 10Gi 200Gi\npods 1 10\n\n\nName: pods-low\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 5\nmemory 0 10Gi\npods 0 10\n\n\nName: pods-medium\nNamespace: default\nResource Used Hard\n-------- ---- ----\ncpu 0 10\nmemory 0 20Gi\npods 0 10", "completion": "This text code 9.2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed t", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Operators can use CrossNamespacePodAffinity quota scope to limit which namespaces are allowed to have pods with affinity terms that cross namespaces. 
Specifically, it controls which pods are allowed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17397"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17398"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17399"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: 
ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17400"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17401"} {"prompt": "Explain this text code:\n\nstatus.modifyVolumeStatus.targetVolumeAttributesClassName", "completion": "This text code 9.2 - Resource Quotas | A quota is matched and consumed only if scopeSelector in the quota spec selects the PVC.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | A quota is matched and consumed only if scopeSelector in the quota spec selects the PVC.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17402"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17403"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: 
VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17404"} {"prompt": "Explain this text code:\n\nresourcequota/pvcs-gold created\nresourcequota/pvcs-silver created\nresourcequota/pvcs-copper created", "completion": "This text code 9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Verify that Used quota is 0 using kubectl describe quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17405"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". 
Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17406"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 9.2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Create a pvc with volume attributes class \"gold\". Save the following YAML to a file gold-vac-pvc.yaml.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17407"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17408"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl 
create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17409"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17410"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 9.2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to \"silver\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Once the PVC is bound, it is allowed to modify the desired volume attributes class. 
Let's change it to \"silver\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17411"} {"prompt": "Explain this shell code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This shell code 9.2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17412"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17413"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: 
default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 30Gi", "completion": "This text code 9.2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Let's change it to \"copper\" with kubectl patch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17414"} {"prompt": "Explain this shell code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This shell code 9.2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17415"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17416"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: 
default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 9.2 - Resource Quotas | Print the manifest of the PVC using the following command:", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | Print the manifest of the PVC using the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17417"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17418"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: 
silver\n storageClassName: default", "completion": "This yaml code 9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17419"} {"prompt": "Complete this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard", "completion": "-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17420"} {"prompt": "Explain this text code:\n\nName: pvcs-gold\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 10Gi\n\n\nName: pvcs-silver\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 0 10\nrequests.storage 0 20Gi\n\n\nName: pvcs-copper\nNamespace: default\nResource Used Hard\n-------- ---- ----\npersistentvolumeclaims 1 10\nrequests.storage 2Gi 30Gi", "completion": "This text code 9.2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. 
The quota can be configured to quota either value.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota can be configured to quota either value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17421"} {"prompt": "Complete this shell code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "language": "shell", "type": "code-example", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17422"} {"prompt": "Explain this shell code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This shell code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17423"} {"prompt": "Explain this shell code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This shell code 9.2 - Resource Quotas | kubectl also supports object 
count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17424"} {"prompt": "Complete this shell code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "shell", "type": "code-example", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17425"} {"prompt": "Explain this shell code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This shell code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17426"} {"prompt": "Explain this shell code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": 
"This shell code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17427"} {"prompt": "Explain this none code:\n\nNAME AGE\ncompute-resources 30s\nobject-counts 32s", "completion": "This none code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17428"} {"prompt": "Explain this shell code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This shell code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17429"} {"prompt": "Complete this none code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "limits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "language": "none", "type": "code-example", "context": "9.2 - Resource Quotas | kubectl also 
supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17430"} {"prompt": "Explain this none code:\n\nName: compute-resources\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nlimits.cpu 0 2\nlimits.memory 0 2Gi\nrequests.cpu 0 1\nrequests.memory 0 1Gi\nrequests.nvidia.com/gpu 0 4", "completion": "This none code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17431"} {"prompt": "Explain this shell code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This shell code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17432"} {"prompt": "Complete this none code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10", "completion": "persistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "language": "none", "type": "code-example", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax 
count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17433"} {"prompt": "Explain this none code:\n\nName: object-counts\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\nconfigmaps 0 10\npersistentvolumeclaims 0 4\npods 0 4\nreplicationcontrollers 0 20\nsecrets 1 10\nservices 0 10\nservices.loadbalancers 0 2", "completion": "This none code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "none", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17434"} {"prompt": "Explain this shell code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This shell code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17435"} {"prompt": "Explain this shell code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This shell code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17436"} {"prompt": "Complete this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----", "completion": "count/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17437"} {"prompt": "Explain this text code:\n\nName: test\nNamespace: myspace\nResource Used Hard\n-------- ---- ----\ncount/deployments.apps 1 2\ncount/pods 2 3\ncount/replicasets.apps 1 4\ncount/secrets 1 4", "completion": "This text code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17438"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17439"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "completion": "This yaml code 9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17440"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17441"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17442"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This shell code 9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "shell", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17443"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17444"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This yaml code 9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. 
It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17445"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17446"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code 9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17447"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17448"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This yaml code 9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17449"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17450"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This yaml code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17451"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17452"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This yaml code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. 
It runs in the shared pool.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17453"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17454"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This yaml code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17455"} {"prompt": "Explain this text code:\n\nexample.com.node-restriction.kubernetes.io/fips=true", "completion": "This text code 10.2 - Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. K", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. K", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17456"} {"prompt": "Explain this text code:\n\nexample.com.node-restriction.kubernetes.io/pci-dss=true", "completion": "This text code 10.2 - Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. K", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. 
K", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17457"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:", "completion": " - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17458"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17459"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: 
with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17460"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | If there are two possible nodes that 
match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17461"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity", "completion": " args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "language": "yaml", "type": "kubernetes-manifest", "context": "10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17462"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity\n args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to sati", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17463"} {"prompt": "Explain this text code:\n\npodAffinity.requiredDuringSchedulingIgnoredDuringExecution", "completion": "This text code 10.2 - Assigning Pods to Nodes | Soft Constraints (Scoring):", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | Soft Constraints (Scoring):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17464"} {"prompt": "Explain this text code:\n\npodAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution", "completion": "This text code 10.2 - Assigning Pods to Nodes | Soft Constraints (Scoring):", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | Soft Constraints (Scoring):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17465"} {"prompt": "Explain this text code:\n\npodAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text code 10.2 - Assigning Pods to Nodes | Ignored Fields:", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | Ignored Fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17466"} {"prompt": "Explain this text code:\n\npodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text code 10.2 - Assigning Pods to Nodes | Ignored Fields:", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | Ignored 
Fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17467"} {"prompt": "Explain this text code:\n\npodAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text code 10.2 - Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17468"} {"prompt": "Explain this text code:\n\npodAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution", "completion": "This text code 10.2 - Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that", "language": "text", "type": "code-example_explanation", "context": "10.2 - Assigning Pods to Nodes | If the current Pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. 
This is determined by verifying that", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17469"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17470"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"soft\" preferredDuringSchedulingIgnoredDuringExecution.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"soft\" preferredDuringSchedulingIgnoredDuringExecution.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17471"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:", "completion": " - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | The matchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17472"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The matchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity feature gate.", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The matchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17473"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool", "completion": " podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17474"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17475"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store", "completion": " spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17476"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. 
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17477"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:", "completion": " matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17478"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: 
\"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17479"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17480"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17481"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "language": "yaml", "type": 
"pod-definition", "context": "10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17482"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. 
You", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17483"} {"prompt": "Complete this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17484"} {"prompt": "Explain this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "completion": "This yaml code 10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17485"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - 
name: busybox-ctr\n image: busybox:1.28\n stdin: true", "completion": " tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "language": "yaml", "type": "pod-definition", "context": "10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17486"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true\n tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "completion": "This yaml code 10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "language": "yaml", "type": "pod-definition_explanation", "context": "10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17487"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.overhead}'", "completion": "This bash code 10.3 - Pod Overhead | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.3 - Pod Overhead | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17488"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'", "completion": "This bash code 10.3 - Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.3 - Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17489"} {"prompt": "Explain this text code:\n\nmap[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]", "completion": "This text code 10.3 - Pod Overhead | Check this against what is observed by the node:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | Check this against what is observed by the node:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17490"} {"prompt": "Explain this text code:\n\nNamespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m", "completion": "This text code 10.3 - Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. 
This is an a", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | Check the Pod's memory cgroups on the node where the workload is running. In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17491"} {"prompt": "Explain this bash code:\n\n# Run this on the node where the Pod is scheduled\nPOD_ID=\"$(sudo crictl pods --name test-pod -q)\"", "completion": "This bash code 10.3 - Pod Overhead | From this, you can determine the cgroup path for the Pod:", "language": "bash", "type": "shell-command_explanation", "context": "10.3 - Pod Overhead | From this, you can determine the cgroup path for the Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17492"} {"prompt": "Explain this bash code:\n\n# Run this on the node where the Pod is scheduled\nsudo crictl inspectp -o=json $POD_ID | grep cgroupsPath", "completion": "This bash code 10.3 - Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "language": "bash", "type": "shell-command_explanation", "context": "10.3 - Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17493"} {"prompt": "Explain this text code:\n\n\"cgroupsPath\": \"/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a\"", "completion": "This text code 10.3 - Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. 
Verify the Pod level cgroup setting for memory:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. Verify the Pod level cgroup setting for memory:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17494"} {"prompt": "Explain this bash code:\n\n# Run this on the node where the Pod is scheduled.\n# Also, change the name of the cgroup to match the cgroup allocated for your pod.\n cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes", "completion": "This bash code 10.3 - Pod Overhead | This is 320 MiB, as expected:", "language": "bash", "type": "shell-command_explanation", "context": "10.3 - Pod Overhead | This is 320 MiB, as expected:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17495"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17496"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17497"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\ntest-pod 0/1 SchedulingGated 0 7s", "completion": "This none code 10.4 - Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "language": "none", "type": "code-example_explanation", "context": "10.4 - Pod Scheduling Readiness | You can also check its schedulingGates field by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17498"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code 10.4 - Pod Scheduling Readiness | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.4 - Pod Scheduling Readiness | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17499"} {"prompt": "Explain this none code:\n\n[{\"name\":\"example.com/foo\"},{\"name\":\"example.com/bar\"}]", "completion": "This none code 10.4 - Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "language": "none", "type": "code-example_explanation", "context": "10.4 - Pod Scheduling Readiness | To inform scheduler this Pod is ready for scheduling, you can remove its schedulingGates entirely by reapplying a modified manifest:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17500"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod", "completion": "spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17501"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17502"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code 10.4 - Pod Scheduling Readiness | The output is expected to be empty. And you can check its latest status by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.4 - Pod Scheduling Readiness | The output is expected to be empty. 
And you can check its latest status by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17503"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\ntest-pod 1/1 Running 0 15s 10.0.0.4 node-2", "completion": "This none code 10.4 - Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "language": "none", "type": "code-example_explanation", "context": "10.4 - Pod Scheduling Readiness | The metric scheduler_pending_pods comes with a new label \"gated\" to distinguish whether a Pod has been tried scheduling but claimed as unschedulable, or explicitly marked as not ready for scheduling.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17504"} {"prompt": "Explain this text code:\n\n.requiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms", "completion": "This text code 10.4 - Pod Scheduling Readiness | For .preferredDuringSchedulingIgnoredDuringExecution, all updates are allowed. This is because preferred terms are not authoritative, and so policy controllers don't validate those terms.", "language": "text", "type": "code-example_explanation", "context": "10.4 - Pod Scheduling Readiness | For .preferredDuringSchedulingIgnoredDuringExecution, all updates are allowed. 
This is because preferred terms are not authoritative, and so policy controllers don't validate those terms.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17505"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:", "completion": " - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17506"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:\n - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | 
You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17507"} {"prompt": "Complete this yaml code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17508"} {"prompt": "Explain this yaml code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. 
You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17509"} {"prompt": "Complete this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nnode1 Ready <none> 4m26s v1.16.0 node=node1,zone=zoneA", "completion": "node2 Ready <none> 3m58s v1.16.0 node=node2,zone=zoneA\nnode3 Ready <none> 3m17s v1.16.0 node=node3,zone=zoneB\nnode4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB", "language": "text", "type": "code-example", "context": "10.5 - Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17510"} {"prompt": "Explain this text code:\n\nNAME STATUS ROLES AGE VERSION LABELS\nnode1 Ready <none> 4m26s v1.16.0 node=node1,zone=zoneA\nnode2 Ready <none> 3m58s v1.16.0 node=node2,zone=zoneA\nnode3 Ready <none> 3m17s v1.16.0 node=node3,zone=zoneB\nnode4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB", "completion": "This text code 10.5 - Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | Then the cluster is logically viewed as below:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17511"} {"prompt": "Explain this text code:\n\npods/topology-spread-constraints/one-constraint.yaml", "completion": "This text code 10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). 
The field whenUns", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17512"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17513"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). 
The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17514"} {"prompt": "Explain this text code:\n\npods/topology-spread-constraints/two-constraints.yaml", "completion": "This text code 10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17515"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17516"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17517"} {"prompt": "Explain this text code:\n\npods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml", "completion": "This text code 10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17518"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:", "completion": " foo: bar\n 
affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17519"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17520"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:", "completion": " - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified 
the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17521"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17522"} {"prompt": "Complete this yaml code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"", "completion": " whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17523"} {"prompt": "Explain this yaml code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"\n whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "completion": "This yaml 
code 10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17524"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler", "completion": " pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17525"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17526"} {"prompt": "Complete this yaml 
code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17527"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "completion": "This yaml code 10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17528"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17529"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code 10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17530"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "10.6 - Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17531"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code 10.6 - Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.6 - Taints and Tolerations | The 
default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17532"} {"prompt": "Explain this shell code:\n\nkubectl taint nodes node1 key1=value1:NoSchedule\nkubectl taint nodes node1 key1=value1:NoExecute\nkubectl taint nodes node1 key2=value2:NoSchedule", "completion": "This shell code 10.6 - Taints and Tolerations | And a pod has two tolerations:", "language": "shell", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | And a pod has two tolerations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17533"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"", "completion": " effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17534"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "completion": "This yaml code 10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. 
But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17535"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"", "completion": " value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "language": "yaml", "type": "kubernetes-manifest", "context": "10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17536"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "completion": "This yaml code 10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. 
If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17537"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes nodename dedicated=groupName:NoSchedule", "completion": "This bash code 10.6 - Taints and Tolerations | Nodes with Special Hardware: In a cluster where a small subset of nodes have specialized hardware (for example GPUs), it is desirable to keep pods that don't need the specialized hardware off of those", "language": "bash", "type": "kubectl-command_explanation", "context": "10.6 - Taints and Tolerations | Nodes with Special Hardware: In a cluster where a small subset of nodes have specialized hardware (for example GPUs), it is desirable to keep pods that don't need the specialized hardware off of those", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17538"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes nodename special=true:NoSchedule", "completion": "This bash code 10.6 - Taints and Tolerations | Taint based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section.", "language": "bash", "type": "kubectl-command_explanation", "context": "10.6 - Taints and Tolerations | Taint based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17539"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes nodename special=true:PreferNoSchedule", "completion": "This bash code 10.6 - Taints and Tolerations | Taint based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section.", "language": "bash", "type": "kubectl-command_explanation", "context": "10.6 - Taints and Tolerations | Taint based Evictions: A per-pod-configurable eviction behavior 
when there are node problems, which is described in the next section.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17540"} {"prompt": "Complete this yaml code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"", "completion": " operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "language": "yaml", "type": "kubernetes-manifest", "context": "10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17541"} {"prompt": "Explain this yaml code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"\n operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "completion": "This yaml code 10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17542"} {"prompt": "Explain this go code:\n\nfunc ScoreNode(_ *v1.Pod, n *v1.Node) (int, error) {\n return getBlinkingLightCount(n)\n}", "completion": "This go code 10.7 - Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. 
To fix this, BlinkingLightScorer should also register for this extension point.", "language": "go", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17543"} {"prompt": "Complete this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)", "completion": " }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "language": "go", "type": "code-example", "context": "10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17544"} {"prompt": "Explain this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)\n }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "completion": "This go code 10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "language": "go", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17545"} {"prompt": "Complete this go code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.Pod, *v1.Pod) bool", "completion": "}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.Pod) error\n}\n\n// ...", "language": "go", "type": "code-example", 
"context": "10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17546"} {"prompt": "Explain this go code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.Pod, *v1.Pod) bool\n}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.Pod) error\n}\n\n// ...", "completion": "This go code 10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "language": "go", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17547"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com", "completion": " selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "language": "yaml", "type": "kubernetes-manifest", "context": 
"10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. Your workload", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17548"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. Your workload", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17549"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1", "completion": " # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17550"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1\n # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. 
The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17551"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats", "completion": "spec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "language": "yaml", "type": "pod-definition", "context": "10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17552"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats\nspec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17553"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:", "completion": " devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17554"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17555"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:", "completion": " selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17556"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17557"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:", "completion": " memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17558"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:\n memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17559"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:", "completion": " - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17560"} {"prompt": "Explain this 
yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:\n - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17561"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:", "completion": " requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17562"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic 
Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17563"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:", "completion": " devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17564"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:\n devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. 
If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17565"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.", "completion": " # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17566"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.\n # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17567"} {"prompt": "Complete this yaml code:\n\napiVersion: 
resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1", "completion": " devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17568"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1\n devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17569"} {"prompt": "Explain this bash code:\n\nkubectl get pods -n kube-system | grep kube-scheduler", "completion": "This bash code 10.9 - Scheduler Performance Tuning | to verify that the kube-scheduler component is healthy.", "language": "bash", "type": "kubectl-command_explanation", "context": "10.9 - Scheduler Performance Tuning | to verify that 
the kube-scheduler component is healthy.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17570"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider", "completion": "\n...\n\npercentageOfNodesToScore: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17571"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider\n\n...\n\npercentageOfNodesToScore: 50", "completion": "This yaml code 10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. 
There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17572"} {"prompt": "Explain this text code:\n\nZone 1: Node 1, Node 2, Node 3, Node 4\nZone 2: Node 5, Node 6", "completion": "This text code 10.9 - Scheduler Performance Tuning | The Scheduler evaluates feasibility of the nodes in this order:", "language": "text", "type": "code-example_explanation", "context": "10.9 - Scheduler Performance Tuning | The Scheduler evaluates feasibility of the nodes in this order:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17573"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17574"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code 10.10 - Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin 
Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17575"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3", "completion": " - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17576"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "completion": "This yaml code 10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_17577"} {"prompt": "Complete this yaml code:\n\nshape:\n - utilization: 0", "completion": " score: 0\n - utilization: 100\n score: 10", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17578"} {"prompt": "Explain this yaml code:\n\nshape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10", "completion": "This yaml code 10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. 
To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17579"} {"prompt": "Complete this yaml code:\n\nshape:\n - utilization: 0", "completion": " score: 10\n - utilization: 100\n score: 0", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17580"} {"prompt": "Explain this yaml code:\n\nshape:\n - utilization: 0\n score: 10\n - utilization: 100\n score: 0", "completion": "This yaml code 10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17581"} {"prompt": "Complete this yaml code:\n\nresources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17582"} {"prompt": "Explain this yaml code:\n\nresources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1", "completion": "This yaml code 10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17583"} {"prompt": "Complete this yaml code:\n\nresources:\n - name: intel.com/foo\n weight: 5", "completion": " - name: cpu\n 
weight: 3\n - name: memory\n weight: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17584"} {"prompt": "Explain this yaml code:\n\nresources:\n - name: intel.com/foo\n weight: 5\n - name: cpu\n weight: 3\n - name: memory\n weight: 1", "completion": "This yaml code 10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17585"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8", "completion": "\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17586"} {"prompt": "Explain this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8\n\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "completion": "This text code 10.10 - Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17587"} {"prompt": "Complete this text code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - ((4-3)*100/4))\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 
7 # floor(75/10)\n\nmemory = resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available", "completion": " = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17588"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - ((4-3)*100/4))\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 7 # floor(75/10)\n\nmemory = resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available\n = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "completion": "This text code 10.10 - Resource Bin Packing | Node 2 spec:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17589"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 8\n memory: 
1GB\n cpu: 8\nUsed:\n intel.com/foo: 2\n memory: 512MB\n cpu: 6", "completion": "This text code 10.10 - Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17591"} {"prompt": "Complete this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8))\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75", "completion": " = rawScoringFunction(75)\n = 7\n\ncpu = resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling o", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17592"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8))\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75\n = rawScoringFunction(75)\n = 7\n\ncpu = resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "completion": "This text code 10.10 - Resource Bin Packing | Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. 
If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling o", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling o", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17593"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17594"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "completion": "This yaml code 10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled will stay in the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. 
A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17595"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting", "completion": "value: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17596"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting\nvalue: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "completion": "This yaml code 10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. 
The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17597"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "language": "yaml", "type": "pod-definition", "context": "10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17598"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "completion": "This yaml code 10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "language": "yaml", "type": "pod-definition_explanation", "context": "10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. 
As a result, the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17599"} {"prompt": "Explain this text code:\n\nlocal volumes + logs and a writable layer of all containers", "completion": "This text code 10.12 - Node-pressure Eviction | If nodefs triggers evictions, the kubelet sorts pods based on nodefs usage (local volumes + logs of all containers).", "language": "text", "type": "code-example_explanation", "context": "10.12 - Node-pressure Eviction | If nodefs triggers evictions, the kubelet sorts pods based on nodefs usage (local volumes + logs of all containers).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17600"} {"prompt": "Explain this text code:\n\nlocal volumes + logs and a writable layer of all containers", "completion": "This text code 10.12 - Node-pressure Eviction | If imagefs triggers evictions, the kubelet sorts pods based on the storage of images rank, which represents the disk usage of a given image.", "language": "text", "type": "code-example_explanation", "context": "10.12 - Node-pressure Eviction | If imagefs triggers evictions, the kubelet sorts pods based on the storage of images rank, which represents the disk usage of a given image.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17601"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"", "completion": " imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the 
minimu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17602"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"\n imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "completion": "This yaml code 10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17603"} {"prompt": "Explain this none code:\n\n--eviction-hard=memory.available<500Mi\n--system-reserved=memory=1.5Gi", "completion": "This none code 10.12 - Node-pressure Eviction | In this configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "language": "none", "type": "code-example_explanation", "context": "10.12 - Node-pressure Eviction | In this configuration, the --system-reserved flag reserves 1.5GiB of memory for the system, which is 10% of the total memory + the eviction threshold amount.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17604"} {"prompt": "Explain this text code:\n\n10% of the total memory + the eviction threshold amount", "completion": "This text code 10.12 - Node-pressure Eviction | The node can reach the eviction threshold 
if a pod is using more than its request, or if the system is using more than 1GiB of memory, which makes the memory.available signal fall below 500MiB and tri", "language": "text", "type": "code-example_explanation", "context": "10.12 - Node-pressure Eviction | The node can reach the eviction threshold if a pod is using more than its request, or if the system is using more than 1GiB of memory, which makes the memory.available signal fall below 500MiB and tri", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17605"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17606"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code 10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "language": "json", "type": "api-response_explanation", "context": "10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17607"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n 
\"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17608"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code 10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "language": "json", "type": "api-response_explanation", "context": "10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17609"} {"prompt": "Explain this bash code:\n\ncurl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json", "completion": "This bash code 10.13 - API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "language": "bash", "type": "shell-command_explanation", "context": "10.13 - API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17610"} {"prompt": "Explain this text code:\n\nReason: Terminated\nMessage: Pod was terminated in response to imminent node shutdown.", "completion": "This text 
code 11.1 - Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "language": "text", "type": "code-example_explanation", "context": "11.1 - Node Shutdowns | To provide more flexibility during graceful node shutdown around the ordering of pods during shutdown, graceful node shutdown honors the PriorityClass for Pods, provided that you enabled this feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17611"} {"prompt": "Complete this yaml code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17612"} {"prompt": "Explain this yaml code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This yaml code 11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.1 - Node Shutdowns | The above table implies that any pod with 
priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17613"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% ", "completion": "node2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "language": "text", "type": "code-example", "context": "11.2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17614"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%) \nnode1 1m 10% 2Mi 10% 1Mi 0% \nnode2 5m 10% 6Mi 10% 2Mi 0% \nnode3 3m 10% 4Mi 10% <unknown> <unknown>", "completion": "This text code 11.2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | In order to receive information about swap usage by pods, kubectl top nodes --show-swap can be used:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17615"} {"prompt": "Complete this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi", "completion": "kube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "language": "text", "type": "code-example", "context": "11.2 - Swap memory management | A new node status field is now 
added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17616"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes) SWAP(bytes)\ncoredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi\ncoredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi\netcd-node01 51m 143Mi 5Mi\nkube-apiserver-node01 98m 824Mi 16Mi\nkube-controller-manager-node01 20m 135Mi 9Mi\nkube-proxy-ffgs2 1m 24Mi 0Mi\nkube-proxy-fhvwx 1m 39Mi 0Mi\nkube-scheduler-node01 13m 69Mi 0Mi\nmetrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi", "completion": "This text code 11.2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | A new node status field is now added, node.status.nodeInfo.swap.capacity, to report the swap capacity of a node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17617"} {"prompt": "Explain this shell code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}<unknown>{{end}}{{\"\\n\"}}{{end}}'", "completion": "This shell code 11.2 - Swap memory management | This will result in an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "11.2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17618"} {"prompt": "Explain this text code:\n\nnode1: 21474836480\nnode2: 42949664768\nnode3: <unknown>", "completion": "This text code 11.2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. 
It can be utilized to discover which nodes are provisioned with swap.", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | Node Feature Discovery is a Kubernetes addon for detecting hardware features and configuration. It can be utilized to discover which nodes are provisioned with swap.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17619"} {"prompt": "Explain this shell code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This shell code 11.2 - Swap memory management | This will result in an output similar to:", "language": "shell", "type": "code-example_explanation", "context": "11.2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17620"} {"prompt": "Explain this text code:\n\nk8s-worker1: true\nk8s-worker2: true\nk8s-worker3: false", "completion": "This text code 11.2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "language": "text", "type": "code-example_explanation", "context": "11.2 - Swap memory management | In this example, swap is provisioned on nodes k8s-worker1 and k8s-worker2, but not on k8s-worker3.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17621"} {"prompt": "Explain this promql code:\n\nsum(rate(workqueue_adds_total{name=\"resource_claim\"}[5m]))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. 
Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17622"} {"prompt": "Explain this promql code:\n\nsum(workqueue_depth{endpoint=\"kube-controller-manager\",\nname=\"resource_claim\"})", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17623"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(workqueue_work_duration_seconds_bucket{name=\"resource_claim\"}[5m]))\nby (le))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. 
Consider tuning parameters like QPS,", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | If you are experiencing low Workqueue Add Rate, high Workqueue Depth, and/or high Workqueue Work Duration, this suggests the controller isn't performing optimally. Consider tuning parameters like QPS,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17624"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(increase(scheduler_pod_scheduling_sli_duration_seconds_bucket[5m])) by\n(le))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17625"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(increase(scheduler_scheduling_algorithm_duration_seconds_bucket[5m])) by\n(le))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. 
You can observe this behavior from the kube", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | When a Pod bound to a node must have a ResourceClaim satisfied, kubelet calls the NodePrepareResources and NodeUnprepareResources methods of the DRA driver. You can observe this behavior from the kube", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17626"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_operations_duration_seconds_bucket{operation_name=\"PrepareResources\"}[5m]))\nby (le))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17627"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_operations_duration_seconds_bucket{operation_name=\"UnprepareResources\"}[5m]))\nby (le))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. 
You can observe this behavior", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | DRA drivers implement the kubeletplugin package interface which surfaces its own metric for the underlying gRPC operation NodePrepareResources and NodeUnprepareResources. You can observe this behavior", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17628"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_grpc_operations_duration_seconds_bucket{method_name=~\".*NodePrepareResources\"}[5m]))\nby (le))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications ha", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications ha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17629"} {"prompt": "Explain this promql code:\n\nhistogram_quantile(0.99,\nsum(rate(dra_grpc_operations_duration_seconds_bucket{method_name=~\".*NodeUnprepareResources\"}[5m]))\nby (le))", "completion": "This promql code 11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. 
Most modern applications ha", "language": "promql", "type": "code-example_explanation", "context": "11.7 - Good practices for Dynamic Resource Allocation as a Cluster Admin | Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications ha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17630"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17631"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17632"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This shell code 11.8 - Logging Architecture | The output is:", "language": "shell", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17633"} {"prompt": "Explain this console 
code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This console code 11.8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "console", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17634"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17635"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17636"} {"prompt": "Explain this shell code:\n\nkubectl apply -f 
https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This shell code 11.8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "shell", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17637"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", "completion": "This shell code 11.8 - Logging Architecture | See the kubectl logs documentation for more details.", "language": "shell", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17638"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17639"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17640"} {"prompt": "Explain this text code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml", "completion": "This text code 11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17641"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17642"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) 
INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17643"} {"prompt": "Complete this console code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "console", "type": "code-example", "context": "11.8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17644"} {"prompt": "Explain this console code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This console code 11.8 - Logging Architecture | The output is similar to:", "language": "console", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17645"} {"prompt": "Complete this console code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "console", "type": "code-example", "context": 
"11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17646"} {"prompt": "Explain this console code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This console code 11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "console", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17647"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "language": "yaml", "type": "kubernetes-manifest", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17648"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "completion": "This yaml code 11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17649"} {"prompt": "Explain this text code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml", "completion": "This text code 11.8 - Logging Architecture |", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17650"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n 
env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17651"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 11.8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17652"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17653"} {"prompt": 
"Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code 11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17654"} {"prompt": "Explain this text code:\n\n# HELP some_counter this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | After deprecation", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | After deprecation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17655"} {"prompt": "Explain this text code:\n\n# HELP some_counter (Deprecated since 1.15.0) this counts things\n# TYPE some_counter counter\nsome_counter 0", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the Show hidden metrics section.", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Hidden metrics are no longer published for scraping, but are still available for use. 
To use a hidden metric, please refer to the Show hidden metrics section.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17656"} {"prompt": "Complete this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}", "completion": "cloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "language": "text", "type": "code-example", "context": "11.10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17657"} {"prompt": "Explain this text code:\n\ncloudprovider_gce_api_request_duration_seconds { request = \"instance_list\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_insert\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"disk_delete\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"attach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"detach_disk\"}\ncloudprovider_gce_api_request_duration_seconds { request = \"list_disk\"}", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. 
These metrics can be used to build capacity planning dashboards, assess current", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | The scheduler exposes optional metrics that reports the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17658"} {"prompt": "Complete this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total", "completion": "container_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "language": "text", "type": "code-example", "context": "11.10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17659"} {"prompt": "Explain this text code:\n\ncontainer_pressure_cpu_stalled_seconds_total\ncontainer_pressure_cpu_waiting_seconds_total\ncontainer_pressure_memory_stalled_seconds_total\ncontainer_pressure_memory_waiting_seconds_total\ncontainer_pressure_io_stalled_seconds_total\ncontainer_pressure_io_waiting_seconds_total", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. The information is also exposed in the Summary API.", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | This feature is enabled by default, by setting the KubeletPSI feature gate. 
The information is also exposed in the Summary API.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17660"} {"prompt": "Explain this text code:\n\n--allow-metric-labels <metric_name>,<label_name>='<allow_value1>, <allow_value2>...', <metric_name2>,<label_name>='<allow_value1>, <allow_value2>...', ...", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | Here is an example:", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17661"} {"prompt": "Explain this none code:\n\n--allow-metric-labels number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'", "completion": "This none code 11.10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "language": "none", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | In addition to specifying this from the CLI, this can also be done within a configuration file. 
You can specify the path to that configuration file using the --allow-metric-labels-manifest command lin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17662"} {"prompt": "Explain this yaml code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This yaml code 11.10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17663"} {"prompt": "Explain this text code:\n\ncardinality_enforcement_unexpected_categorizations_total", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | The state of Kubernetes objects in the Kubernetes API can be exposed as metrics. An add-on agent called kube-state-metrics can connect to the Kubernetes API server and expose a HTTP endpoint with metr", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | The state of Kubernetes objects in the Kubernetes API can be exposed as metrics. 
An add-on agent called kube-state-metrics can connect to the Kubernetes API server and expose a HTTP endpoint with metr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17664"} {"prompt": "Explain this text code:\n\ncount(kube_pod_status_ready{condition=\"false\"}) by (namespace, pod)", "completion": "This text code 11.11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "language": "text", "type": "code-example_explanation", "context": "11.11 - Metrics for Kubernetes Object States | Metrics generated from kube-state-metrics also allow for alerting on issues in the cluster.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17665"} {"prompt": "Complete this yaml code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0", "completion": " for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "language": "yaml", "type": "kubernetes-manifest", "context": "11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17666"} {"prompt": "Explain this yaml code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0\n for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "completion": "This yaml code 11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17667"} {"prompt": "Explain this text code:\n\nkube-log-runner -log-file=/tmp/log -redirect-stderr=false", "completion": "This text code 11.12 - System Logs | An example of the traditional klog native format:", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | An example of the traditional klog native format:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17668"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/pods/metrics-server-v0.3.1-57c75779f-9p8wg: (1.512ms) 200 [pod_nanny/v0.0.0 (linux/amd64) kubernetes/$Format 10.56.1.19:51756]", "completion": "This text code 11.12 - System Logs | The message string may contain line breaks:", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | The message string may contain line breaks:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17669"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:79] This is a message\nwhich has a line break.", "completion": "This text code 11.12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. 
When parsing log files, you must also handle unstructured log messages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17670"} {"prompt": "Explain this text code:\n\n<klog header> \"<message>\" <key1>=\"<value1>\" <key2>=\"<value2>\" ...", "completion": "This text code 11.12 - System Logs | Example:", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17671"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 controller_utils.go:116] \"Pod status updated\" pod=\"kube-system/kubedns\" status=\"ready\"", "completion": "This text code 11.12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Strings are quoted. Other values are formatted with %+v, which may cause log messages to continue on the next line depending on the data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17672"} {"prompt": "Explain this text code:\n\nI1025 00:15:15.525108 1 example.go:116] \"Example\" data=\"This is text with a line break\\nand \\\"quotation marks\\\".\" someInt=1 someFloat=0.1 someStruct={StringField: First line,\nsecond line.}", "completion": "This text code 11.12 - System Logs | Contextual logging builds on top of structured logging. It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | Contextual logging builds on top of structured logging. 
It is primarily about how developers use logging calls: code based on that concept is more flexible and supports additional use cases as describ", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17673"} {"prompt": "Complete this console code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . --feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "console", "type": "code-example", "context": "11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17674"} {"prompt": "Explain this console code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This console code 11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "console", "type": "code-example_explanation", "context": "11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17675"} {"prompt": "Complete this console code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "console", "type": "code-example", "context": "11.12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17676"} {"prompt": "Explain this console code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This console code 11.12 - System Logs | JSON output does not support many standard klog flags. 
For list of unsupported klog flags, see the Command line tool reference.", "language": "console", "type": "code-example_explanation", "context": "11.12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17677"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "11.12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17678"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code 11.12 - System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "11.12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17679"} {"prompt": "Explain this shell code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This shell code 11.12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "shell", "type": "code-example_explanation", "context": "11.12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17680"} {"prompt": "Explain this shell code:\n\nkubectl get --raw \"/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>\"", "completion": "This shell code 11.12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "shell", "type": "code-example_explanation", "context": "11.12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17681"} {"prompt": "Explain this shell code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This shell code 11.12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "language": "shell", "type": "code-example_explanation", "context": "11.12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17682"} {"prompt": "Complete this yaml code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "yaml", "type": "kubernetes-manifest", "context": "11.13 - Traces For Kubernetes 
System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17683"} {"prompt": "Explain this yaml code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n exporters:\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This yaml code 11.13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. 
This method negates", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17684"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17685"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code 11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17686"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17687"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n 
samplingRatePerMillion: 100", "completion": "This yaml code 11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17688"} {"prompt": "Explain this shell code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This shell code 11.15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "language": "shell", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17689"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:", "completion": " - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17690"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17691"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_wait_duration_seconds", "completion": "This text code 11.15 - API Priority and Fairness | apiserver_flowcontrol_nominal_limit_seats is a gauge vector holding each priority level's nominal concurrency limit, computed from the API server's total concurrency limit and the priority level's con", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | apiserver_flowcontrol_nominal_limit_seats is a gauge vector holding each priority level's nominal concurrency limit, computed from the API server's total concurrency limit and the priority level's con", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17692"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_read_vs_write_current_requests", "completion": "This text code 11.15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_in_use is a gauge vector holding the instantaneous number of occupied seats, broken down by priority_level and flow_schema.", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_in_use is a gauge vector holding the instantaneous number of occupied seats, broken down by priority_level and flow_schema.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17693"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_priority_level_request_utilization", "completion": "This text code 11.15 - API Priority and Fairness | apiserver_flowcontrol_priority_level_seat_utilization is a histogram vector of observations, made at the end of each nanosecond, of the utilization of a priority level's 
concurrency limit, broken down", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | apiserver_flowcontrol_priority_level_seat_utilization is a histogram vector of observations, made at the end of each nanosecond, of the utilization of a priority level's concurrency limit, broken down", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17694"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_priority_level_seat_utilization", "completion": "This text code 11.15 - API Priority and Fairness | apiserver_flowcontrol_request_queue_length_after_enqueue is a histogram vector of queue lengths for the queues, broken down by priority_level and flow_schema, as sampled by the enqueued requests. Each", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | apiserver_flowcontrol_request_queue_length_after_enqueue is a histogram vector of queue lengths for the queues, broken down by priority_level and flow_schema, as sampled by the enqueued requests. Each", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17695"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_queue_length_after_enqueue", "completion": "This text code 11.15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_limit is the same as apiserver_flowcontrol_nominal_limit_seats. Before the introduction of concurrency borrowing between priority levels, this was always equa", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | apiserver_flowcontrol_request_concurrency_limit is the same as apiserver_flowcontrol_nominal_limit_seats. 
Before the introduction of concurrency borrowing between priority levels, this was always equa", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17696"} {"prompt": "Explain this text code:\n\napiserver_flowcontrol_request_dispatch_no_accommodation_total", "completion": "This text code 11.15 - API Priority and Fairness | apiserver_flowcontrol_epoch_advance_total is a counter vector of the number of attempts to jump a priority level's progress meter backward to avoid numeric overflow, grouped by priority_level and succ", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | apiserver_flowcontrol_epoch_advance_total is a counter vector of the number of attempts to jump a priority level's progress meter backward to avoid numeric overflow, grouped by priority_level and succ", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17697"} {"prompt": "Explain this text code:\n\npriority-and-fairness/list-events-default-service-account.yaml", "completion": "This text code 11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17698"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", 
"context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17699"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17700"} {"prompt": "Explain this text code:\n\n--runtime-config=\"coordination.k8s.io/v1beta1=true\"", "completion": "This text code 11.17 - Coordinated Leader Election | Provided that you have enabled the CoordinatedLeaderElection feature gate andhave the coordination.k8s.io/v1beta1 API group enabled, compatible control planecomponents automatically use the LeaseCandi", "language": "text", "type": "code-example_explanation", "context": "11.17 - Coordinated Leader Election | Provided that you have enabled the CoordinatedLeaderElection feature gate andhave the coordination.k8s.io/v1beta1 API group enabled, compatible control planecomponents automatically use the LeaseCandi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17701"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.readOnlyRootFilesystem", "completion": "This text code 12.1 - Windows containers in Kubernetes | In 
the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. If any of these fields is spe", "language": "text", "type": "code-example_explanation", "context": "12.1 - Windows containers in Kubernetes | In the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. If any of these fields is spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17702"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code 12.1 - Windows containers in Kubernetes | In the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. If any of these fields is spe", "language": "text", "type": "code-example_explanation", "context": "12.1 - Windows containers in Kubernetes | In the above list, wildcards (*) indicate all elements in a list. For example, spec.containers[*].securityContext refers to the SecurityContext object for all containers. 
If any of these fields is spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17703"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver", "completion": " name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) 
;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17704"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver\n name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } 
;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "language": "yaml", "type": "deployment-definition_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17705"} {"prompt": "Explain this bash code:\n\nkubectl apply -f win-webserver.yaml\nkubectl get pods -o wide -w", "completion": "This bash code 12.2 - Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "language": "bash", "type": "kubectl-command_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17706"} {"prompt": "Complete this yaml code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:", "completion": " - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "kubernetes-manifest", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. 
If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17707"} {"prompt": "Explain this yaml code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:\n - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17708"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:", "completion": " kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "language": "yaml", "type": "kubernetes-manifest", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17709"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n 
name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:\n kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml using as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17710"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:", "completion": " cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "language": "yaml", "type": "deployment-definition", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. 
As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17711"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:\n cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "language": "yaml", "type": "deployment-definition_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. 
As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17712"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "13.1.1 - Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17713"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "completion": "This json code 13.1.1 - Network Plugins | Experimental Feature", "language": "json", "type": "api-response_explanation", "context": "13.1.1 - Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17714"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n 
\"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17715"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "completion": "This json code 13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "language": "json", "type": "api-response_explanation", "context": "13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17716"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "language": "yaml", "type": "pod-definition", "context": "13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17717"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "completion": "This yaml code 13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "language": "yaml", "type": "pod-definition_explanation", "context": "13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17718"} {"prompt": "Explain this gRPC code:\n\nservice Registration {\n\trpc Register(RegisterRequest) returns (Empty) {}\n}", "completion": "This gRPC code 13.1.2 - Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. During the registration, the device plugin needs to send:", "language": "gRPC", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. 
During the registration, the device plugin needs to send:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17719"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8", "completion": " resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "language": "yaml", "type": "pod-definition", "context": "13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17720"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8\n resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "completion": "This yaml code 13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "language": "yaml", "type": "pod-definition_explanation", "context": "13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17721"} {"prompt": "Complete this gRPC code:\n\nservice DevicePlugin {\n // GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) 
returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}", "completion": "\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "language": "gRPC", "type": "code-example", "context": "13.1.2 - Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17722"} {"prompt": "Explain this gRPC code:\n\nservice DevicePlugin {\n // GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}\n\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "completion": "This gRPC code 13.1.2 - Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "language": "gRPC", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17723"} {"prompt": "Complete this gRPC code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {", "completion": " rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "language": "gRPC", "type": "code-example", "context": "13.1.2 - Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17724"} {"prompt": "Explain this gRPC code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {\n rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc GetAllocatableResources(AllocatableResourcesRequest) returns 
(AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "completion": "This gRPC code 13.1.2 - Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "language": "gRPC", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17725"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code 13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. 
If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17726"} {"prompt": "Complete this gRPC code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n", "completion": "// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo topology = 3;\n}\n\n// DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies a CDI device information\nmessage CDIDevice 
{\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "language": "gRPC", "type": "code-example", "context": "13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17727"} {"prompt": "Explain this gRPC code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\n// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo topology = 3;\n}\n\n// DynamicResource contains information about the devices 
assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies a CDI device information\nmessage CDIDevice {\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "completion": "This gRPC code 13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "gRPC", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17728"} {"prompt": "Complete this gRPC code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;", "completion": " repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "language": "gRPC", "type": "code-example", "context": "13.1.2 - Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. 
The NUMA cells are identified using a opaque integer ID, which value is consistent to what devic", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17729"} {"prompt": "Explain this gRPC code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;\n repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "completion": "This gRPC code 13.1.2 - Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using a opaque integer ID, which value is consistent to what devic", "language": "gRPC", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using a opaque integer ID, which value is consistent to what devic", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17730"} {"prompt": "Complete this gRPC code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {", "completion": " string pod_name = 1;\n string pod_namespace = 2;\n}", "language": "gRPC", "type": "code-example", "context": "13.1.2 - Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17731"} {"prompt": "Explain this gRPC code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {\n string pod_name = 1;\n string pod_namespace = 2;\n}", "completion": "This gRPC code 13.1.2 - Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "language": "gRPC", "type": 
"code-example_explanation", "context": "13.1.2 - Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17732"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code 13.1.2 - Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17733"} {"prompt": "Complete this gRPC code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}", "completion": "\nmessage NUMANode {\n int64 ID = 1;\n}", "language": "gRPC", "type": "code-example", "context": "13.1.2 - Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17734"} {"prompt": "Explain this gRPC code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\nmessage NUMANode {\n int64 ID = 1;\n}", "completion": "This gRPC code 13.1.2 - Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. 
The", "language": "gRPC", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17735"} {"prompt": "Explain this text code:\n\npluginapi.Device{ID: \"25102017\", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}}", "completion": "This text code 13.1.2 - Device Plugins | Here are some examples of device plugin implementations:", "language": "text", "type": "code-example_explanation", "context": "13.1.2 - Device Plugins | Here are some examples of device plugin implementations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17736"} {"prompt": "Explain this text code:\n\nContent-Type: application/strategic-merge-patch+json", "completion": "This text code 13.2.1 - Custom Resources | When you create a custom resource, either via a CRD or an AA, you get many features for your API, compared to implementing it outside the Kubernetes platform:", "language": "text", "type": "code-example_explanation", "context": "13.2.1 - Custom Resources | When you create a custom resource, either via a CRD or an AA, you get many features for your API, compared to implementing it outside the Kubernetes platform:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17737"} {"prompt": "Explain this text code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml", "completion": "This text code 13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "language": "text", "type": "code-example_explanation", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only 
resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17738"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object", "completion": " properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17739"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code 13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13.2.1 - Custom Resources | 
Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17740"} {"prompt": "Explain this shell code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This shell code 13.2.1 - Custom Resources | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "13.2.1 - Custom Resources | The output should be:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17741"} {"prompt": "Explain this text code:\n\nNAME COLOR SIZE\nexample1 blue S\nexample2 blue M", "completion": "This text code 13.2.1 - Custom Resources | The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs. The additional APIs can either be ready-made solutions such as a metric", "language": "text", "type": "code-example_explanation", "context": "13.2.1 - Custom Resources | The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs. The additional APIs can either be ready-made solutions such as a metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17742"} {"prompt": "Explain this shell code:\n\nkubectl get SampleDB # find configured databases\n\nkubectl edit SampleDB/example-database # manually change some settings", "completion": "This shell code 13.3 - Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "language": "shell", "type": "code-example_explanation", "context": "13.3 - Operator pattern | …and that's it! 
The operator will take care of applying the changes as well as keeping the existing service in good shape.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17743"} {"prompt": "Complete this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17744"} {"prompt": "Explain this yaml code:\n\napplication/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17745"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17746"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 2 # tells deployment to run 2 pods matching the template\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an exam", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2 - Objects In Kubernetes | One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. 
Here's an exam", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17747"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/deployment.yaml", "completion": "This bash code 1.2 - Objects In Kubernetes | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2 - Objects In Kubernetes | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17748"} {"prompt": "Explain this bash code:\n\nkubectl diff -R -f configs/\nkubectl apply -R -f configs/", "completion": "This bash code 1.2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.1 - Kubernetes Object Management | Advantages compared to imperative object configuration:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17749"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17750"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.2 - Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17751"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17752"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"labels\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17753"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production", "completion": " app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. 
In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17754"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: label-demo\n labels:\n environment: production\n app: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.3 - Labels and Selectors | Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17755"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:", "completion": " - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "language": "yaml", "type": "pod-definition", "context": "1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17756"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cuda-test\nspec:\n containers:\n - name: cuda-test\n image: \"registry.k8s.io/cuda-vector-add:v0.1\"\n resources:\n limits:\n nvidia.com/gpu: 1\n nodeSelector:\n accelerator: nvidia-tesla-p100", "completion": "This yaml code 1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. 
Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.3 - Labels and Selectors | Set-based label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: in,notin and exists (only the key identifier). For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17757"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l environment=production,tier=frontend", "completion": "This bash code 1.2.3 - Labels and Selectors | or using set-based requirements:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | or using set-based requirements:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17758"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production),tier in (frontend)'", "completion": "This bash code 1.2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. For instance, they can implement the OR operator on values:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | As already mentioned set-based requirements are more expressive. 
For instance, they can implement the OR operator on values:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17759"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment in (production, qa)'", "completion": "This bash code 1.2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | or restricting negative matching via notin operator:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17760"} {"prompt": "Explain this bash code:\n\nkubectl get pods -l 'environment,environment notin (frontend)'", "completion": "This bash code 1.2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17761"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n component: redis", "completion": " matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17762"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n component: redis\n matchExpressions:\n - { key: tier, operator: In, values: [cache] }\n - { key: environment, operator: NotIn, values: [dev] }", "completion": "This text code 1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17763"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: master", "language": "text", "type": "code-example", "context": "1.2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17764"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: master", "completion": "This text code 1.2.3 - Labels and Selectors | and", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17765"} {"prompt": "Complete this text code:\n\nlabels:\n app: guestbook", "completion": " tier: backend\n role: replica", "language": "text", "type": "code-example", "context": "1.2.3 - Labels 
and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17766"} {"prompt": "Explain this text code:\n\nlabels:\n app: guestbook\n tier: backend\n role: replica", "completion": "This text code 1.2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "language": "text", "type": "code-example_explanation", "context": "1.2.3 - Labels and Selectors | The labels allow for slicing and dicing the resources along any dimension specified by a label:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17767"} {"prompt": "Explain this bash code:\n\nkubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml\nkubectl get pods -Lapp -Ltier -Lrole", "completion": "This bash code 1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. For example, if you want to label all your NGINX Pods as fronten", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.3 - Labels and Selectors | Sometimes you may want to relabel existing pods and other resources before creating new resources. This can be done with kubectl label. 
For example, if you want to label all your NGINX Pods as fronten", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17768"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code 1.2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.4 - Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17769"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code 1.2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.4 - Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17770"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17771"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code 1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "1.2.4 - Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17772"} {"prompt": "Complete this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",", "completion": " \"key2\" : \"value2\"\n }\n}", "language": "text", "type": "code-example", "context": "1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17773"} {"prompt": "Explain this text code:\n\n\"metadata\": {\n \"annotations\": {\n \"key1\" : \"value1\",\n \"key2\" : \"value2\"\n }\n}", "completion": "This text code 1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "language": "text", "type": "code-example_explanation", "context": "1.2.5 - Annotations | Here are some examples of information that could be recorded in annotations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17774"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"", "completion": "spec:\n containers:\n - name: nginx\n 
image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17775"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: annotations-demo\n annotations:\n imageregistry: \"https://hub.docker.com/\"\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2.5 - Annotations | Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17776"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector status.phase=Running", "completion": "This bash code 1.2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. 
For example", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17777"} {"prompt": "Explain this bash code:\n\nkubectl get services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 1.2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17778"} {"prompt": "Explain this bash code:\n\nkubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always", "completion": "This bash code 1.2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17779"} {"prompt": "Explain this bash code:\n\nkubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default", "completion": "This bash code 1.2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. 
Finalizers alert controllers to clean up r", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2.6 - Field Selectors | Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers alert controllers to clean up r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17780"} {"prompt": "Complete this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:", "completion": " app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17781"} {"prompt": "Explain this yaml code:\n\n# This is an excerpt\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n app.kubernetes.io/managed-by: Helm", "completion": "This yaml code 1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. 
For example, WordPress can be installed more than once where different websites are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.9 - Recommended Labels | An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17782"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "deployment-definition", "context": "1.2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17783"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | The Service is used to expose the application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose the application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17784"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:", "completion": " labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. 
The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17785"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: myservice\n app.kubernetes.io/instance: myservice-abcxyz\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17786"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "deployment-definition", "context": "1.2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17787"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended 
Labels | The Service is used to expose WordPress:", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17788"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress", "completion": " app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17789"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: wordpress\n app.kubernetes.io/instance: wordpress-abcxyz\n app.kubernetes.io/version: \"4.9.4\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: server\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17790"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: 
Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17791"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.2.9 - Recommended Labels | The Service is used to expose MySQL as part of WordPress:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17792"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql", "completion": " app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "language": "yaml", "type": "service-definition", "context": "1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17793"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: mysql\n app.kubernetes.io/instance: mysql-abcxyz\n app.kubernetes.io/version: \"5.7.21\"\n app.kubernetes.io/managed-by: Helm\n 
app.kubernetes.io/component: database\n app.kubernetes.io/part-of: wordpress\n...", "completion": "This yaml code 1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "language": "yaml", "type": "service-definition_explanation", "context": "1.2.9 - Recommended Labels | With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17794"} {"prompt": "Complete this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },", "completion": " \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "language": "json", "type": "api-response", "context": "1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. 
The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17795"} {"prompt": "Explain this json code:\n\n{\n \"paths\": {\n ...,\n \"api/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF\"\n },\n \"apis/admissionregistration.k8s.io/v1\": {\n \"serverRelativeURL\": \"/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597\"\n },\n ....\n }\n}", "completion": "This json code 1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "language": "json", "type": "api-response_explanation", "context": "1.3 - The Kubernetes API | The relative URLs are pointing to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17796"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",", "completion": " \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. 
If the node is hea", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17797"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Node\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"10.240.79.157\",\n \"labels\": {\n \"name\": \"my-first-k8s-node\"\n }\n }\n}", "completion": "This json code 2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "language": "json", "type": "api-response_explanation", "context": "2.1 - Nodes | Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the metadata.name field of the Node. If the node is hea", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17798"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver", "completion": "This bash code 2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. New instanc", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Leases | The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a hostname that is unique within the cluster. 
New instanc", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17799"} {"prompt": "Explain this bash code:\n\nkubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml", "completion": "This bash code 2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17800"} {"prompt": "Complete this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1", "completion": " name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17801"} {"prompt": "Explain this yaml code:\n\napiVersion: coordination.k8s.io/v1\nkind: Lease\nmetadata:\n creationTimestamp: \"2023-07-02T13:16:48Z\"\n labels:\n apiserver.kubernetes.io/identity: kube-apiserver\n kubernetes.io/hostname: master-1\n name: apiserver-07a5ea9b9b072c4a5f3d1c3702\n namespace: kube-system\n resourceVersion: \"334899\"\n uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1\nspec:\n holderIdentity: 
apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05\n leaseDurationSeconds: 3600\n renewTime: \"2023-07-04T21:58:48.065888Z\"", "completion": "This yaml code 2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - Leases | Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17802"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n - nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"", "completion": " resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17803"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: cloud-controller-manager\nrules:\n- apiGroups:\n - \"\"\n resources:\n - events\n verbs:\n - create\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - '*'\n- apiGroups:\n - \"\"\n resources:\n 
- nodes/status\n verbs:\n - patch\n- apiGroups:\n - \"\"\n resources:\n - services\n verbs:\n - list\n - watch\n- apiGroups:\n - \"\"\n resources:\n - services/status\n verbs:\n - patch\n - update\n- apiGroups:\n - \"\"\n resources:\n - serviceaccounts\n verbs:\n - create\n- apiGroups:\n - \"\"\n resources:\n - persistentvolumes\n verbs:\n - get\n - list\n - update\n - watch", "completion": "This yaml code 2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Cloud Controller Manager | Cloud Controller Manager Administration has instructions on running and managing the cloud controller manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17804"} {"prompt": "Complete this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,", "completion": "# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "language": "text", "type": "code-example", "context": "2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer 
(the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17805"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--feature-gates=UnknownVersionInteroperabilityProxy=true \\\n# required command line arguments for this feature\n--peer-ca-file=<path to kube-apiserver CA cert>\n--proxy-client-cert-file=<path to aggregator proxy cert>,\n--proxy-client-key-file=<path to aggregator proxy key>,\n--requestheader-client-ca-file=<path to aggregator CA cert>,\n# requestheader-allowed-names can be set to blank to allow any Common Name\n--requestheader-allowed-names=<valid Common Names to verify proxy client cert against>,\n\n# optional flags for this feature\n--peer-advertise-ip=`IP of this kube-apiserver that should be used by peers to proxy requests`\n--peer-advertise-port=`port of this kube-apiserver that should be used by peers to proxy requests`\n\n# …and other flags as usual", "completion": "This text code 2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "language": "text", "type": "code-example_explanation", "context": "2.10 - Mixed Version Proxy | The source kube-apiserver reuses the existing APIserver client authentication flags --proxy-client-cert-file and --proxy-client-key-file to present its identity that will be verified by its peer (the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17806"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },", "completion": " \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "language": "json", "type": "api-response", "context": "3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. 
For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17807"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": { \"auth\": \"…\" },\n \"*.my-registry.example/images\": { \"auth\": \"…\" }\n }\n}", "completion": "This json code 3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "language": "json", "type": "api-response_explanation", "context": "3.1 - Images | Image pull operations pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17808"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },", "completion": " \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "3.1 - Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17809"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example/images\": {\n \"auth\": \"…\"\n },\n \"my-registry.example/images/subpath\": {\n \"auth\": \"…\"\n }\n }\n}", "completion": "This json code 3.1 - Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "language": "json", "type": "api-response_explanation", "context": "3.1 - 
Images | If now a container specifies an image my-registry.example/images/subpath/my-image to be pulled, then the kubelet will try to download it using both authentication sources if one of them fails.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17810"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\", "completion": " --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "language": "bash", "type": "kubectl-command", "context": "3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17811"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry <name> \\\n --docker-server=<docker-registry-server> \\\n --docker-username=<docker-user> \\\n --docker-password=<docker-password> \\\n --docker-email=<docker-email>", "completion": "This bash code 3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credential", "language": "bash", "type": "kubectl-command_explanation", "context": "3.1 - Images | If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. 
Create a Secret based on existing Docker credential", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17812"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo", "completion": " image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "language": "yaml", "type": "pod-definition", "context": "3.1 - Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17813"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: foo\n namespace: awesomeapps\nspec:\n containers:\n - name: foo\n image: janedoe/awesomeapp:v1\n imagePullSecrets:\n - name: myregistrykey\nEOF\n\ncat <<EOF >> ./kustomization.yaml\nresources:\n- pod.yaml\nEOF", "completion": "This yaml code 3.1 - Images | This needs to be done for each Pod that is using a private registry.", "language": "yaml", "type": "pod-definition_explanation", "context": "3.1 - Images | This needs to be done for each Pod that is using a private registry.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17814"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the service is running on>\nFOO_SERVICE_PORT=<the port the service is running on>", "completion": "This text code 3.2 - Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "language": "text", "type": "code-example_explanation", "context": "3.2 - Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_17815"} {"prompt": "Complete this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: myconfiguration", "language": "yaml", "type": "kubernetes-manifest", "context": "3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17816"} {"prompt": "Explain this yaml code:\n\n# RuntimeClass is defined in the node.k8s.io API group\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n # The name the RuntimeClass will be referenced by.\n # RuntimeClass is a non-namespaced resource.\n name: myclass \n# The name of the corresponding CRI configuration\nhandler: myconfiguration", "completion": "This yaml code 3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "3.3 - Runtime Class | The name of a RuntimeClass object must be a valid DNS subdomain name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17817"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "language": "yaml", "type": "pod-definition", "context": "3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. 
If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17818"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n runtimeClassName: myclass\n # ...", "completion": "This yaml code 3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "language": "yaml", "type": "pod-definition_explanation", "context": "3.3 - Runtime Class | This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17819"} {"prompt": "Complete this yaml code:\n\npods/simple-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17820"} {"prompt": "Explain this yaml code:\n\npods/simple-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.1 - Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17821"} 
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17822"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.1 - Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17823"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This bash code 4.1 - Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.1 - Pods | Pods are generally not created directly and are created using workload resources. 
See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17824"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17825"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "completion": "This yaml code 4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. 
If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17826"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:", "completion": " - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17827"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:\n - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code 4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17828"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:", "completion": " - name: init-once # This init container will only try once. If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17829"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:\n - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code 4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17830"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes", "completion": " image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17831"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "completion": "This yaml code 4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17832"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17833"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code 4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17834"} {"prompt": "Complete this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature 
ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17835"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code 4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17836"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": "pod-definition", "context": "4.1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17837"} {"prompt": "Explain this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: 
\"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "completion": "This yaml code 4.1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17838"} {"prompt": "Complete this text code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "text", "type": "code-example", "context": "4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17839"} {"prompt": "Explain this text code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This text code 4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "language": "text", "type": "code-example_explanation", "context": "4.1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17840"} {"prompt": "Explain this text code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This text code 4.1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17841"} {"prompt": "Explain this text code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'", "completion": "This text code 4.1.2 - Init Containers | Wait for some time before starting the app container with a command like", "language": "text", "type": "code-example_explanation", "context": "4.1.2 - Init Containers | Wait for some time before starting the app container with a command like", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17842"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "4.1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17843"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code 4.1.2 - Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17844"} {"prompt": "Explain this bash code:\n\nkubectl logs myapp-pod -c init-myservice # Inspect the first init container\nkubectl logs myapp-pod -c init-mydb # Inspect the second init container", "completion": "This bash code 4.1.2 - Init Containers | At 
this point, those init containers will be waiting to discover Services named mydb and myservice.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17845"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "4.1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17846"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "completion": "This yaml code 4.1.2 - Init Containers | To create the mydb and myservice services:", "language": "yaml", "type": "service-definition_explanation", "context": "4.1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17847"} {"prompt": "Complete this yaml code:\n\napplication/deployment-sidecar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:", "completion": " containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; 
done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17848"} {"prompt": "Explain this yaml code:\n\napplication/deployment-sidecar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17849"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17850"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17851"} {"prompt": "Complete this yaml code:\n\napplication/job/job-sidecar.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:", "completion": " - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17852"} {"prompt": "Explain this yaml code:\n\napplication/job/job-sidecar.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17853"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17854"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17855"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17856"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17857"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17858"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "4.1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17859"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17860"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code 4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. 
Run", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17861"} {"prompt": "Complete this yaml code:\n\ncontrollers/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17862"} {"prompt": "Explain this yaml code:\n\ncontrollers/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17863"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17864"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n 
name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17865"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This bash code 4.2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17866"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 4.2.1 - Deployments | or use the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | or use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17867"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 4.2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and 
nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17868"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17869"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17870"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17871"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17872"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17873"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17874"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17875"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17876"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17877"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This 
bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17878"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This bash code 4.2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17879"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17880"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code 4.2.1 - Deployments | apiVersion: apps/v1 kind: 
Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17881"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17882"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code 4.2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - 
Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17883"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17884"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code 4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your 
Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17885"} {"prompt": "Complete this yaml code:\n\ncontrollers/frontend.yaml\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17886"} {"prompt": "Explain this yaml code:\n\ncontrollers/frontend.yaml\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17887"} {"prompt": "Complete this 
yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17888"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17889"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": 
"4.2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17890"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17891"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code 4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "4.2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17892"} {"prompt": "Complete this yaml code:\n\npods/pod-rs.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17893"} {"prompt": "Explain this yaml code:\n\npods/pod-rs.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17894"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: 
hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17895"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "4.2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17896"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately 
terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17897"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | And then create the ReplicaSet however:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17898"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17899"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. 
When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17900"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17901"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17902"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17903"} {"prompt": "Complete this yaml code:\n\ncontrollers/hpa-rs.yaml\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17904"} {"prompt": "Explain this yaml code:\n\ncontrollers/hpa-rs.yaml\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n 
maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17905"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17906"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code 4.2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.2 - ReplicaSet | Saving this 
manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17907"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This bash code 4.2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17908"} {"prompt": "Explain this bash code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This bash code 4.2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. 
While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17909"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1", "completion": " minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "4.2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17910"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1\n minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: 
registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 4.2.3 - StatefulSets | In the above example:", "language": "yaml", "type": "service-definition_explanation", "context": "4.2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17911"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:", "completion": " name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... other spec fields ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17912"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... 
other spec fields ...", "completion": "This yaml code 4.2.3 - StatefulSets | You can revert to a previous configuration using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17913"} {"prompt": "Complete this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "text", "type": "code-example", "context": "4.2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17914"} {"prompt": "Explain this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This text code 4.2.3 - StatefulSets | This will:", "language": "text", "type": "code-example_explanation", "context": "4.2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17915"} {"prompt": "Complete this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "text", "type": "code-example", "context": "4.2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17916"} {"prompt": "Explain this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "completion": 
"This text code 4.2.3 - StatefulSets | Regularly check revisions with:", "language": "text", "type": "code-example_explanation", "context": "4.2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17917"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:", "completion": " persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17918"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:\n persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "completion": "This yaml code 4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. 
This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17919"} {"prompt": "Complete this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17920"} {"prompt": "Explain this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if 
your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17921"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # 
preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17922"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17923"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: 
metadata.name\n operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17924"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code 4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": "code-example_explanation", "context": "4.2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17925"} {"prompt": "Complete this yaml code:\n\ncontrollers/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17926"} {"prompt": "Explain this yaml code:\n\ncontrollers/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 4.2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | You can run the example with this 
command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17927"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17928"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 4.2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17929"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This bash code 4.2.5 - Jobs | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17930"} {"prompt": "Complete this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart 
Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "text", "type": "code-example", "context": "4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17931"} {"prompt": "Explain this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: 
pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This text code 4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17932"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "yaml", "type": 
"kubernetes-manifest", "context": "4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17933"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This yaml code 4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17934"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi 
--output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This text code 4.2.5 - Jobs | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17935"} {"prompt": "Complete this yaml code:\n\n/controllers/job-backoff-limit-per-index-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:", "completion": " restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17936"} {"prompt": "Explain this yaml code:\n\n/controllers/job-backoff-limit-per-index-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17937"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17938"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17939"} {"prompt": "Explain this bash code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This bash code 4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17940"} {"prompt": "Complete this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "text", "type": "code-example", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17941"} {"prompt": "Explain this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This text code 4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "text", "type": "code-example_explanation", "context": "4.2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17942"} {"prompt": "Complete this yaml code:\n\n/controllers/job-pod-failure-policy-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5", "completion": " command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17943"} {"prompt": "Explain this yaml code:\n\n/controllers/job-pod-failure-policy-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17944"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17945"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17946"} {"prompt": "Complete this yaml code:\n\n/controllers/job-success-policy.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:", "completion": " spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17947"} {"prompt": "Explain this yaml code:\n\n/controllers/job-success-policy.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17948"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17949"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17950"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17951"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17952"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17953"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17954"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17955"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: 
Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code 4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17956"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This bash code 4.2.5 - Jobs | Resume a suspended Job:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17957"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This bash code 4.2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17958"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long 
the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17959"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code 4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17960"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17961"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17962"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. 
Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17963"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17964"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17965"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code 4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17966"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17967"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code 4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17968"} {"prompt": "Complete this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n 
containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17969"} {"prompt": "Explain this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17970"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17971"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: 
CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17972"} {"prompt": "Complete this yaml code:\n\ncontrollers/replication.yaml\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17973"} {"prompt": "Explain this yaml code:\n\ncontrollers/replication.yaml\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running 
this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17974"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17975"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17976"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This bash code 4.2.8 - ReplicationController | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17977"} {"prompt": "Explain this text code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This 
text code 4.2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "text", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17978"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This text code 4.2.8 - ReplicationController | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "4.2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17979"} {"prompt": "Complete this yaml code:\n\napplication/nginx-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17980"} {"prompt": "Explain this yaml code:\n\napplication/nginx-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: 
apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17981"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17982"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4.4 - Managing Workloads 
| Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4.4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17983"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17984"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4.4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17985"} {"prompt": "Explain this bash code:\n\nkubectl 
apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4.4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17986"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17987"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This bash code 4.4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17988"} {"prompt": "Explain this bash code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep 
service/ | xargs -i kubectl get '{}'", "completion": "This bash code 4.4 - Managing Workloads | The output might be similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17989"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This bash code 4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17990"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This bash code 4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17991"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This bash code 4.4 - Managing Workloads | Ensure that there is 1 replica:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | Ensure that there is 1 replica:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17992"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas 1 deployments/my-nginx", "completion": "This bash code 4.4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17993"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/my-nginx --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This bash code 4.4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17994"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This bash code 4.4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are bein", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. 
It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17995"} {"prompt": "Complete this bash code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "bash", "type": "kubectl-command", "context": "4.4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17996"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This bash code 4.4 - Managing Workloads | or", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17997"} {"prompt": "Complete this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "bash", "type": "kubectl-command", "context": "4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_17998"} {"prompt": "Explain this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This bash code 4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_17999"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This bash code 4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18000"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18001"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This yaml code 4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4.4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18002"} {"prompt": "Explain this text code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This text code 4.4 - Managing Workloads | 
Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "4.4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18003"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4.4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18004"} {"prompt": "Complete this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n", "completion": "kubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "bash", "type": "kubectl-command", "context": "4.4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18005"} {"prompt": "Explain this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This bash code 4.4 - Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18006"} {"prompt": "Explain this bash code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This bash code 4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "language": "bash", "type": "kubectl-command_explanation", "context": "4.4 - Managing Workloads | The Kubernetes network model is built out of several pieces:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18007"} {"prompt": "Complete this yaml code:\n\nservice/simple-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18008"} {"prompt": "Explain this yaml code:\n\nservice/simple-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18009"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18010"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Applying this manifest creates a new Service named \"my-service\" with the default ClusterIP service type. 
The Service targets TCP port 9376 on any Pod with the app.kubernetes.io/name: MyApp label.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18011"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc", "completion": "\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "language": "yaml", "type": "pod-definition", "context": "5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18012"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:stable\n ports:\n - containerPort: 80\n name: http-web-svc\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 80\n targetPort: http-web-svc", "completion": "This yaml code 5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for d", "language": "yaml", "type": "pod-definition_explanation", "context": "5.1 - Service | This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. 
This offers a lot of flexibility for d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18013"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18014"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376", "completion": "This yaml code 5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an En", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Because this Service has no selector, the corresponding EndpointSlice objects are not created automatically. 
You can map the Service to the network address and port where it's running, by adding an En", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18015"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4", "completion": "ports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18016"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: my-service-1 # by convention, use the name of the Service\n # as a prefix for the name of the EndpointSlice\n labels:\n # You should set the \"kubernetes.io/service-name\" label.\n # Set its value to match the name of the Service\n kubernetes.io/service-name: my-service\naddressType: IPv4\nports:\n - name: http # should match with the name of the service port defined above\n appProtocol: http\n protocol: TCP\n port: 9376\nendpoints:\n - addresses:\n - \"10.4.5.6\"\n - addresses:\n - \"10.1.2.3\"", "completion": "This yaml code 5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. 
Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.1 - Service | When you create an EndpointSlice object for a Service, you can use any name for the EndpointSlice. Each EndpointSlice in a namespace must have a unique name. You link an EndpointSlice to a Service by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18017"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18018"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 9376\n - name: https\n protocol: TCP\n port: 443\n targetPort: 9377", "completion": "This yaml code 5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. Port names must also start and end with an alphanumeric character.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | As with Kubernetes names in general, names for ports must only contain lowercase alphanumeric characters and -. 
Port names must also start and end with an alphanumeric character.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18019"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18020"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n type: NodePort\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - port: 80\n # By default and for convenience, the `targetPort` is set to\n # the same value as the `port` field.\n targetPort: 80\n # Optional field\n # By default and for convenience, the Kubernetes control plane\n # will allocate a port from a range (default: 30000-32767)\n nodePort: 30007", "completion": "This yaml code 5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. When a user wants to create a NodePort service that uses a specific port, t", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | The policy for assigning ports to NodePort services applies to both the auto-assignment and the manual assignment scenarios. 
When a user wants to create a NodePort service that uses a specific port, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18021"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:", "completion": " - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18022"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n clusterIP: 10.0.171.239\n type: LoadBalancer\nstatus:\n loadBalancer:\n ingress:\n - ip: 192.0.2.127", "completion": "This yaml code 5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Traffic from the external load balancer is directed at the backend Pods. 
The cloud provider decides how it is load balanced.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18023"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18024"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n networking.gke.io/load-balancer-type: \"Internal\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18025"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18026"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: 
name: my-service annotations: service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18027"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18028"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal: \"true\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18029"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18030"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: \"private\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: 
service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18031"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18032"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18033"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18034"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: \"true\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18035"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "completion": "This text code 5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18036"} {"prompt": "Explain this text code:\n\nmetadata:\n annotations:\n service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "completion": "This text code 5.1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: \"intranet\"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18037"} {"prompt": "Complete this text code:\n\nmetadata:\n name: my-service", "completion": " annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18038"} {"prompt": "Explain this text code:\n\nmetadata:\n name: my-service\n annotations:\n service.beta.kubernetes.io/oci-load-balancer-internal: true", "completion": "This text code 5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | metadata: name: my-service annotations: service.beta.kubernetes.io/oci-load-balancer-internal: true", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18039"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service", "completion": " namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18040"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: prod\nspec:\n type: ExternalName\n externalName: my.database.example.com", "completion": "This yaml code 5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "language": "yaml", "type": "service-definition_explanation", 
"context": "5.1 - Service | A Service of type: ExternalName accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, not as an IP address (the internet does not however allow such names in DNS).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18041"} {"prompt": "Complete this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379", "completion": "REDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "language": "text", "type": "code-example", "context": "5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18042"} {"prompt": "Explain this text code:\n\nREDIS_PRIMARY_SERVICE_HOST=10.0.0.11\nREDIS_PRIMARY_SERVICE_PORT=6379\nREDIS_PRIMARY_PORT=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP=tcp://10.0.0.11:6379\nREDIS_PRIMARY_PORT_6379_TCP_PROTO=tcp\nREDIS_PRIMARY_PORT_6379_TCP_PORT=6379\nREDIS_PRIMARY_PORT_6379_TCP_ADDR=10.0.0.11", "completion": "This text code 5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "language": "text", "type": "code-example_explanation", "context": "5.1 - Service | When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the clie", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18043"} 
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp", "completion": " ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "language": "yaml", "type": "service-definition", "context": "5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18044"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - name: http\n protocol: TCP\n port: 80\n targetPort: 49152\n externalIPs:\n - 198.51.100.32", "completion": "This yaml code 5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "language": "yaml", "type": "service-definition_explanation", "context": "5.1 - Service | Service is a top-level resource in the Kubernetes REST API. You can find more details about the Service API object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18045"} {"prompt": "Complete this yaml code:\n\nservice/networking/minimal-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example", "completion": " rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18046"} {"prompt": "Explain this yaml code:\n\nservice/networking/minimal-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18047"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:", "completion": " - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. 
For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18048"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: minimal-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx-example\n rules:\n - http:\n paths:\n - path: /testpath\n pathType: Prefix\n backend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress needs apiVersion, kind, metadata and spec fields. The name of an Ingress object must be a valid DNS subdomain name. For general information about working with config files, see deploying ap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18049"} {"prompt": "Complete this yaml code:\n\nservice/networking/ingress-resource-backend.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket", "completion": " name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18050"} {"prompt": "Explain this yaml 
code:\n\nservice/networking/ingress-resource-backend.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18051"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets", "completion": " rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18052"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-resource-backend\nspec:\n defaultBackend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: static-assets\n rules:\n - http:\n paths:\n - path: /icons\n pathType: ImplementationSpecific\n backend:\n resource:\n apiGroup: k8s.example.com\n kind: StorageBucket\n name: icon-assets", "completion": "This yaml code 5.2 - Ingress | After creating the 
Ingress above, you can view it with the following command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | After creating the Ingress above, you can view it with the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18053"} {"prompt": "Complete this yaml code:\n\nservice/networking/ingress-wildcard-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:", "completion": " service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18054"} {"prompt": "Explain this yaml code:\n\nservice/networking/ingress-wildcard-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18055"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18056"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-wildcard-host\nspec:\n rules:\n - host: \"foo.bar.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/bar\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: \"*.foo.com\"\n http:\n paths:\n - pathType: Prefix\n path: \"/foo\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. 
Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18057"} {"prompt": "Complete this yaml code:\n\nservice/networking/external-lb.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb", "completion": "spec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18058"} {"prompt": "Explain this yaml code:\n\nservice/networking/external-lb.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18059"} {"prompt": "Complete this yaml 
code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:", "completion": " controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18060"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb\nspec:\n controller: example.com/ingress-controller\n parameters:\n apiGroup: k8s.example.com\n kind: IngressParameters\n name: external-lb", "completion": "This yaml code 5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | The .spec.parameters field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18061"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". 
This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18062"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-1\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in a\n # ClusterIngressParameter (API group k8s.example.net) named\n # \"external-config-1\". This definition tells Kubernetes to\n # look for a cluster-scoped parameter resource.\n scope: Cluster\n apiGroup: k8s.example.net\n kind: ClusterIngressParameter\n name: external-config-1", "completion": "This yaml code 5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. You must also s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | FEATURE STATE: Kubernetes v1.23 [stable]If you set the .spec.parameters field and set .spec.parameters.scope to Namespace, then the IngressClass refers to a namespaced-scoped resource. 
You must also s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18063"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:", "completion": " # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18064"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n name: external-lb-2\nspec:\n controller: example.com/ingress-controller\n parameters:\n # The parameters for this IngressClass are specified in an\n # IngressParameter (API group k8s.example.com) named \"external-config\",\n # that's in the \"external-configuration\" namespace.\n scope: Namespace\n apiGroup: k8s.example.com\n kind: IngressParameter\n namespace: external-configuration\n name: external-config", "completion": "This yaml code 5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. 
This annotation", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18065"} {"prompt": "Complete this yaml code:\n\nservice/networking/default-ingressclass.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:", "completion": " app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18066"} {"prompt": "Explain this yaml code:\n\nservice/networking/default-ingressclass.yaml\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18067"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller", "completion": " name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18068"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n labels:\n app.kubernetes.io/component: controller\n name: nginx-example\n annotations:\n ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n controller: k8s.io/ingress-nginx", "completion": "This yaml code 5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an Ingress by specifying a default backend with no rules.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). 
You can also do this with an Ingress by specifying a default backend with no rules.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18069"} {"prompt": "Complete this yaml code:\n\nservice/networking/test-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress", "completion": "spec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18070"} {"prompt": "Explain this yaml code:\n\nservice/networking/test-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18071"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:", "completion": " defaultBackend:\n service:\n name: test\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18072"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: test-ingress\nspec:\n 
defaultBackend:\n service:\n name: test\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create it using kubectl apply -f you should be able to view the state of the Ingress you added:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18073"} {"prompt": "Complete this yaml code:\n\nservice/networking/simple-fanout-example.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18074"} {"prompt": "Explain this yaml code:\n\nservice/networking/simple-fanout-example.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18075"} {"prompt": "Complete this yaml code:\n\napiVersion: 
networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix", "completion": " backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18076"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: simple-fanout-example\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /foo\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 4200\n - path: /bar\n pathType: Prefix\n backend:\n service:\n name: service2\n port:\n number: 8080", "completion": "This yaml code 5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | When you create the Ingress with kubectl apply -f:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18077"} {"prompt": "Complete this yaml code:\n\nservice/networking/name-virtual-host-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:", "completion": " service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be 
matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18078"} {"prompt": "Explain this yaml code:\n\nservice/networking/name-virtual-host-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18079"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:", "completion": " name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18080"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: bar.foo.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being requi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18081"} {"prompt": "Complete this yaml code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18082"} {"prompt": "Explain this yaml code:\n\nservice/networking/name-virtual-host-ingress-no-third-host.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18083"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com", "completion": " http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18084"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: name-virtual-host-ingress-no-third-host\nspec:\n rules:\n - host: first.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service1\n port:\n number: 80\n - host: second.bar.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service2\n port:\n number: 80\n - http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: service3\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. 
The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate. The Ingress resource only supports a single TLS port, 443, and assumes TLS termination at the ingress", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18085"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls", "completion": " namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18086"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: testsecret-tls\n namespace: default\ndata:\n tls.crt: base64 encoded cert\n tls.key: base64 encoded key\ntype: kubernetes.io/tls", "completion": "This yaml code 5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a ce", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. 
You need to make sure the TLS secret you created came from a ce", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18087"} {"prompt": "Complete this yaml code:\n\nservice/networking/tls-example-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls", "completion": " rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18088"} {"prompt": "Explain this yaml code:\n\nservice/networking/tls-example-ingress.yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18089"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:", "completion": " - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18090"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: tls-example-ingress\nspec:\n tls:\n - hosts:\n - https-example.foo.com\n secretName: testsecret-tls\n rules:\n - host: https-example.foo.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: service1\n port:\n number: 80", "completion": "This yaml code 5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced lo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.2 - Ingress | An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. 
More advanced lo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18091"} {"prompt": "Complete this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo", "completion": " pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "language": "text", "type": "code-example", "context": "5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18092"} {"prompt": "Explain this text code:\n\nspec:\n rules:\n - host: foo.bar.com\n http:\n paths:\n - backend:\n service:\n name: service1\n port:\n number: 80\n path: /foo\n pathType: Prefix\n - host: bar.baz.com\n http:\n paths:\n - backend:\n service:\n name: service2\n port:\n number: 80\n path: /foo\n pathType: Prefix\n..", "completion": "This text code 5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "language": "text", "type": "code-example_explanation", "context": "5.2 - Ingress | After you save your changes, kubectl updates the resource in the API server, which tells the Ingress controller to reconfigure the load balancer.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18093"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:", "completion": " name: example-class\nspec:\n controllerName: example.com/gateway-controller", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage 
GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18094"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GatewayClass\nmetadata:\n name: example-class\nspec:\n controllerName: example.com/gateway-controller", "completion": "This yaml code 5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name example.com/gateway-controller. Gateways of this class will be managed by", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18095"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:", "completion": " gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. 
Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18096"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: example-gateway\nspec:\n gatewayClassName: example-class\n listeners:\n - name: http\n protocol: HTTP\n port: 80", "completion": "This yaml code 5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the addresses field is unspecified, an address or hostname is assigned to the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18097"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18098"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: example-httproute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - 
\"www.example.com\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /login\n backendRefs:\n - name: example-svc\n port: 8080", "completion": "This yaml code 5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, HTTP traffic from Gateway example-gateway with the Host: header set to www.example.com and the request path specified as /login will be routed to Service example-svc on port 8080.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18099"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:", "completion": " - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18100"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - backendRefs:\n - name: example-svc\n port: 50051", "completion": "This yaml code 5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this example, gRPC traffic from Gateway example-gateway with the host set to svc.example.com will be directed to the service example-svc on port 50051 from the same namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18101"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:", "completion": " - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "language": "yaml", "type": "kubernetes-manifest", "context": "5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified,only requests", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18102"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n name: example-grpcroute\nspec:\n parentRefs:\n - name: example-gateway\n hostnames:\n - \"svc.example.com\"\n rules:\n - matches:\n - method:\n service: com.example\n method: Login\n backendRefs:\n - name: foo-svc\n port: 50051", "completion": "This yaml code 5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. Since there is only one match specified,only requests", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.4 - Gateway API | In this case, the GRPCRoute will match any traffic for svc.example.com and apply its routing rules to forward the traffic to the correct backend. 
Since there is only one match specified,only requests", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18103"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http", "completion": " protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "language": "yaml", "type": "kubernetes-manifest", "context": "5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18104"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-abc\n labels:\n kubernetes.io/service-name: example\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n nodeName: node-1\n zone: us-west2-a", "completion": "This yaml code 5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.5 - EndpointSlices | By default, the control plane creates and manages EndpointSlices to have no more than 100 endpoints each. 
You can configure this with the --max-endpoints-per-slice kube-controller-manager flag, up to", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18105"} {"prompt": "Complete this yaml code:\n\nservice/networking/networkpolicy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16", "completion": " except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18106"} {"prompt": "Explain this yaml code:\n\nservice/networking/networkpolicy.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18107"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:", "completion": " - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18108"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: test-network-policy\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: 172.17.0.0/16\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978", "completion": "This yaml code 5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure a Pod to Use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18109"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. 
But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18110"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | This policy contains a single from element allowing connections from Pods with the label role=client in namespaces with the label user=alice. But the following policy is different:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18111"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:", "completion": " user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18112"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n user: alice\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | It contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | It contains two 
elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18113"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18114"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18115"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18116"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-ingress\nspec:\n podSelector: {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. 
This policy does not affect isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18117"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-allow-all-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18118"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-allow-all-ingress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18119"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress", "completion": "spec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18120"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-ingress\nspec:\n podSelector: {}\n ingress:\n - {}\n policyTypes:\n - Ingress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. 
This policy has no effect on isolation for egress from any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18121"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18122"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18123"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18124"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-egress\nspec:\n podSelector: {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. 
This policy does not change the ingress isolation behavior of any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18125"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-allow-all-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18126"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-allow-all-egress.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18127"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress", "completion": "spec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18128"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: allow-all-egress\nspec:\n podSelector: {}\n egress:\n - {}\n policyTypes:\n - Egress", "completion": "This yaml code 5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. 
This policy has no effect on isolation for ingress to any pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18129"} {"prompt": "Complete this yaml code:\n\nservice/networking/network-policy-default-deny-all.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:", "completion": " name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18130"} {"prompt": "Explain this yaml code:\n\nservice/networking/network-policy-default-deny-all.yaml\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18131"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all", "completion": "spec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_completion_18132"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: default-deny-all\nspec:\n podSelector: {}\n policyTypes:\n - Ingress\n - Egress", "completion": "This yaml code 5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18133"} {"prompt": "Complete this yaml code:\n\nservice/networking/networkpolicy-multiport-egress.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18134"} {"prompt": "Explain this yaml code:\n\nservice/networking/networkpolicy-multiport-egress.yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 5.6 - Network Policies | The above rule 
allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18135"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18136"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: multi-port-egress\n namespace: default\nspec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Egress\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 32000\n endPort: 32768", "completion": "This yaml code 5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": 
"5.6 - Network Policies | The above rule allows any Pod with label role=db on the namespace default to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between the range 32000 and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18137"} {"prompt": "Explain this bash code:\n\nkubectl label namespace frontend namespace=frontend\nkubectl label namespace backend namespace=backend", "completion": "This bash code 5.6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "language": "bash", "type": "kubectl-command_explanation", "context": "5.6 - Network Policies | Add the labels under namespaceSelector in your NetworkPolicy document. For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18138"} {"prompt": "Complete this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp", "completion": " policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18139"} {"prompt": "Explain this yaml code:\n\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n name: egress-namespaces\nspec:\n podSelector:\n matchLabels:\n app: myapp\n policyTypes:\n - Egress\n egress:\n - to:\n - namespaceSelector:\n matchExpressions:\n - key: namespace\n operator: In\n values: [\"frontend\", \"backend\"]", "completion": "This yaml code 5.6 - Network Policies | The Kubernetes control 
plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.6 - Network Policies | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces, the value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18140"} {"prompt": "Complete this text code:\n\n...\n spec:\n podSelector:", "completion": " matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18141"} {"prompt": "Explain this text code:\n\n...\n spec:\n podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | a hostNetwork pod is selected by a podSelector or namespaceSelector in an ingress or egress rule.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18142"} {"prompt": "Complete this text code:\n\n...\n ingress:\n - from:", "completion": " - podSelector:\n matchLabels:\n role: client\n ...", "language": "text", "type": "code-example", "context": "5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. 
For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18143"} {"prompt": "Explain this text code:\n\n...\n ingress:\n - from:\n - podSelector:\n matchLabels:\n role: client\n ...", "completion": "This text code 5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "language": "text", "type": "code-example_explanation", "context": "5.6 - Network Policies | At the same time, since hostNetwork pods have the same IP addresses as the nodes they reside on, their connections will be treated as node connections. For example, you can allow traffic from a hostNe", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18144"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18145"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18146"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28", "completion": " command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18147"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code 5.7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18148"} {"prompt": "Complete this yaml code:\n\nservice/networking/custom-dns.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx", "completion": " dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18149"} {"prompt": "Explain this yaml code:\n\nservice/networking/custom-dns.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": 
"This yaml code 5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18150"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"", "completion": " dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18151"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code 5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "5.7 - DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18152"} {"prompt": "Explain this bash code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This bash code 5.7 - DNS for Services and Pods | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "5.7 - DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18153"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18154"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18155"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18156"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the ser", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18157"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18158"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.clusterIPs", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18159"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.clusterIPs", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18160"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. When Kubernetes assigns an IPv6 and IPv4 address in .spec.clusterIPs", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | This Service specification explicitly defines IPv6 and IPv4 in .spec.ipFamilies as well as defining PreferDualStack in .spec.ipFamilyPolicy. 
When Kubernetes assigns an IPv6 and IPv4 address in .spec.c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18161"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:", "completion": " ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18162"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-preferred-ipfamilies-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18163"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18164"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n - IPv4\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. (Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. 
(Upgrading an existing cluster to 1.21 or beyond will enable dual-stack.)", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18165"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18166"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18167"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18168"} {"prompt": "Explain this yaml 
code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing service.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18169"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:", "completion": " - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18170"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 10.0.197.123\n clusterIPs:\n - 10.0.197.123\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured 
by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | When dual-stack is enabled on a cluster, existing headless Services with selectors are configured by the control plane to set .spec.ipFamilyPolicy to SingleStack and set .spec.ipFamilies to the addres", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18171"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18172"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18173"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", 
"completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18174"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | You can validate this behavior by using kubectl to inspect an existing headless service with selectors.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18175"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:", "completion": " - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "language": "yaml", "type": "service-definition", "context": "5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18176"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: None\n clusterIPs:\n - None\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - 
port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp", "completion": "This yaml code 5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "language": "yaml", "type": "service-definition_explanation", "context": "5.8 - IPv4/IPv6 dual-stack | Services can be changed from single-stack to dual-stack and from dual-stack to single-stack.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18177"} {"prompt": "Complete this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP", "completion": " port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "language": "yaml", "type": "kubernetes-manifest", "context": "5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18178"} {"prompt": "Explain this yaml code:\n\napiVersion: discovery.k8s.io/v1\nkind: EndpointSlice\nmetadata:\n name: example-hints\n labels:\n kubernetes.io/service-name: example-svc\naddressType: IPv4\nports:\n - name: http\n protocol: TCP\n port: 80\nendpoints:\n - addresses:\n - \"10.1.2.3\"\n conditions:\n ready: true\n hostname: pod-1\n zone: zone-a\n hints:\n forZones:\n - name: \"zone-a\"", "completion": "This yaml code 5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. 
In most cases, this means that the kube-proxy is able to route traffic to endpoints", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "5.9 - Topology Aware Routing | The kube-proxy component filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that the kube-proxy is able to route traffic to endpoints", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18179"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10", "completion": " ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "language": "yaml", "type": "service-definition", "context": "5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18180"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system\nspec:\n clusterIP: 10.96.0.10\n ports:\n - name: dns\n port: 53\n protocol: UDP\n targetPort: 53\n - name: dns-tcp\n port: 53\n protocol: TCP\n targetPort: 53\n selector:\n k8s-app: kube-dns\n type: ClusterIP", "completion": "This yaml code 5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. 
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "language": "yaml", "type": "service-definition_explanation", "context": "5.11 - Service ClusterIP allocation | But, as it was explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18181"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:", "completion": " app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "language": "yaml", "type": "service-definition", "context": "5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18182"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n internalTrafficPolicy: Local", "completion": "This yaml code 5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or", "language": "yaml", "type": "service-definition_explanation", "context": "5.12 - Service Internal Traffic Policy | The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. 
When it's Cluster (the default), or", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18183"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']", "completion": " volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18184"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-pod\nspec:\n containers:\n - name: test\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"The app is running!\" && tail -f /dev/null']\n volumeMounts:\n - name: config-vol\n mountPath: /etc/config\n volumes:\n - name: config-vol\n configMap:\n name: log-config\n items:\n - key: log_level\n path: log_level.conf", "completion": "This yaml code 6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. Note that this path is derived from the vol", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.conf. 
Note that this path is derived from the vol", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18185"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver", "completion": " name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18186"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18187"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18188"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /cache\n name: cache-volume\n volumes:\n - name: cache-volume\n emptyDir:\n sizeLimit: 500Mi\n medium: Memory", "completion": "This yaml code 6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An fc volume type allows an existing fibre channel block storage volume to be mounted in a Pod. 
You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18189"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx", "completion": " volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18190"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: server\nspec:\n containers:\n - image: nginx\n name: nginx\n volumeMounts:\n - mountPath: /mypath\n name: git-volume\n volumes:\n - name: git-volume\n gitRepo:\n repository: \"git@somewhere:me/my-git-repository.git\"\n revision: \"22f1d8406d464b0c0874075539c1f2e96c253775\"", "completion": "This yaml code 6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | Kubernetes 1.34 does not include a glusterfs volume type.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18191"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux", "completion": " containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n 
volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18192"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts /data/foo on the host as /foo inside the\n# single container that runs within the hostpath-example-linux Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-linux\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: example-container\n image: registry.k8s.io/test-webserver\n volumeMounts:\n - mountPath: /foo\n name: example-volume\n readOnly: true\n volumes:\n - name: example-volume\n # mount /data/foo, but only if that directory already exists\n hostPath:\n path: /data/foo # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | --- # This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. 
apiVersion", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18193"} {"prompt": "Complete this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows", "completion": " containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18194"} {"prompt": "Explain this yaml code:\n\n---\n# This manifest mounts C:\\Data\\foo on the host as C:\\foo, inside the\n# single container that runs within the hostpath-example-windows Pod.\n#\n# The mount into the container is read-only.\napiVersion: v1\nkind: Pod\nmetadata:\n name: hostpath-example-windows\nspec:\n os: { name: windows }\n nodeSelector:\n kubernetes.io/os: windows\n containers:\n - name: example-container\n image: microsoft/windowsservercore:1709\n volumeMounts:\n - name: example-volume\n mountPath: \"C:\\\\foo\"\n readOnly: true\n volumes:\n # mount C:\\Data\\foo from the host, but only if that directory already exists\n - name: example-volume\n hostPath:\n path: \"C:\\\\Data\\\\foo\" # directory location on host\n type: Directory # this field is optional", "completion": "This yaml code 6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The following manifest defines a Pod that mounts /var/local/aaa inside the single container in the Pod. 
If the node does not already have a path /var/local/aaa, the kubelet creates it as a directory a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18195"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa", "completion": " name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18196"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-webserver\nspec:\n os: { name: linux }\n nodeSelector:\n kubernetes.io/os: linux\n containers:\n - name: test-webserver\n image: registry.k8s.io/test-webserver:latest\n volumeMounts:\n - mountPath: /var/local/aaa\n name: mydir\n - mountPath: /var/local/aaa/1.txt\n name: myfile\n volumes:\n - name: mydir\n hostPath:\n # Ensure the file directory is created.\n path: /var/local/aaa\n type: DirectoryOrCreate\n - name: myfile\n hostPath:\n path: /var/local/aaa/1.txt\n type: FileOrCreate", "completion": "This yaml code 6.1 - Volumes | An image volume source represents an OCI object (a container image or artifact) which is available on the kubelet's host machine.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | An image volume source represents an OCI object (a container 
image or artifact) which is available on the kubelet's host machine.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18197"} {"prompt": "Complete this yaml code:\n\npods/image-volumes.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18198"} {"prompt": "Explain this yaml code:\n\npods/image-volumes.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code 6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18199"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]", "completion": " image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "language": "yaml", "type": "pod-definition", 
"context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18200"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: image-volume\nspec:\n containers:\n - name: shell\n command: [\"sleep\", \"infinity\"]\n image: debian\n volumeMounts:\n - name: volume\n mountPath: /volume\n volumes:\n - name: volume\n image:\n reference: quay.io/crio/artifact:v2\n pullPolicy: IfNotPresent", "completion": "This yaml code 6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The volume is resolved at pod startup depending on which pullPolicy value is provided:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18201"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete", "completion": " storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "language": "yaml", "type": "kubernetes-manifest", "context": "6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18202"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: example-pv\nspec:\n capacity:\n storage: 100Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Delete\n storageClassName: local-storage\n local:\n path: /mnt/disks/ssd1\n nodeAffinity:\n required:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/hostname\n operator: In\n values:\n - example-node", "completion": "This yaml code 6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.1 - Volumes | You must set a PersistentVolume nodeAffinity when using local volumes. 
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18203"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18204"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pd\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: my-nfs-server.example.com\n path: /my-nfs-volume\n readOnly: true", "completion": "This yaml code 6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | You must have your own NFS server running with the share exported before you can use it.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18205"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container", "completion": " volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: 
\"<fs-type>\"", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18206"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-portworx-volume-pod\nspec:\n containers:\n - image: registry.k8s.io/test-webserver\n name: test-container\n volumeMounts:\n - mountPath: /mnt\n name: pxvol\n volumes:\n - name: pxvol\n # This Portworx volume must already exist.\n portworxVolume:\n volumeID: \"pxvol\"\n fsType: \"<fs-type>\"", "completion": "This yaml code 6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | In Kubernetes 1.34, all operations for the in-tree Portworx volumes are redirected to the pxd.portworx.com Container Storage Interface (CSI) Driver by default.Portworx CSI Driver must be installed on", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18207"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:", "completion": " - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | Use the 
subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18208"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-lamp-site\nspec:\n containers:\n - name: mysql\n image: mysql\n env:\n - name: MYSQL_ROOT_PASSWORD\n value: \"rootpasswd\"\n volumeMounts:\n - mountPath: /var/lib/mysql\n name: site-data\n subPath: mysql\n - name: php\n image: php:7.0-apache\n volumeMounts:\n - mountPath: /var/www/html\n name: site-data\n subPath: html\n volumes:\n - name: site-data\n persistentVolumeClaim:\n claimName: my-lamp-site-data", "completion": "This yaml code 6.1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | Use the subPathExpr field to construct subPath directory names from downward API environment variables. 
The subPath and subPathExpr properties are mutually exclusive.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18209"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1", "completion": " fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18210"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\nspec:\n containers:\n - name: container1\n env:\n - name: POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt\" ]\n volumeMounts:\n - name: workdir1\n mountPath: /logs\n # The variable expansion uses round brackets (not curly brackets).\n subPathExpr: $(POD_NAME)\n restartPolicy: Never\n volumes:\n - name: workdir1\n hostPath:\n path: /var/log/pods", "completion": "This yaml code 6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). 
There is no limit on how much sp", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | The storage medium (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much sp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18211"} {"prompt": "Complete this yaml code:\n\nstorage/rro.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox", "completion": " args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18212"} {"prompt": "Explain this yaml code:\n\nstorage/rro.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 6.1 - 
Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18213"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]", "completion": " volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is writable\n - name: mnt\n mountPath: /mnt-rw", "language": "yaml", "type": "pod-definition", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18214"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: rro\nspec:\n volumes:\n - name: mnt\n hostPath:\n # tmpfs is mounted on /mnt/tmpfs\n path: /mnt\n containers:\n - name: busybox\n image: busybox\n args: [\"sleep\", \"infinity\"]\n volumeMounts:\n # /mnt-rro/tmpfs is not writable\n - name: mnt\n mountPath: /mnt-rro\n readOnly: true\n mountPropagation: None\n recursiveReadOnly: Enabled\n # /mnt-ro/tmpfs is writable\n - name: mnt\n mountPath: /mnt-ro\n readOnly: true\n # /mnt-rw/tmpfs is 
writable\n - name: mnt\n mountPath: /mnt-rw", "completion": "This yaml code 6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.1 - Volumes | When this property is recognized by kubelet and kube-apiserver, the .status.containerStatuses[].volumeMounts[].recursiveReadOnly field is set to either Enabled or Disabled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18215"} {"prompt": "Complete this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating", "completion": "Volume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "language": "bash", "type": "kubectl-command", "context": "6.2 - Persistent Volumes | You can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18216"} {"prompt": "Explain this bash code:\n\nkubectl describe pvc hostpath\nName: hostpath\nNamespace: default\nStorageClass: example-hostpath\nStatus: Terminating\nVolume:\nLabels: <none>\nAnnotations: volume.beta.kubernetes.io/storage-class=example-hostpath\n volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath\nFinalizers: [kubernetes.io/pvc-protection]\n...", "completion": "This bash code 6.2 - Persistent Volumes | You can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection:", "language": "bash", "type": "kubectl-command_explanation", "context": "6.2 - Persistent 
Volumes | You can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18217"} {"prompt": "Complete this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:", "completion": "Reclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "language": "bash", "type": "kubectl-command", "context": "6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18218"} {"prompt": "Explain this bash code:\n\nkubectl describe pv task-pv-volume\nName: task-pv-volume\nLabels: type=local\nAnnotations: <none>\nFinalizers: [kubernetes.io/pv-protection]\nStorageClass: standard\nStatus: Terminating\nClaim:\nReclaim Policy: Delete\nAccess Modes: RWO\nCapacity: 1Gi\nMessage:\nSource:\n Type: HostPath (bare host directory volume)\n Path: /tmp/data\n HostPathType:\nEvents: <none>", "completion": "This bash code 6.2 - Persistent Volumes | When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with", "language": "bash", "type": "kubectl-command_explanation", "context": "6.2 - Persistent 
The reclaim policy for a PersistentVolume tells the cluster what to do with", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18219"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol", "completion": " hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18220"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pv-recycler\n namespace: default\nspec:\n restartPolicy: Never\n volumes:\n - name: vol\n hostPath:\n path: /any/path/it/will/be/replaced\n containers:\n - name: pv-recycler\n image: \"registry.k8s.io/busybox\"\n command: [\"/bin/sh\", \"-c\", \"test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \\\"$(ls -A /scrub)\\\" || exit 1\"]\n volumeMounts:\n - name: vol\n mountPath: /scrub", "completion": "This yaml code 6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume 
that is being recycled.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18221"} {"prompt": "Complete this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete", "completion": "Access Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "language": "bash", "type": "kubectl-command", "context": "6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18222"} {"prompt": "Explain this bash code:\n\nkubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nName: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78\nLabels: <none>\nAnnotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner\n pv.kubernetes.io/bound-by-controller: yes\n pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume\nFinalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller]\nStorageClass: vcp-sc\nStatus: Bound\nClaim: default/vcp-pvc-1\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 1Gi\nNode Affinity: <none>\nMessage:\nSource:\n Type: vSphereVolume (a Persistent Disk resource in vSphere)\n VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk\n FSType: ext4\n StoragePolicyName: vSAN Default Storage Policy\nEvents: <none>", "completion": "This bash code 6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "6.2 - Persistent Volumes | The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. 
The following is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18223"} {"prompt": "Complete this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi", "completion": "Node Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "language": "text", "type": "code-example", "context": "6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18224"} {"prompt": "Explain this text code:\n\nName: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d\nLabels: <none>\nAnnotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com\nFinalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer]\nStorageClass: fast\nStatus: Bound\nClaim: demo-app/nginx-logs\nReclaim Policy: Delete\nAccess Modes: RWO\nVolumeMode: Filesystem\nCapacity: 200Mi\nNode Affinity: <none>\nMessage:\nSource:\n Type: CSI (a Container Storage Interface (CSI) volume source)\n Driver: csi.vsphere.vmware.com\n FSType: ext4\n VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd\n ReadOnly: false\n VolumeAttributes: 
storage.kubernetes.io/csiProvisionerIdentity=1648442357185-8081-csi.vsphere.vmware.com\n type=vSphere CNS Block Volume\nEvents: <none>", "completion": "This text code 6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "language": "text", "type": "code-example_explanation", "context": "6.2 - Persistent Volumes | When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/fin", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18225"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc", "completion": " namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18226"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: foo\nspec:\n storageClassName: \"\" # Empty string must be explicitly set otherwise default StorageClass will be set\n volumeName: foo-pv\n ...", "completion": "This yaml code 6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. 
Specify", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18227"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:", "completion": " storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18228"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: foo-pv\nspec:\n storageClassName: \"\"\n claimRef:\n name: foo-pvc\n namespace: foo\n ...", "completion": "This yaml code 6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain, including cases where you are reusing an existing PV.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18229"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage", "completion": "parameters:\n resturl: 
\"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18230"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-vol-default\nprovisioner: vendor-name.example/magicstorage\nparameters:\n resturl: \"http://192.168.10.100:8080\"\n restuser: \"\"\n secretNamespace: \"\"\n secretName: \"\"\nallowVolumeExpansion: true", "completion": "This yaml code 6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never c", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18231"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:", "completion": " - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. 
This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18232"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: pv0003\nspec:\n capacity:\n storage: 5Gi\n volumeMode: Filesystem\n accessModes:\n - ReadWriteOnce\n persistentVolumeReclaimPolicy: Recycle\n storageClassName: slow\n mountOptions:\n - hard\n - nfsvers=4.1\n nfs:\n path: /tmp\n server: 172.17.0.2", "completion": "This yaml code 6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18233"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem", "completion": " resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18234"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: myclaim\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Filesystem\n resources:\n requests:\n storage: 8Gi\n storageClassName: slow\n selector:\n matchLabels:\n release: \"stable\"\n 
matchExpressions:\n - {key: environment, operator: In, values: [dev]}", "completion": "This yaml code 6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Claims use the same conventions as volumes when requesting storage with specific access modes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18235"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend", "completion": " image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18236"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: myfrontend\n image: nginx\n volumeMounts:\n - mountPath: \"/var/www/html\"\n name: mypd\n volumes:\n - name: mypd\n persistentVolumeClaim:\n claimName: myclaim", "completion": "This yaml code 6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with \"Many\" modes (ROX, RWX) is only possible within one namespace.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18237"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18238"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: block-pv\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n persistentVolumeReclaimPolicy: Retain\n fc:\n targetWWNs: [\"50060e801049cfd1\"]\n lun: 0\n readOnly: false", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18239"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 
10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18240"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: block-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n volumeMode: Block\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18241"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26", "completion": " command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "language": "yaml", "type": "pod-definition", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18242"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-block-volume\nspec:\n containers:\n - name: fc-container\n image: fedora:26\n command: [\"/bin/sh\", \"-c\"]\n args: [ \"tail -f /dev/null\" ]\n volumeDevices:\n - name: data\n devicePath: /dev/xvda\n volumes:\n - name: data\n persistentVolumeClaim:\n claimName: block-pvc", "completion": "This yaml code 6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "language": "yaml", "type": "pod-definition_explanation", "context": "6.2 - Persistent Volumes | If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider thi", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18243"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:", "completion": " name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18244"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: restore-pvc\nspec:\n storageClassName: csi-hostpath-sc\n dataSource:\n name: new-snapshot-test\n kind: VolumeSnapshot\n apiGroup: snapshot.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n 
resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Volume Cloning only available for CSI volume plugins.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18245"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:", "completion": " name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18246"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: cloned-pvc\nspec:\n storageClassName: my-csi-plugin\n dataSource:\n name: existing-src-pvc-name\n kind: PersistentVolumeClaim\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Kubernetes supports custom volume populators. 
To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18247"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name", "completion": " kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18248"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: populated-pvc\nspec:\n dataSourceRef:\n name: example-name\n kind: ExampleDataSource\n apiGroup: example.storage.k8s.io\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi", "completion": "This yaml code 6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the P", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. 
External controllers should generate events on the P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18249"} {"prompt": "Complete this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:", "completion": " - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18250"} {"prompt": "Explain this yaml code:\n\napiVersion: gateway.networking.k8s.io/v1beta1\nkind: ReferenceGrant\nmetadata:\n name: allow-ns1-pvc\n namespace: default\nspec:\n from:\n - group: \"\"\n kind: PersistentVolumeClaim\n namespace: ns1\n to:\n - group: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo", "completion": "This yaml code 6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18251"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce", 
"completion": " resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "language": "yaml", "type": "kubernetes-manifest", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18252"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: foo-pvc\n namespace: ns1\nspec:\n storageClassName: example\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 1Gi\n dataSourceRef:\n apiGroup: snapshot.storage.k8s.io\n kind: VolumeSnapshot\n name: new-snapshot-demo\n namespace: default\n volumeMode: Filesystem", "completion": "This yaml code 6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.2 - Persistent Volumes | If you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18253"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:", "completion": " - secret:\n name: 
mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18254"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-secret-downwardapi-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18255"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:", "completion": " name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18256"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - downwardAPI:\n items:\n - path: \"labels\"\n fieldRef:\n fieldPath: metadata.labels\n - path: \"cpu_limit\"\n resourceFieldRef:\n containerName: container-test\n resource: limits.cpu\n - configMap:\n name: myconfigmap\n items:\n - key: config\n path: my-group/my-config", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18257"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true", "completion": " volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18258"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-secrets-nondefault-permission-mode.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18259"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:", "completion": " - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18260"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: volume-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: all-in-one\n mountPath: \"/projected-volume\"\n readOnly: true\n volumes:\n - name: all-in-one\n projected:\n sources:\n - secret:\n name: mysecret\n items:\n - key: username\n path: my-group/my-username\n - secret:\n name: mysecret2\n items:\n - key: password\n path: my-group/my-password\n mode: 511", "completion": "This yaml code 6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | Each projected volume source is listed in the spec under sources. 
The parameters are nearly the same with two exceptions:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18261"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-service-account-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:", "completion": " - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18262"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-service-account-token.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18263"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol", "completion": " mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18264"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-token-test\nspec:\n containers:\n - name: container-test\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/service-account\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - serviceAccountToken:\n audience: api\n expirationSeconds: 3600\n path: token", "completion": "This yaml code 6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18265"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-clustertrustbundle.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true", "completion": " serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18266"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-clustertrustbundle.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18267"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default", "completion": " volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18268"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: sa-ctb-name-test\nspec:\n containers:\n - name: container-test\n image: busybox\n command: [\"sleep\", \"3600\"]\n volumeMounts:\n - name: token-vol\n mountPath: \"/root-certificates\"\n readOnly: true\n serviceAccountName: default\n volumes:\n - name: token-vol\n projected:\n sources:\n - clusterTrustBundle:\n name: example\n path: example-roots.pem\n - clusterTrustBundle:\n signerName: \"example.com/mysigner\"\n labelSelector:\n matchLabels:\n version: live\n path: mysigner-roots.pem\n optional: true", "completion": "This yaml code 6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. 
Kubelet will then handle refreshing the privat", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The podCertificate projected volumes source securely provisions a private key and X.509 certificate chain for pod to use as client or server credentials. Kubelet will then handle refreshing the privat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18269"} {"prompt": "Complete this yaml code:\n\npods/storage/projected-podcertificate.yaml\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian", "completion": " name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18270"} {"prompt": "Explain this yaml code:\n\npods/storage/projected-podcertificate.yaml\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: 
default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18271"} {"prompt": "Complete this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main", "completion": " command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "language": "yaml", "type": "pod-definition", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement 
introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18272"} {"prompt": "Explain this yaml code:\n\n# Sample Pod spec that uses a podCertificate projection to request an ED25519\n# private key, a certificate from the `coolcert.example.com/foo` signer, and\n# write the results to `/var/run/my-x509-credentials/credentialbundle.pem`.\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: podcertificate-pod\nspec:\n serviceAccountName: default\n containers:\n - image: debian\n name: main\n command: [\"sleep\", \"infinity\"]\n volumeMounts:\n - name: my-x509-credentials\n mountPath: /var/run/my-x509-credentials\n volumes:\n - name: my-x509-credentials\n projected:\n defaultMode: 420\n sources:\n - podCertificate:\n keyType: ED25519\n signerName: coolcert.example.com/foo\n credentialBundlePath: credentialbundle.pem", "completion": "This yaml code 6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.3 - Projected Volumes | The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18273"} {"prompt": "Complete this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM", "completion": "Access : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow 
FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "language": "text", "type": "code-example", "context": "6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access, while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18274"} {"prompt": "Explain this text code:\n\nPS C:\\> Get-Acl C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt | Format-List\n\nPath : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt\nOwner : BUILTIN\\Administrators\nGroup : NT AUTHORITY\\SYSTEM\nAccess : NT AUTHORITY\\SYSTEM Allow FullControl\n BUILTIN\\Administrators Allow FullControl\n BUILTIN\\Users Allow ReadAndExecute, Synchronize\nAudit :\nSddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)", "completion": "This text code 6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access, while non-administrator users will have read and execute access.", "language": "text", "type": "code-example_explanation", "context": "6.3 - Projected Volumes | This implies all administrator users like ContainerAdministrator will have read, write and execute access, while non-administrator users will have read and execute access.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18275"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: 
my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18276"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code 6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "6.4 - Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18277"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18278"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "6.4 - Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18279"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass-low-latency.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"", "completion": "provisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18280"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass-low-latency.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18281"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example", "completion": "reclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18282"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: low-latency\n annotations:\n storageclass.kubernetes.io/is-default-class: \"false\"\nprovisioner: csi-driver.example-vendor.example\nreclaimPolicy: Retain # default value is Delete\nallowVolumeExpansion: true\nmountOptions:\n - discard # this might enable UNMAP / TRIM at the block storage layer\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n guaranteedReadWriteLatency: \"true\" # provider-specific", "completion": "This yaml code 6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. For instructions on setting the default StorageClass, see Change the default StorageClass.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | You can mark a StorageClass as the default for your cluster. 
For instructions on setting the default StorageClass, see Change the default StorageClass.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18283"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/pod-volume-binding.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage", "completion": " persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18284"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/pod-volume-binding.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18285"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:", "completion": " claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "language": "yaml", "type": "pod-definition", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18286"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: task-pv-pod\nspec:\n nodeSelector:\n kubernetes.io/hostname: kube-01\n volumes:\n - name: task-pv-storage\n persistentVolumeClaim:\n claimName: task-pv-claim\n containers:\n - name: task-pv-container\n image: nginx\n ports:\n - containerPort: 80\n name: \"http-server\"\n volumeMounts:\n - mountPath: \"/usr/share/nginx/html\"\n name: task-pv-storage", "completion": "This yaml code 6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. 
However, if still required", "language": "yaml", "type": "pod-definition_explanation", "context": "6.5 - Storage Classes | When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18287"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-topology.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:", "completion": " type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18288"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-topology.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18289"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard", "completion": "volumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18290"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: standard\nprovisioner: example.com/example\nparameters:\n type: pd-standard\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.kubernetes.io/zone\n values:\n - us-central-1a\n - us-central-1b", "completion": "This yaml code 6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. 
When a parameter is omitted, some default is use", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | StorageClasses have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is use", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18291"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-aws-ebs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs", "completion": " type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18292"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-aws-ebs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18293"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1", "completion": " iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18294"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: ebs-sc\nprovisioner: ebs.csi.aws.com\nvolumeBindingMode: WaitForFirstConsumer\nparameters:\n csi.storage.k8s.io/fstype: xfs\n type: io1\n iopsPerGB: \"50\"\n encrypted: \"true\"\n tagSpecification_1: \"key1=value1\"\n tagSpecification_2: \"key2=value2\"\nallowedTopologies:\n- matchLabelExpressions:\n - key: topology.ebs.csi.aws.com/zone\n values:\n - us-east-2c", "completion": "This yaml code 6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | tagSpecification: Tags with this prefix are applied to dynamically provisioned EBS volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18295"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-aws-efs.yaml\nkind: 
StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18296"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-aws-efs.yaml\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18297"} {"prompt": "Complete this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc", "completion": "provisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18298"} {"prompt": "Explain this yaml code:\n\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: efs-sc\nprovisioner: efs.csi.aws.com\nparameters:\n provisioningMode: efs-ap\n fileSystemId: fs-92107410\n directoryPerms: \"700\"", "completion": "This yaml code 
6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | For more details, refer to the AWS_EFS_CSI_Driver Dynamic Provisioning documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18299"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-nfs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18300"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-nfs.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. 
Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18301"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs", "completion": "provisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18302"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: example-nfs\nprovisioner: example.com/external-nfs\nparameters:\n server: nfs-server.example.com\n path: /share\n readOnly: \"false\"", "completion": "This yaml code 6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18303"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. 
Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18304"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick", "completion": "This yaml code 6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | diskformat: thin, zeroedthick and eagerzeroedthick. Default: \"thin\".", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18305"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast", "completion": "provisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18306"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/vsphere-volume\nparameters:\n diskformat: zeroedthick\n datastore: VSANDatastore", "completion": "This yaml code 6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | datastore: The user can also specify the datastore in the StorageClass. 
The volume will be created on the datastore specified in the StorageClass, which in this case is VSANDatastore. This field is op", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18307"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-ceph-rbd.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18308"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-ceph-rbd.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18309"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube", "completion": " adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18310"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/rbd # This provisioner is deprecated\nparameters:\n monitors: 198.19.254.105:6789\n adminId: kube\n adminSecretName: ceph-secret\n adminSecretNamespace: kube-system\n pool: kube\n userId: kube\n userSecretName: ceph-secret-user\n userSecretNamespace: default\n fsType: ext4\n imageFormat: \"2\"\n imageFeatures: \"layering\"", "completion": "This yaml code 6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. This parameter is required.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | monitors: Ceph monitors, comma delimited. 
This parameter is required.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18311"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic ceph-secret --type=\"kubernetes.io/rbd\" \\\n --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \\\n --namespace=kube-system", "completion": "This bash code 6.5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "language": "bash", "type": "kubectl-command_explanation", "context": "6.5 - Storage Classes | userSecretNamespace: The namespace for userSecretName.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18312"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-azure-file.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18313"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-azure-file.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18314"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile", "completion": "provisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18315"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: azurefile\nprovisioner: kubernetes.io/azure-file\nparameters:\n skuName: Standard_LRS\n location: eastus\n storageAccount: azure_storage_account_name # example value", "completion": "This yaml code 6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | During storage provisioning, a secret named by secretName is created for the mounting credentials. 
If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secr", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18316"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18317"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-portworx-volume.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18318"} {"prompt": "Complete this yaml code:\n\napiVersion: 
storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high", "completion": "provisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18319"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: portworx-io-priority-high\nprovisioner: kubernetes.io/portworx-volume # This provisioner is deprecated\nparameters:\n repl: \"1\"\n snap_interval: \"70\"\n priority_io: \"high\"", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18320"} {"prompt": "Complete this yaml code:\n\nstorage/storageclass/storageclass-local.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass", "completion": "metadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - 
Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18321"} {"prompt": "Explain this yaml code:\n\nstorage/storageclass/storageclass-local.yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18322"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "language": "yaml", "type": "kubernetes-manifest", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18323"} {"prompt": 
"Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: local-storage\nprovisioner: kubernetes.io/no-provisioner # indicates that this StorageClass does not support automatic provisioning\nvolumeBindingMode: WaitForFirstConsumer", "completion": "This yaml code 6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.5 - Storage Classes | Local volumes do not support dynamic provisioning in Kubernetes 1.34; however a StorageClass should still be created to delay volume binding until a Pod is actually scheduled to the appropriate node.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18324"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18325"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: silver\ndriverName: pd.csi.storage.gke.io\nparameters:\n provisioned-iops: \"3000\"\n provisioned-throughput: \"50\"", "completion": "This yaml code 6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. 
The field driverName must be specified.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field driverName must be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18326"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: silver\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18327"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: silver\n …", "completion": "This yaml code 6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | A new VolumeAttributesClass gold is available in the cluster:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18328"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: VolumeAttributesClass\nmetadata:\n name: gold", "completion": "driverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18329"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: 
VolumeAttributesClass\nmetadata:\n name: gold\ndriverName: pd.csi.storage.gke.io\nparameters:\n iops: \"4000\"\n throughput: \"60\"", "completion": "This yaml code 6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | The end user can update the PVC with the new VolumeAttributesClass gold and apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18330"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim", "completion": "spec:\n …\n volumeAttributesClassName: gold\n …", "language": "yaml", "type": "kubernetes-manifest", "context": "6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18331"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: test-pv-claim\nspec:\n …\n volumeAttributesClassName: gold\n …", "completion": "This yaml code 6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.6 - Volume Attributes Classes | To understand how the resizer works with VolumeAttributesClass, refer to the CSI external-resizer documentation.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18332"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - 
Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18333"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: slow\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-standard", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | The following manifest creates a storage class \"fast\" which provisions SSD-like persistent disks.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18334"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:", "completion": " name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18335"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n name: fast\nprovisioner: kubernetes.io/gce-pd\nparameters:\n type: pd-ssd", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. 
Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. Before Kubernetes v1.6, this was done via the volume.beta.kubernetes.io/storage-class annotat", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18336"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18337"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: claim1\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: fast\n resources:\n requests:\n storage: 30Gi", "completion": "This yaml code 6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.7 - Dynamic Volume Provisioning | This claim results in an SSD-like Persistent Disk being automatically provisioned. 
When the claim is deleted, the volume is destroyed.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18338"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test", "completion": "spec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18339"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test", "completion": "This yaml code 6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | persistentVolumeClaimName is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18340"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:", "completion": " name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. 
In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18341"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content", "completion": "This yaml code 6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects. Here is an example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18342"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io", "completion": " source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. 
This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18343"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455", "completion": "This yaml code 6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | volumeHandle is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snap", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18344"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete", "completion": " driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18345"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | snapshotHandle is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. 
It specifies the CSI snapshot id on the storage", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18346"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n - snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:", "completion": " deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "language": "yaml", "type": "kubernetes-manifest", "context": "6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18347"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\n annotations:\n - snapshot.storage.kubernetes.io/allow-volume-mode-change: \"true\"\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n sourceVolumeMode: Filesystem\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default", "completion": "This yaml code 6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.8 - Volume Snapshots | You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18348"} {"prompt": "Complete this yaml 
code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:", "completion": " name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18349"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.9 - Volume Snapshot Classes | Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to by adding the snapshot.storage.kubernetes.io/is-default-class: \"true\" an", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18350"} {"prompt": "Complete this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass", "completion": " annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "language": "yaml", "type": "kubernetes-manifest", "context": "6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18351"} {"prompt": "Explain this yaml code:\n\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n name: csi-hostpath-snapclass\n annotations:\n snapshot.storage.kubernetes.io/is-default-class: \"true\"\ndriver: hostpath.csi.k8s.io\ndeletionPolicy: Delete\nparameters:", "completion": "This yaml code 6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.9 - Volume Snapshot Classes | If multiple CSI drivers exist, a default VolumeSnapshotClass can be specified for each of them.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18352"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "language": "yaml", "type": "kubernetes-manifest", "context": "6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18353"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: clone-of-pvc-1\n namespace: myns\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: cloning\n resources:\n requests:\n storage: 5Gi\n dataSource:\n kind: PersistentVolumeClaim\n name: pvc-1", "completion": "This yaml code 6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "6.10 - CSI Volume Cloning | The result is a new PVC with the name clone-of-pvc-1 that has the exact same content as the specified source pvc-1.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18354"} {"prompt": "Complete this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:", "completion": " name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "language": "yaml", "type": "kubernetes-manifest", "context": "6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSeconds", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18355"} {"prompt": "Explain this yaml code:\n\napiVersion: storage.k8s.io/v1\nkind: CSIDriver\nmetadata:\n name: hostpath.csi.k8s.io\nspec:\n nodeAllocatableUpdatePeriodSeconds: 60", "completion": "This yaml code 6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSeconds", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "6.12 - Node-specific Volume Limits | Kubelet will periodically call the corresponding CSI driver’s NodeGetInfo endpoint to refresh the maximum number of attachable volumes, using the interval specified in nodeAllocatableUpdatePeriodSeconds", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18356"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the Service is running on>\nFOO_SERVICE_PORT=<the port the Service is running on>", "completion": "This text code 7.1 - Configuration Best Practices | This does imply an ordering requirement - any
Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "text", "type": "code-example_explanation", "context": "7.1 - Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18357"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"", "completion": "\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18358"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: game-demo\ndata:\n # property-like keys; each key maps to a simple value\n player_initial_lives: \"3\"\n ui_properties_file_name: \"user-interface.properties\"\n\n # file-like keys\n game.properties: |\n enemy.types=aliens,monsters\n player.maximum-lives=5 \n user-interface.properties: |\n color.good=purple\n color.bad=yellow\n allow.textmode=true", "completion": "This yaml code 7.2 - ConfigMaps | There are four different ways that you can use a ConfigMap to configure a container inside a Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | There are four 
different ways that you can use a ConfigMap to configure a container inside a Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18359"} {"prompt": "Complete this yaml code:\n\nconfigmap/configure-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME", "completion": " valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18360"} {"prompt": "Explain this yaml code:\n\nconfigmap/configure-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18361"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:", "completion": " configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18362"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: configmap-demo-pod\nspec:\n containers:\n - name: demo\n image: alpine\n command: [\"sleep\", \"3600\"]\n env:\n # Define the environment variable\n - name: PLAYER_INITIAL_LIVES # Notice that the case is different here\n # from the key name in the ConfigMap.\n valueFrom:\n configMapKeyRef:\n name: game-demo # The ConfigMap this value comes from.\n key: player_initial_lives # The key to fetch.\n - name: UI_PROPERTIES_FILE_NAME\n valueFrom:\n configMapKeyRef:\n name: game-demo\n key: ui_properties_file_name\n volumeMounts:\n - name: config\n mountPath: \"/config\"\n readOnly: true\n volumes:\n # You set volumes at the Pod level, then mount them into containers inside that Pod\n - name: config\n configMap:\n # Provide the name of the ConfigMap you want to mount.\n name: game-demo\n # An array of keys from the ConfigMap to create as files\n items:\n - key: \"game.properties\"\n path: \"game.properties\"\n - key: \"user-interface.properties\"\n path: \"user-interface.properties\"", "completion": "This yaml code 7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. 
What matters is how Pods and other objects consume those values.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18363"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18364"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n configMap:\n name: myconfigmap", "completion": "This yaml code 7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | Each ConfigMap you want to use needs to be referred to in .spec.volumes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18365"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18366"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: myconfigmap\ndata:\n username: k8s-admin\n access_level: \"1\"", "completion": "This yaml code 7.2 - ConfigMaps | The following command 
will create the ConfigMap object:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | The following command will create the ConfigMap object:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18367"} {"prompt": "Complete this yaml code:\n\nconfigmap/env-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:", "completion": " containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18368"} {"prompt": "Explain this yaml code:\n\nconfigmap/env-configmap.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18369"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:", "completion": " - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18370"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: app\n command: [\"/bin/sh\", \"-c\", \"printenv\"]\n image: busybox:latest\n envFrom:\n - configMapRef:\n name: myconfigmap", "completion": "This yaml code 7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | The envFrom field instructs Kubernetes to create environment variables from the sources nested within it. 
The inner configMapRef refers to a ConfigMap by its name and selects all its key-value pairs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18371"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container", "completion": " image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "language": "yaml", "type": "pod-definition", "context": "7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18372"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: env-configmap\nspec:\n containers:\n - name: envars-test-container\n image: nginx\n env:\n - name: CONFIGMAP_USERNAME\n valueFrom:\n configMapKeyRef:\n name: myconfigmap\n key: username", "completion": "This yaml code 7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. Other keys from the ConfigMap data ar", "language": "yaml", "type": "pod-definition_explanation", "context": "7.2 - ConfigMaps | In the Pod created from this manifest, you will see that the environment variable CONFIGMAP_USERNAME is set to the value of the username value from the ConfigMap. 
Other keys from the ConfigMap data ar", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18373"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:", "completion": " ...\ndata:\n ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18374"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n ...\ndata:\n ...\nimmutable: true", "completion": "This yaml code 7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. Becaus", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.2 - ConfigMaps | Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. You can only delete and recreate the ConfigMap. 
Becaus", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18375"} {"prompt": "Complete this yaml code:\n\nsecret/dotfile-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18376"} {"prompt": "Explain this yaml code:\n\nsecret/dotfile-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18377"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:", "completion": " - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18378"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: dotfile-secret\ndata:\n .secret-file: dmFsdWUtMg0KDQo=\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: secret-dotfiles-pod\nspec:\n volumes:\n - name: secret-volume\n secret:\n secretName: dotfile-secret\n containers:\n - name: dotfile-test-container\n image: registry.k8s.io/busybox\n command:\n - ls\n - \"-l\"\n - \"/etc/secret-volume\"\n volumeMounts:\n - name: secret-volume\n readOnly: true\n mountPath: \"/etc/secret-volume\"", "completion": "This yaml code 7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed r", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | Consider a program that needs to handle HTTP requests, do some complex business logic, and then sign some messages with an HMAC. 
Because it has complex application logic, there might be an unnoticed r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18379"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic empty-secret\nkubectl get secret empty-secret", "completion": "This bash code 7.3 - Secrets | The output looks like:", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | The output looks like:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18380"} {"prompt": "Complete this yaml code:\n\nsecret/serviceaccount-token-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18381"} {"prompt": "Explain this yaml code:\n\nsecret/serviceaccount-token-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18382"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample", "completion": " annotations:\n 
kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18383"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-sa-sample\n annotations:\n kubernetes.io/service-account.name: \"sa-name\"\ntype: kubernetes.io/service-account-token\ndata:\n extra: YmFyCg==", "completion": "This yaml code 7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | After creating the Secret, wait for Kubernetes to populate the token key in the data field.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18384"} {"prompt": "Complete this yaml code:\n\nsecret/dockercfg-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18385"} {"prompt": "Explain this yaml code:\n\nsecret/dockercfg-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18386"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg", "completion": "type: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. 
Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18387"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-dockercfg\ntype: kubernetes.io/dockercfg\ndata:\n .dockercfg: |\n eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=", "completion": "This yaml code 7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | When you create Docker config Secrets using a manifest, the API server checks whether the expected key exists in the data field, and it verifies if the value provided can be parsed as a valid JSON. Th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18388"} {"prompt": "Complete this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\", "completion": " --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "language": "bash", "type": "kubectl-command", "context": "7.3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18389"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry secret-tiger-docker \\\n --docker-email=tiger@acme.example \\\n --docker-username=tiger \\\n --docker-password=pass1234 \\\n --docker-server=my-registry.example:5000", "completion": "This bash code 7.3 - Secrets | This command creates a Secret of type kubernetes.io/dockerconfigjson.", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | This command creates a Secret of type 
kubernetes.io/dockerconfigjson.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18390"} {"prompt": "Explain this bash code:\n\nkubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d", "completion": "This bash code 7.3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | The output is equivalent to the following JSON document (which is also a valid Docker configuration file):", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18391"} {"prompt": "Complete this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",", "completion": " \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18392"} {"prompt": "Explain this json code:\n\n{\n \"auths\": {\n \"my-registry.example:5000\": {\n \"username\": \"tiger\",\n \"password\": \"pass1234\",\n \"email\": \"tiger@acme.example\",\n \"auth\": \"dGlnZXI6cGFzczEyMzQ=\"\n }\n }\n}", "completion": "This json code 7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. Anyone who can read that Secret can learn the registry access bearer token.", "language": "json", "type": "api-response_explanation", "context": "7.3 - Secrets | The auth value there is base64 encoded; it is obscured but not secret. 
Anyone who can read that Secret can learn the registry access bearer token.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18393"} {"prompt": "Complete this yaml code:\n\nsecret/basicauth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:", "completion": " name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18394"} {"prompt": "Explain this yaml code:\n\nsecret/basicauth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18395"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth", "completion": "type: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18396"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-basic-auth\ntype: kubernetes.io/basic-auth\nstringData:\n username: admin # required field for kubernetes.io/basic-auth\n password: t0p-Secret # required field for kubernetes.io/basic-auth", "completion": "This yaml code 7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. However, using the defined and public Secret type (k", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The basic authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for basic authentication. 
However, using the defined and public Secret type (k", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18397"} {"prompt": "Complete this yaml code:\n\nsecret/ssh-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18398"} {"prompt": "Explain this yaml code:\n\nsecret/ssh-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18399"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth", "completion": "type: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18400"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-ssh-auth\ntype: kubernetes.io/ssh-auth\ndata:\n # the data is abbreviated in this example\n ssh-privatekey: |\n UG91cmluZzYlRW1vdGljb24lU2N1YmE=", "completion": "This yaml code 7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. However, using the defined and public Secret type (kuber", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The SSH authentication Secret type is provided only for convenience. You can create an Opaque type for credentials used for SSH authentication. 
However, using the defined and public Secret type (kuber", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18401"} {"prompt": "Complete this yaml code:\n\nsecret/tls-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18402"} {"prompt": "Explain this yaml code:\n\nsecret/tls-auth-secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18403"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls", "completion": "data:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18404"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: secret-tls\ntype: kubernetes.io/tls\ndata:\n # values are base64 encoded, which obscures them but does NOT provide\n # any useful level of confidentiality\n # Replace the following values with your own base64-encoded certificate and key.\n tls.crt: \"REPLACE_WITH_BASE64_CERT\" \n tls.key: \"REPLACE_WITH_BASE64_KEY\"", "completion": "This yaml code 7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. However, using the defined and public Secret type (kubernetes.io/tls) h", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | The TLS Secret type is provided only for convenience. You can create an Opaque type for credentials used for TLS authentication. 
However, using the defined and public Secret type (kubernetes.io/tls) h", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18405"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls my-tls-secret \\\n --cert=path/to/cert/file \\\n --key=path/to/key/file", "completion": "This bash code 7.3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "language": "bash", "type": "kubectl-command_explanation", "context": "7.3 - Secrets | The public/private key pair must exist before hand. The public key certificate for --cert must be .PEM encoded and must match the given private key for --key.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18406"} {"prompt": "Complete this yaml code:\n\nsecret/bootstrap-token-secret-base64.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18407"} {"prompt": "Explain this yaml code:\n\nsecret/bootstrap-token-secret-base64.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n 
token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18408"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "data:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18409"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n name: bootstrap-token-5emitj\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\ndata:\n auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=\n expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=\n token-id: NWVtaXRq\n token-secret: a3E0Z2lodnN6emduMXAwcg==\n usage-bootstrap-authentication: dHJ1ZQ==\n usage-bootstrap-signing: dHJ1ZQ==", "completion": "This yaml code 7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | A bootstrap token Secret has the following keys specified under data:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18410"} {"prompt": "Complete this yaml code:\n\nsecret/bootstrap-token-secret-literal.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token", "completion": "stringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18411"} {"prompt": "Explain this yaml code:\n\nsecret/bootstrap-token-secret-literal.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code 7.3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18412"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:", "completion": " auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18413"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata:\n # Note how the Secret is named\n name: bootstrap-token-5emitj\n # A bootstrap token Secret usually resides in the kube-system namespace\n namespace: kube-system\ntype: bootstrap.kubernetes.io/token\nstringData:\n auth-extra-groups: \"system:bootstrappers:kubeadm:default-node-token\"\n expiration: \"2020-09-13T04:39:10Z\"\n # This token ID is used in the name\n token-id: \"5emitj\"\n token-secret: \"kq4gihvszzgn1p0r\"\n # This token can be used for authentication\n usage-bootstrap-authentication: \"true\"\n # and it can be used for signing\n usage-bootstrap-signing: \"true\"", "completion": "This yaml code 7.3 - Secrets | There are several options to create a Secret:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | There are several options to create a Secret:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18414"} {"prompt": 
"Complete this yaml code:\n\nsecret/optional-secret.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18415"} {"prompt": "Explain this yaml code:\n\nsecret/optional-secret.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18416"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis", "completion": " volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "language": "yaml", "type": "pod-definition", "context": "7.3 - Secrets | By default, Secrets are required. 
None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18417"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: mypod\nspec:\n containers:\n - name: mypod\n image: redis\n volumeMounts:\n - name: foo\n mountPath: \"/etc/foo\"\n readOnly: true\n volumes:\n - name: foo\n secret:\n secretName: mysecret\n optional: true", "completion": "This yaml code 7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.3 - Secrets | By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18418"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Secret", "completion": "metadata: ...\ndata: ...\nimmutable: true", "language": "yaml", "type": "kubernetes-manifest", "context": "7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18419"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Secret\nmetadata: ...\ndata: ...\nimmutable: true", "completion": "This yaml code 7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.3 - Secrets | You can also update any existing mutable Secret to make it immutable.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18420"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n 
resources:\n requests:\n memory: \"64Mi\"", "completion": " cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18421"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | This feature can be enabled by setting the PodLevelResources feature gate. 
The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an explicit limit of 1 CPU and 200 MiB of memor", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18422"} {"prompt": "Complete this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"", "completion": " containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18423"} {"prompt": "Explain this yaml code:\n\npods/resource/pod-level-resources.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18424"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:", "completion": " - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18425"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-resources-demo\n namespace: pod-resources-example\nspec:\n resources:\n limits:\n cpu: \"1\"\n memory: \"200Mi\"\n requests:\n cpu: \"1\"\n memory: \"100Mi\"\n containers:\n - name: pod-resources-demo-ctr-1\n image: nginx\n resources:\n limits:\n cpu: \"0.5\"\n memory: \"100Mi\"\n requests:\n cpu: \"0.5\"\n memory: \"50Mi\"\n - name: pod-resources-demo-ctr-2\n image: fedora\n command:\n - sleep\n - inf", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. 
Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for P", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18426"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral", "completion": " mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. 
For more information, see Node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18427"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: frontend\nspec:\n containers:\n - name: app\n image: images.my-company.example/app:v4\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n - name: log-aggregator\n image: images.my-company.example/log-aggregator:v6\n resources:\n requests:\n ephemeral-storage: \"2Gi\"\n limits:\n ephemeral-storage: \"4Gi\"\n volumeMounts:\n - name: ephemeral\n mountPath: \"/tmp\"\n volumes:\n - name: ephemeral\n emptyDir:\n sizeLimit: 500Mi", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18428"} {"prompt": "Explain this text code:\n\n# For ext4, with /dev/block-device not mounted\nsudo tune2fs -O project -Q prjquota /dev/block-device", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. 
For both XFS and ext4fs, the mount option is named prjquota.", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18429"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\", "completion": "--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "language": "text", "type": "code-example", "context": "7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18430"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n--request PATCH \\\n--data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1foo\", \"value\": \"5\"}]' \\\nhttp://k8s-master:8080/api/v1/nodes/k8s-node-1/status", "completion": "This text code 7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "language": "text", "type": "code-example_explanation", "context": "7.5 - Resource Management for Pods and Containers | Cluster-level extended resources are not tied to nodes. 
They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18431"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"<extender-endpoint>\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [", "completion": " {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "language": "json", "type": "api-response", "context": "7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18432"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"Policy\",\n \"apiVersion\": \"v1\",\n \"extenders\": [\n {\n \"urlPrefix\":\"<extender-endpoint>\",\n \"bindVerb\": \"bind\",\n \"managedResources\": [\n {\n \"name\": \"example.com/foo\",\n \"ignoredByScheduler\": true\n }\n ]\n }\n ]\n}", "completion": "This json code 7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "language": "json", "type": "api-response_explanation", "context": "7.5 - Resource Management for Pods and Containers | Extended resources allocation by DRA allows cluster administrators to specify an extendedResourceName in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended r", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18433"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container", "completion": " image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "language": "yaml", "type": "pod-definition", "context": "7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18434"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\nspec:\n containers:\n - name: my-container\n image: myimage\n resources:\n requests:\n cpu: 2\n example.com/foo: 1\n limits:\n example.com/foo: 1", "completion": "This yaml code 7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "language": "yaml", "type": "pod-definition_explanation", "context": "7.5 - Resource Management for Pods and Containers | Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18435"} {"prompt": "Explain this bash code:\n\nkubectl describe pod frontend | grep -A 9999999999 Events", "completion": "This bash code 7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. 
Similar error messages can also suggest failure due to insufficient memory (PodEx", "language": "bash", "type": "kubectl-command_explanation", "context": "7.5 - Resource Management for Pods and Containers | In the preceding example, the Pod named \"frontend\" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodEx", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18436"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz", "completion": " name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "language": "yaml", "type": "kubernetes-manifest", "context": "7.6 - Organizing Cluster Access Using kubeconfig Files | Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18437"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\n\nclusters:\n- cluster:\n proxy-url: http://proxy.example.org:3128\n server: https://k8s.example.org/k8s/clusters/c-xxyyzz\n name: development\n\nusers:\n- name: developer\n\ncontexts:\n- context:\n name: development", "completion": "This yaml code 7.6 - Organizing Cluster Access Using kubeconfig Files | Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "7.6 - Organizing Cluster Access Using kubeconfig Files | Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18438"} {"prompt": "Complete this text code:\n\n# The per-mode level label indicates which
policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n", "completion": "# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "language": "text", "type": "code-example", "context": "8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18439"} {"prompt": "Explain this text code:\n\n# The per-mode level label indicates which policy level to apply for the mode.\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# LEVEL must be one of `privileged`, `baseline`, or `restricted`.\npod-security.kubernetes.io/<MODE>: <LEVEL>\n\n# Optional: per-mode version label that can be used to pin the policy to the\n# version that shipped with a given Kubernetes minor version (for example v1.34).\n#\n# MODE must be one of `enforce`, `audit`, or `warn`.\n# VERSION must be a valid Kubernetes minor version, or `latest`.\npod-security.kubernetes.io/<MODE>-version: <VERSION>", "completion": "This text code 8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "language": "text", "type": "code-example_explanation", "context": "8.3 - Pod Security Admission | Check out Enforce Pod Security Standards with Namespace Labels to see example usage.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18440"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " 
annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "language": "yaml", "type": "service-definition", "context": "8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18441"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n annotations:\n kubernetes.io/enforce-mountable-secrets: \"true\"\n name: my-serviceaccount\n namespace: my-namespace", "completion": "This yaml code 8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "language": "yaml", "type": "service-definition_explanation", "context": "8.4 - Service Accounts | When this annotation is set to \"true\", the Kubernetes control plane ensures that the Secrets from this ServiceAccount are subject to certain mounting restrictions.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18442"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",", "completion": " \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "language": "json", "type": "api-response", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18443"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": 
\"Policy\",\n \"spec\": {\n \"user\": \"bob\",\n \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "completion": "This json code 8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "language": "json", "type": "api-response_explanation", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18444"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",", "completion": " \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18445"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "completion": "This json code 8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "language": "json", "type": "api-response_explanation", "context": "8.8 - Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18446"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:", "completion": " - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "language": "yaml", "type": "kubernetes-manifest", "context": "8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18447"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n - schedulerName: my-scheduler\n plugins:\n # Disable specific plugins for different extension points\n # You can disable all plugins for an extension point using \"*\"\n queueSort:\n disabled:\n - name: \"*\" # Disable all queueSort plugins\n # - name: \"PrioritySort\" # Disable specific queueSort plugin\n filter:\n disabled:\n - name: \"*\" # Disable all filter plugins\n # - name: \"NodeResourcesFit\" # Disable specific filter plugin\n permit:\n disabled:\n - name: \"*\" # Disables all permit plugins\n # - name: \"TaintToleration\" # Disable specific permit plugin", "completion": "This yaml code 8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "8.13 - Hardening Guide - Scheduler Configuration | This creates a scheduler profile my-custom-scheduler. 
Whenever the .spec of a Pod does not have a value for .spec.schedulerName, the kube-scheduler runs for that Pod, using its main configuration, and", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18448"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18449"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/problematic-limit-range.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code 9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18450"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits", "completion": " cpu: 500m\n defaultRequest: 
# this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18451"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: cpu-resource-constraint\nspec:\n limits:\n - default: # this section defines default limits\n cpu: 500m\n defaultRequest: # this section defines default requests\n cpu: 500m\n max: # max and min define the limit range\n cpu: \"1\"\n min:\n cpu: 100m\n type: Container", "completion": "This yaml code 9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.1 - Limit Ranges | along with a Pod that declares a CPU resource request of 700m, but not a limit:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18452"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18453"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: 
registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18454"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:", "completion": " containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18455"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | then that Pod will not be scheduled, failing with an error similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18456"} {"prompt": "Complete this yaml code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | Examples of 
policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18457"} {"prompt": "Explain this yaml code:\n\nconcepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18458"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:", "completion": " - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "language": "yaml", "type": "pod-definition", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18459"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-no-conflict-with-limitrange-cpu\nspec:\n containers:\n - name: demo\n image: registry.k8s.io/pause:3.8\n resources:\n requests:\n cpu: 700m\n limits:\n cpu: 700m", "completion": "This yaml code 9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "language": "yaml", "type": "pod-definition_explanation", "context": "9.1 - Limit Ranges | Examples of policies that could be created using LimitRange are:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18460"} 
{"prompt": "Complete this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass", "completion": " operator: In\n values:\n - middle", "language": "text", "type": "code-example", "context": "9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18461"} {"prompt": "Explain this text code:\n\nscopeSelector:\n matchExpressions:\n - scopeName: PriorityClass\n operator: In\n values:\n - middle", "completion": "This text code 9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "language": "text", "type": "code-example_explanation", "context": "9.2 - Resource Quotas | If the operator is Exists or DoesNotExist, the values field must NOT be specified.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18462"} {"prompt": "Complete this yaml code:\n\npolicy/quota.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_completion_18463"} {"prompt": "Explain this yaml code:\n\npolicy/quota.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18464"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:", "completion": " hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "language": "yaml", 
"type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18465"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-high\n spec:\n hard:\n cpu: \"1000\"\n memory: \"200Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"high\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-medium\n spec:\n hard:\n cpu: \"10\"\n memory: \"20Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"medium\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pods-low\n spec:\n hard:\n cpu: \"5\"\n memory: \"10Gi\"\n pods: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: PriorityClass\n values: [\"low\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18466"} {"prompt": "Complete this yaml code:\n\npolicy/high-priority-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu", "completion": " command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18467"} {"prompt": "Explain this yaml 
code:\n\npolicy/high-priority-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18468"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]", "completion": " args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "language": "yaml", "type": "pod-definition", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18469"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: high-priority\nspec:\n containers:\n - name: high-priority\n image: ubuntu\n command: [\"/bin/sh\"]\n args: [\"-c\", \"while true; do echo hello; sleep 10;done\"]\n resources:\n requests:\n memory: \"10Gi\"\n cpu: \"500m\"\n limits:\n memory: \"10Gi\"\n cpu: \"500m\"\n priorityClassName: high", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "pod-definition_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18470"} {"prompt": 
"Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:", "completion": " hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18471"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: disable-cross-namespace-affinity\n namespace: foo-ns\nspec:\n hard:\n pods: \"0\"\n scopeSelector:\n matchExpressions:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | If operators want to disallow using namespaces and namespaceSelector by default, and only allow it for specific namespaces, they could configure CrossNamespacePodAffinity as a limited resource by sett", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18472"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "language": "yaml", "type": 
"kubernetes-manifest", "context": "9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18473"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: CrossNamespacePodAffinity\n operator: Exists", "completion": "This yaml code 9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | With the above configuration, pods can use namespaces and namespaceSelector in pod affinity only if the namespace where they are created have a resource quota object with CrossNamespacePodAffinity sco", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18474"} {"prompt": "Complete this yaml code:\n\npolicy/quota-vac.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver", "completion": " spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: 
[\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18475"} {"prompt": "Explain this yaml code:\n\npolicy/quota-vac.yaml\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18476"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: 
VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:", "completion": " hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18477"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nitems:\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-gold\n spec:\n hard:\n requests.storage: \"10Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"gold\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-silver\n spec:\n hard:\n requests.storage: \"20Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"silver\"]\n- apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: pvcs-copper\n spec:\n hard:\n requests.storage: \"30Gi\"\n persistentvolumeclaims: \"10\"\n scopeSelector:\n matchExpressions:\n - operator: In\n scopeName: VolumeAttributesClass\n values: [\"copper\"]", "completion": "This yaml code 9.2 - Resource Quotas | Apply the YAML using kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply the YAML using kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18478"} 
{"prompt": "Complete this yaml code:\n\npolicy/gold-vac-pvc.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:", "completion": " accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18479"} {"prompt": "Explain this yaml code:\n\npolicy/gold-vac-pvc.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18480"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:", "completion": " - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18481"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: # change 
this to the name of the storage class you want to use\n volumeAttributesClassName: gold", "completion": "This yaml code 9.2 - Resource Quotas | Apply it with kubectl create.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Apply it with kubectl create.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18482"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"silver\"}}'", "completion": "This bash code 9.2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | Verify that \"Used\" stats for \"silver\" volume attributes class quota, pvcs-silver has changed, pvcs-copper is unchanged, and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18483"} {"prompt": "Explain this bash code:\n\nkubectl patch pvc gold-vac-pvc --type='merge' -p '{\"spec\":{\"volumeAttributesClassName\":\"copper\"}}'", "completion": "This bash code 9.2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | Verify that \"Used\" stats for \"copper\" volume attributes class quota, pvcs-copper has changed, pvcs-silver and pvcs-gold might be unchanged or released, which depends on the PVC's status.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18484"} {"prompt": 
"Complete this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default", "completion": " volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18485"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: gold-vac-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 2Gi\n storageClassName: default\n volumeAttributesClassName: copper\nstatus:\n accessModes:\n - ReadWriteOnce\n capacity:\n storage: 2Gi\n currentVolumeAttributesClassName: gold\n phase: Bound\n modifyVolumeStatus:\n status: InProgress\n targetVolumeAttributesClassName: silver\n storageClassName: default", "completion": "This yaml code 9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Wait a moment for the volume modification to complete, then verify the quota again.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18486"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 
4\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18487"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > compute-resources.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-resources\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: \"1Gi\"\n limits.cpu: \"2\"\n limits.memory: \"2Gi\"\n requests.nvidia.com/gpu: 4\nEOF", "completion": "This yaml code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18488"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./compute-resources.yaml --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18489"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:", "completion": " configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n 
services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18490"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > object-counts.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: object-counts\nspec:\n hard:\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n pods: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\nEOF", "completion": "This yaml code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18491"} {"prompt": "Explain this bash code:\n\nkubectl create -f ./object-counts.yaml --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18492"} {"prompt": "Explain this bash code:\n\nkubectl describe quota compute-resources --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard 
namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18493"} {"prompt": "Explain this bash code:\n\nkubectl describe quota object-counts --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | kubectl also supports object count quota for all standard namespaced resources using the syntax count/<resource>.<group>:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18494"} {"prompt": "Explain this bash code:\n\nkubectl create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4 --namespace=myspace", "completion": "This bash code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. 
So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18495"} {"prompt": "Explain this bash code:\n\nkubectl create deployment nginx --image=nginx --namespace=myspace --replicas=2", "completion": "This bash code 9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18496"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1", "completion": " kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18497"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: \"ResourceQuota\"\n configuration:\n apiVersion: apiserver.config.k8s.io/v1\n kind: ResourceQuotaConfiguration\n limitedResources:\n - resource: pods\n matchScopes:\n - scopeName: PriorityClass\n operator: In\n values: [\"cluster-services\"]", 
"completion": "This yaml code 9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | Then, create a resource quota object in the kube-system namespace:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18498"} {"prompt": "Complete this yaml code:\n\npolicy/priority-class-resourcequota.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services", "completion": "spec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18499"} {"prompt": "Explain this yaml code:\n\npolicy/priority-class-resourcequota.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18500"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:", "completion": " scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18501"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pods-cluster-services\nspec:\n scopeSelector:\n matchExpressions:\n - operator : In\n scopeName: PriorityClass\n values: [\"cluster-services\"]", "completion": "This yaml code 9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18502"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system", "completion": "This bash code 9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "language": "bash", "type": "kubectl-command_explanation", "context": "9.2 - Resource Quotas | In this case, a pod creation will be allowed if:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18503"} {"prompt": "Complete this text code:\n\nspec:\n containers:", "completion": " - name: nginx\n image: nginx", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18504"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. 
It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the BestEffort QoS class because no resource requests or limits are specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18505"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18506"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits and the cpu quantity is not specified. 
It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18507"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18508"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"100Mi\"\n cpu: \"1\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Burstable QoS class because resource requests do not equal limits. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18509"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. 
The nginx conta", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18510"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"\n requests:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. And the container's resource limit for the CPU resource is an integer greater than or equal to one. The nginx conta", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18511"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:", "completion": " limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18512"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"1.5\"\n requests:\n memory: \"200Mi\"\n cpu: \"1.5\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. 
It runs in the shared pool.", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because requests are equal to limits. But the container's resource limit for the CPU resource is a fraction. It runs in the shared pool.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18513"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx", "completion": " resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "language": "text", "type": "code-example", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18514"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - name: nginx\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"2\"", "completion": "This text code 9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. And the container's resource limit for the CPU reso", "language": "text", "type": "code-example_explanation", "context": "9.4 - Node Resource Managers | The pod above runs in the Guaranteed QoS class because only limits are specified and requests are set equal to limits when not explicitly specified. 
And the container's resource limit for the CPU reso", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18515"} {"prompt": "Complete this yaml code:\n\npods/pod-with-node-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In", "completion": " values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18516"} {"prompt": "Explain this yaml code:\n\npods/pod-with-node-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": 
"k8s_00617_code_explain_18517"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:", "completion": " - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18518"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-node-affinity\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: topology.kubernetes.io/zone\n operator: In\n values:\n - antarctica-east1\n - antarctica-west1\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: another-node-label-key\n operator: In\n values:\n - another-node-label-value\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | In this example, the following rules apply:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18519"} {"prompt": "Complete this yaml code:\n\npods/pod-with-affinity-preferred-weight.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: 
with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:", "completion": " - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18520"} {"prompt": "Explain this yaml code:\n\npods/pod-with-affinity-preferred-weight.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods 
to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18521"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1", "completion": " preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18522"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-affinity-preferred-weight\nspec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: label-1\n operator: In\n values:\n - key-1\n - weight: 50\n preference:\n matchExpressions:\n - key: label-2\n operator: In\n values:\n - key-2\n containers:\n - name: with-node-affinity\n image: registry.k8s.io/pause:3.8", 
"completion": "This yaml code 10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | If there are two possible nodes that match the preferredDuringSchedulingIgnoredDuringExecution rule, one with the label-1:key-1 label and another with the label-2:key-2 label, the scheduler considers", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18523"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity", "completion": " args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "language": "yaml", "type": "kubernetes-manifest", "context": "10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18524"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n - schedulerName: foo-scheduler\n pluginConfig:\n - name: NodeAffinity\n args:\n addedAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: scheduler-profile\n operator: In\n values:\n - foo", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. That is, in order to match the Pod, nodes need to sati", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.2 - Assigning Pods to Nodes | The addedAffinity is applied to all Pods that set .spec.schedulerName to foo-scheduler, in addition to the NodeAffinity specified in the PodSpec. 
That is, in order to match the Pod, nodes need to sati", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18525"} {"prompt": "Complete this yaml code:\n\npods/pod-with-pod-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18526"} {"prompt": "Explain this yaml code:\n\npods/pod-with-pod-affinity.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule 
and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18527"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1", "completion": " topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18528"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: with-pod-affinity\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S1\n topologyKey: topology.kubernetes.io/zone\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: security\n operator: In\n values:\n - S2\n topologyKey: topology.kubernetes.io/zone\n containers:\n - name: with-pod-affinity\n image: registry.k8s.io/pause:3.8", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the \"hard\" requiredDuringSchedulingIgnoredDuringExecution, while the anti-affinity rule uses the \"", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18529"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:", "completion": " - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18530"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: application-server\n...\nspec:\n template:\n spec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - database\n topologyKey: topology.kubernetes.io/zone\n # Only Pods from a given rollout are taken into consideration when calculating pod affinity.\n # If you update the Deployment, the replacement Pods follow their own affinity rules\n # (if there are any defined in the new Pod template)\n matchLabelKeys:\n - pod-template-hash", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The mismatchLabelKeys field is a beta-level field and is enabled by default in Kubernetes 1.34. 
When you want to disable it, you have to disable it explicitly via the MatchLabelKeysInPodAffinity featu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18531"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool", "completion": " podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18532"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n labels:\n # Assume that all relevant Pods have a \"tenant\" label set\n tenant: tenant-a\n...\nspec:\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant land on the correct node pool\n - matchLabelKeys:\n - tenant\n topologyKey: node-pool\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n # ensure that Pods associated with this tenant can't schedule to nodes used for another tenant\n - mismatchLabelKeys:\n - tenant # whatever the value of the \"tenant\" label for this Pod, prevent\n # scheduling to nodes in any pool where any Pod from a different\n # tenant is running.\n labelSelector:\n # We have to have the labelSelector which selects only Pods with the tenant label,\n # otherwise this Pod would have anti-affinity against Pods from daemonsets as well, for example,\n # which aren't supposed to have the tenant label.\n matchExpressions:\n - key: tenant\n operator: Exists\n topologyKey: node-pool", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure th", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. 
These rules allow you to configure th", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18533"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store", "completion": " spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18534"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: redis-cache\nspec:\n selector:\n matchLabels:\n app: store\n replicas: 3\n template:\n metadata:\n labels:\n app: store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: redis-server\n image: redis:3.2-alpine", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. 
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The following example Deployment for the web servers creates replicas with the label app=web-store. The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod with the la", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18535"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:", "completion": " matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: \"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "language": "yaml", "type": "deployment-definition", "context": "10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18536"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: web-server\nspec:\n selector:\n matchLabels:\n app: web-store\n replicas: 3\n template:\n metadata:\n labels:\n app: web-store\n spec:\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - web-store\n topologyKey: 
\"kubernetes.io/hostname\"\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: app\n operator: In\n values:\n - store\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - name: web-app\n image: nginx:1.16-alpine", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "language": "yaml", "type": "deployment-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | Creating the two preceding Deployments results in the following cluster layout, where each web server is co-located with a cache, on three separate nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18537"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "language": "yaml", "type": "pod-definition", "context": "10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18538"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx\n nodeName: kube-01", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | The above Pod will only run on the node kube-01.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18539"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "language": "yaml", "type": 
"pod-definition", "context": "10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18540"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n...\nstatus:\n nominatedNodeName: kube-01", "completion": "This yaml code 10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. You", "language": "yaml", "type": "pod-definition_explanation", "context": "10.2 - Assigning Pods to Nodes | You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other topology domains that you define. 
You", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18541"} {"prompt": "Complete this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:", "completion": " name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18542"} {"prompt": "Explain this yaml code:\n\n# You need to change this example to match the actual runtime name, and per-Pod\n# resource overhead, that the container runtime is adding in your cluster.\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: kata-fc\nhandler: kata-fc\noverhead:\n podFixed:\n memory: \"120Mi\"\n cpu: \"250m\"", "completion": "This yaml code 10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.3 - Pod Overhead | Workloads which are created which specify the kata-fc RuntimeClass handler will take the memory and cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup s", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18543"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - 
name: busybox-ctr\n image: busybox:1.28\n stdin: true", "completion": " tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "language": "yaml", "type": "pod-definition", "context": "10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18544"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n runtimeClassName: kata-fc\n containers:\n - name: busybox-ctr\n image: busybox:1.28\n stdin: true\n tty: true\n resources:\n limits:\n cpu: 500m\n memory: 100Mi\n - name: nginx-ctr\n image: nginx\n resources:\n limits:\n cpu: 1500m\n memory: 100Mi", "completion": "This yaml code 10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod", "language": "yaml", "type": "pod-definition_explanation", "context": "10.3 - Pod Overhead | At admission time the RuntimeClass admission controller updates the workload's PodSpec to include the overhead as described in the RuntimeClass. 
If the PodSpec already has this field defined, the Pod", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18545"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.overhead}'", "completion": "This bash code 10.3 - Pod Overhead | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.3 - Pod Overhead | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18546"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'", "completion": "This bash code 10.3 - Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.3 - Pod Overhead | The total container requests are 2000m CPU and 200MiB of memory:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18547"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nPOD_ID=\"$(sudo crictl pods --name test-pod -q)\"", "completion": "This text code 10.3 - Pod Overhead | From this, you can determine the cgroup path for the Pod:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | From this, you can determine the cgroup path for the Pod:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18548"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled\nsudo crictl inspectp -o=json $POD_ID | grep cgroupsPath", "completion": "This text code 10.3 - Pod Overhead | The resulting cgroup path includes the Pod's pause container. The Pod level cgroup is one directory above.", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | The resulting cgroup path includes the Pod's pause container. 
The Pod level cgroup is one directory above.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18549"} {"prompt": "Explain this text code:\n\n# Run this on the node where the Pod is scheduled.\n# Also, change the name of the cgroup to match the cgroup allocated for your pod.\n cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes", "completion": "This text code 10.3 - Pod Overhead | This is 320 MiB, as expected:", "language": "text", "type": "code-example_explanation", "context": "10.3 - Pod Overhead | This is 320 MiB, as expected:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18550"} {"prompt": "Complete this yaml code:\n\npods/pod-with-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18551"} {"prompt": "Explain this yaml code:\n\npods/pod-with-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18552"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: 
test-pod\nspec:", "completion": " schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18553"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n schedulingGates:\n - name: example.com/foo\n - name: example.com/bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | After the Pod's creation, you can check its state using:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18554"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code 10.4 - Pod Scheduling Readiness | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.4 - Pod Scheduling Readiness | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18555"} {"prompt": "Complete this yaml code:\n\npods/pod-without-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18556"} {"prompt": "Explain this yaml 
code:\n\npods/pod-without-scheduling-gates.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18557"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod", "completion": "spec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "language": "yaml", "type": "pod-definition", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18558"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\nspec:\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.6", "completion": "This yaml code 10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.4 - Pod Scheduling Readiness | You can check if the schedulingGates is cleared by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18559"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'", "completion": "This bash code 10.4 - Pod Scheduling Readiness | The output is expected to be empty. And you can check its latest status by running:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.4 - Pod Scheduling Readiness | The output is expected to be empty. 
And you can check its latest status by running:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18560"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:", "completion": " - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18561"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: example-pod\nspec:\n # Configure a topology spread constraint\n topologySpreadConstraints:\n - maxSkew: <integer>\n minDomains: <integer> # optional\n topologyKey: <string>\n whenUnsatisfiable: <string>\n labelSelector: <object>\n matchLabelKeys: <list> # optional; beta since v1.27\n nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26\n nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26\n ### other Pod fields go here", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | You can read more about this field by running kubectl 
explain Pod.spec.topologySpreadConstraints or refer to the scheduling section of the API reference for Pod.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18562"} {"prompt": "Complete this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "language": "text", "type": "code-example", "context": "10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18563"} {"prompt": "Explain this text code:\n\ntopologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: foo\n matchLabelKeys:\n - pod-template-hash", "completion": "This text code 10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | The matchLabelKeys field is a beta-level field and enabled by default in 1.27. 
You can disable it by disabling the MatchLabelKeysInPodTopologySpread feature gate.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18564"} {"prompt": "Complete this yaml code:\n\npods/topology-spread-constraints/one-constraint.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18565"} {"prompt": "Explain this yaml code:\n\npods/topology-spread-constraints/one-constraint.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). 
The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18566"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:", "completion": " - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18567"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). The field whenUns", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | From that manifest, topologyKey: zone implies the even distribution will only be applied to nodes that are labeled zone: <any value> (nodes that don't have a zone label are skipped). 
The field whenUns", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18568"} {"prompt": "Complete this yaml code:\n\npods/topology-spread-constraints/two-constraints.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18569"} {"prompt": "Explain this yaml code:\n\npods/topology-spread-constraints/two-constraints.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes 
in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18570"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule", "completion": " labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18571"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n - maxSkew: 1\n topologyKey: node\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone B; while in 
terms of the second constraint, the incoming Pod can only be scheduled to the node node4", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18572"} {"prompt": "Complete this yaml code:\n\npods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:", "completion": " matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18573"} {"prompt": "Explain this yaml code:\n\npods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18574"} 
{"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:", "completion": " foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "language": "yaml", "type": "pod-definition", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18575"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: mypod\n labels:\n foo: bar\nspec:\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n foo: bar\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: zone\n operator: NotIn\n values:\n - zoneC\n containers:\n - name: pause\n image: registry.k8s.io/pause:3.1", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "language": "yaml", "type": "pod-definition_explanation", "context": "10.5 - Pod Topology Spread Constraints | There are some implicit conventions worth noting here:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18576"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:", "completion": " - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: 
ScheduleAnyway\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18577"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints:\n - maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: ScheduleAnyway\n defaultingType: List", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | If you don't configure any cluster-level default constraints for pod topology spreading, then kube-scheduler acts as if you specified the following default topology constraints:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18578"} {"prompt": "Complete this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"", "completion": " whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "language": "text", "type": "code-example", "context": "10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18579"} 
{"prompt": "Explain this text code:\n\ndefaultConstraints:\n - maxSkew: 3\n topologyKey: \"kubernetes.io/hostname\"\n whenUnsatisfiable: ScheduleAnyway\n - maxSkew: 5\n topologyKey: \"topology.kubernetes.io/zone\"\n whenUnsatisfiable: ScheduleAnyway", "completion": "This text code 10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "language": "text", "type": "code-example_explanation", "context": "10.5 - Pod Topology Spread Constraints | Also, the legacy SelectorSpread plugin, which provides an equivalent behavior, is disabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18580"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler", "completion": " pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "language": "yaml", "type": "kubernetes-manifest", "context": "10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18581"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\n\nprofiles:\n - schedulerName: default-scheduler\n pluginConfig:\n - name: PodTopologySpread\n args:\n defaultConstraints: []\n defaultingType: List", "completion": "This yaml code 10.5 - Pod Topology Spread Constraints | In Kubernetes, inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.5 - Pod Topology Spread Constraints | In Kubernetes, 
inter-Pod affinity and anti-affinity control how Pods are scheduled in relation to one another - either more packed or more scattered.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18582"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18583"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"", "completion": "This text code 10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18584"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"", "completion": " operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. 
However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18585"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This text code 10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action byp", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18586"} {"prompt": "Complete this yaml code:\n\npods/pod-with-toleration.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "10.6 - Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18587"} {"prompt": "Explain this yaml code:\n\npods/pod-with-toleration.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code 10.6 - Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": 
"pod-definition_explanation", "context": "10.6 - Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18588"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:", "completion": " containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "language": "yaml", "type": "pod-definition", "context": "10.6 - Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18589"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n tolerations:\n - key: \"example-key\"\n operator: \"Exists\"\n effect: \"NoSchedule\"", "completion": "This yaml code 10.6 - Taints and Tolerations | The default value for operator is Equal.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.6 - Taints and Tolerations | The default value for operator is Equal.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18590"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes node1 key1=value1:NoSchedule\nkubectl taint nodes node1 key1=value1:NoExecute\nkubectl taint nodes node1 key2=value2:NoSchedule", "completion": "This bash code 10.6 - Taints and Tolerations | And a pod has two tolerations:", "language": "bash", "type": "kubectl-command_explanation", "context": "10.6 - Taints and Tolerations | And a pod has two tolerations:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18591"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: 
\"value1\"", "completion": " effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18592"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoSchedule\"\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"", "completion": "This text code 10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18593"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"", "completion": " value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. 
If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18594"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"key1\"\n operator: \"Equal\"\n value: \"value1\"\n effect: \"NoExecute\"\n tolerationSeconds: 3600", "completion": "This text code 10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "language": "text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, t", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18595"} {"prompt": "Complete this text code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"", "completion": " operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "language": "text", "type": "code-example", "context": "10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18596"} {"prompt": "Explain this text code:\n\ntolerations:\n- key: \"node.kubernetes.io/unreachable\"\n operator: \"Exists\"\n effect: \"NoExecute\"\n tolerationSeconds: 6000", "completion": "This text code 10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "language": 
"text", "type": "code-example_explanation", "context": "10.6 - Taints and Tolerations | Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18597"} {"prompt": "Explain this go code:\n\nfunc ScoreNode(_ *v1.Pod, n *v1.Node) (int, error) {\n return getBlinkingLightCount(n)\n}", "completion": "This go code 10.7 - Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "language": "go", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18598"} {"prompt": "Complete this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)", "completion": " }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "language": "go", "type": "code-example", "context": "10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18599"} {"prompt": "Explain this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)\n }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "completion": "This go code 10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the
scheduling cycle is aborted.", "language": "go", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18600"} {"prompt": "Complete this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool", "completion": "}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "language": "text", "type": "code-example", "context": "10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18601"} {"prompt": "Explain this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool\n}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "completion": "This text code 10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "language": "text", "type": "code-example_explanation", "context": "10.7 - Scheduling Framework | You can enable or disable plugins in the scheduler configuration. 
If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18602"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com", "completion": " selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18603"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. Your workload", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18604"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1", "completion": " # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18605"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1\n # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. 
The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18606"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats", "completion": "spec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "language": "yaml", "type": "pod-definition", "context": "10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18607"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats\nspec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "language": "yaml", "type": "pod-definition_explanation", "context": "10.8 - Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18608"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:", "completion": " devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", 
"source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18609"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18610"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:", "completion": " selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18611"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18612"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:", "completion": " memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18613"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:\n memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18614"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:", "completion": " - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18615"} {"prompt": "Explain this 
yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:\n - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18616"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:", "completion": " requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18617"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic 
Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18618"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:", "completion": " devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18619"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:\n devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. 
If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18620"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.", "completion": " # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18621"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.\n # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18622"} {"prompt": "Complete this yaml code:\n\napiVersion: 
resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1", "completion": " devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "language": "yaml", "type": "kubernetes-manifest", "context": "10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18623"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1\n devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "completion": "This yaml code 10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.8 - Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18624"} {"prompt": "Explain this bash code:\n\nkubectl get pods -n kube-system | grep kube-scheduler", "completion": "This bash code 10.9 - Scheduler Performance Tuning | to verify that the kube-scheduler component is healthy.", "language": "bash", "type": "kubectl-command_explanation", "context": "10.9 - Scheduler Performance Tuning | to verify that 
the kube-scheduler component is healthy.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18625"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider", "completion": "\n...\n\npercentageOfNodesToScore: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18626"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1alpha1\nkind: KubeSchedulerConfiguration\nalgorithmSource:\n provider: DefaultProvider\n\n...\n\npercentageOfNodesToScore: 50", "completion": "This yaml code 10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. There is also a hardcoded minimum value of 100 nodes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.9 - Scheduler Performance Tuning | percentageOfNodesToScore must be a value between 1 and 100 with the default value being calculated based on the cluster size. 
There is also a hardcoded minimum value of 100 nodes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18627"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18628"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code 10.10 - Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18629"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3", "completion": " - name: intel.com/bar\n weight: 3\n 
requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18630"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "completion": "This yaml code 10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.10 - Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18631"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 0\n - utilization: 100\n score: 10", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. 
To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18632"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10", "completion": "This text code 10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18633"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 10\n - utilization: 100\n score: 0", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18634"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 10\n - utilization: 100\n score: 0", "completion": "This text code 10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18635"} {"prompt": "Complete this text code:\n\nresources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | 
It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18636"} {"prompt": "Explain this text code:\n\nresources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1", "completion": "This text code 10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18637"} {"prompt": "Complete this text code:\n\nresources:\n - name: intel.com/foo\n weight: 5", "completion": " - name: cpu\n weight: 3\n - name: memory\n weight: 1", "language": "text", "type": "code-example", "context": "10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18638"} {"prompt": "Explain this text code:\n\nresources:\n - name: intel.com/foo\n weight: 5\n - name: cpu\n weight: 3\n - name: memory\n weight: 1", "completion": "This text code 10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "language": "text", "type": "code-example_explanation", "context": "10.10 - Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. 
Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18639"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18640"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for XYZ service pods only.\"", "completion": "This yaml code 10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled will stay in the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.11 - Pod Priority and Preemption | Pods with preemptionPolicy: Never will be placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. 
A non-preempting pod waiting to be scheduled will stay in the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18641"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting", "completion": "value: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18642"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: high-priority-nonpreempting\nvalue: 1000000\npreemptionPolicy: Never\nglobalDefault: false\ndescription: \"This priority class will not cause other pods to be preempted.\"", "completion": "This yaml code 10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. The priority admission controller uses the priorityClassName fiel", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.11 - Pod Priority and Preemption | After you have one or more PriorityClasses, you can create Pods that specify one of those PriorityClass names in their specifications. 
The priority admission controller uses the priorityClassName fiel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18643"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test", "completion": "spec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "language": "yaml", "type": "pod-definition", "context": "10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18644"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n env: test\nspec:\n containers:\n - name: nginx\n image: nginx\n imagePullPolicy: IfNotPresent\n priorityClassName: high-priority", "completion": "This yaml code 10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the", "language": "yaml", "type": "pod-definition_explanation", "context": "10.11 - Pod Priority and Preemption | When Pod priority is enabled, the scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. 
As a result, the", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18645"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"", "completion": " imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "language": "yaml", "type": "kubernetes-manifest", "context": "10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18646"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nevictionHard:\n memory.available: \"500Mi\"\n nodefs.available: \"1Gi\"\n imagefs.available: \"100Gi\"\nevictionMinimumReclaim:\n memory.available: \"0Mi\"\n nodefs.available: \"500Mi\"\n imagefs.available: \"2Gi\"", "completion": "This yaml code 10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "10.12 - Node-pressure Eviction | In this example, if the nodefs.available signal meets the eviction threshold, the kubelet reclaims the resource until the signal reaches the threshold of 1GiB, and then continues to reclaim the minimu", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18647"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " 
\"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18648"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code 10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "language": "json", "type": "api-response_explanation", "context": "10.13 - API-initiated Eviction | Note:Deprecated in v1.22 in favor of policy/v1{ \"apiVersion\": \"policy/v1beta1\", \"kind\": \"Eviction\", \"metadata\": { \"name\": \"quux\", \"namespace\": \"default\" } }", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18649"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {", "completion": " \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "language": "json", "type": "api-response", "context": "10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18650"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"policy/v1beta1\",\n \"kind\": \"Eviction\",\n \"metadata\": {\n \"name\": \"quux\",\n \"namespace\": \"default\"\n }\n}", "completion": "This json code 10.13 - API-initiated Eviction | Alternatively, you can 
attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "language": "json", "type": "api-response_explanation", "context": "10.13 - API-initiated Eviction | Alternatively, you can attempt an eviction operation by accessing the API using curl or wget, similar to the following example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18651"} {"prompt": "Explain this text code:\n\ncurl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json", "completion": "This text code 10.13 - API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "language": "text", "type": "code-example_explanation", "context": "10.13 - API-initiated Eviction | When you request an eviction using the API, the API server performs admission checks and responds in one of the following ways:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18652"} {"prompt": "Complete this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000", "completion": " shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "language": "text", "type": "code-example", "context": "11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18653"} {"prompt": "Explain this text code:\n\nshutdownGracePeriodByPodPriority:\n - priority: 100000\n shutdownGracePeriodSeconds: 10\n - priority: 10000\n 
shutdownGracePeriodSeconds: 180\n - priority: 1000\n shutdownGracePeriodSeconds: 120\n - priority: 0\n shutdownGracePeriodSeconds: 60", "completion": "This text code 11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "language": "text", "type": "code-example_explanation", "context": "11.1 - Node Shutdowns | The above table implies that any pod with priority value >= 100000 will get just 10 seconds to shut down, any pod with value >= 10000 and < 100000 will get 180 seconds to shut down, any pod with value", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18654"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}<unknown>{{end}}{{\"\\n\"}}{{end}}'", "completion": "This bash code 11.2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18655"} {"prompt": "Explain this bash code:\n\nkubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap)]}{.metadata.name}{\"\\t\"}{.metadata.labels.feature\\.node\\.kubernetes\\.io/memory-swap}{\"\\n\"}{end}'", "completion": "This bash code 11.2 - Swap memory management | This will result in an output similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.2 - Swap memory management | This will result in an output similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18656"} {"prompt": "Complete this yaml 
code:\n\ndebug/counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter", "completion": "spec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18657"} {"prompt": "Explain this yaml code:\n\ndebug/counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18658"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18659"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": 
"pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18660"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml", "completion": "This bash code 11.8 - Logging Architecture | The output is:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.8 - Logging Architecture | The output is:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18661"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:23 UTC 2022\n1: Fri Apr 1 11:42:24 UTC 2022\n2: Fri Apr 1 11:42:25 UTC 2022", "completion": "This text code 11.8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. 
If your pod has multiple containers, specify which container's logs you want to access by appending a", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18662"} {"prompt": "Complete this yaml code:\n\ndebug/counter-pod-err.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err", "completion": "spec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18663"} {"prompt": "Explain this yaml code:\n\ndebug/counter-pod-err.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18664"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter-err\nspec:", "completion": " containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18665"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: 
Pod\nmetadata:\n name: counter-err\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args: [/bin/sh, -c,\n 'i=0; while true; do echo \"$i: $(date)\"; echo \"$i: err\" >&2 ; i=$((i+1)); sleep 1; done']", "completion": "This yaml code 11.8 - Logging Architecture | To run this pod, use the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | To run this pod, use the following command:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18666"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml", "completion": "This bash code 11.8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "language": "bash", "type": "kubectl-command_explanation", "context": "11.8 - Logging Architecture | To fetch only the stderr log stream, you can run:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18667"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr\"", "completion": "This bash code 11.8 - Logging Architecture | See the kubectl logs documentation for more details.", "language": "bash", "type": "kubectl-command_explanation", "context": "11.8 - Logging Architecture | See the kubectl logs documentation for more details.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18668"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >", "completion": " i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n 
emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18669"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18670"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;", "completion": " while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18671"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. Instead, you can create", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the stdout stream of the container. 
Instead, you can create", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18672"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));", "completion": " sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18673"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod-streaming-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: 
varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18674"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18675"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-1\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 
-F /var/log/1.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-log-2\n image: busybox:1.28\n args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n volumes:\n - name: varlog\n emptyDir: {}", "completion": "This yaml code 11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture | Now when you run this pod, you can access each log stream separately by running the following commands:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18676"} {"prompt": "Complete this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022", "completion": "2: Fri Apr 1 11:42:28 UTC 2022\n...", "language": "text", "type": "code-example", "context": "11.8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18677"} {"prompt": "Explain this text code:\n\n0: Fri Apr 1 11:42:26 UTC 2022\n1: Fri Apr 1 11:42:27 UTC 2022\n2: Fri Apr 1 11:42:28 UTC 2022\n...", "completion": "This text code 11.8 - Logging Architecture | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | The output is similar to:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18678"} {"prompt": "Complete this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0", "completion": "Fri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "language": "text", "type": "code-example", "context": "11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. 
If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18679"} {"prompt": "Explain this text code:\n\nFri Apr 1 11:42:29 UTC 2022 INFO 0\nFri Apr 1 11:42:30 UTC 2022 INFO 0\nFri Apr 1 11:42:31 UTC 2022 INFO 0\n...", "completion": "This text code 11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "language": "text", "type": "code-example_explanation", "context": "11.8 - Logging Architecture | If you installed a node-level agent in your cluster, that agent picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18680"} {"prompt": "Complete this yaml code:\n\nadmin/logging/fluentd-sidecar-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "language": "yaml", "type": "kubernetes-manifest", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18681"} {"prompt": "Explain this yaml code:\n\nadmin/logging/fluentd-sidecar-config.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "completion": "This yaml code 11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18682"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1", "completion": " </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "language": "yaml", "type": "kubernetes-manifest", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. 
The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18683"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: fluentd-config\ndata:\n fluentd.conf: |\n <source>\n type tail\n format none\n path /var/log/1.log\n pos_file /var/log/1.log.pos\n tag count.format1\n </source>\n\n <source>\n type tail\n format none\n path /var/log/2.log\n pos_file /var/log/2.log.pos\n tag count.format2\n </source>\n\n <match **>\n type google_cloud\n </match>", "completion": "This yaml code 11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.8 - Logging Architecture | The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18684"} {"prompt": "Complete this yaml code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "language": 
"yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18685"} {"prompt": "Explain this yaml code:\n\nadmin/logging/two-files-counter-pod-agent-sidecar.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 11.8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18686"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;", "completion": " done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", 
"language": "yaml", "type": "pod-definition", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18687"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: counter\nspec:\n containers:\n - name: count\n image: busybox:1.28\n args:\n - /bin/sh\n - -c\n - >\n i=0;\n while true;\n do\n echo \"$i: $(date)\" >> /var/log/1.log;\n echo \"$(date) INFO $i\" >> /var/log/2.log;\n i=$((i+1));\n sleep 1;\n done \n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: count-agent\n image: registry.k8s.io/fluentd-gcp:1.30\n env:\n - name: FLUENTD_ARGS\n value: -c /etc/fluentd-config/fluentd.conf\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n - name: config-volume\n mountPath: /etc/fluentd-config\n volumes:\n - name: varlog\n emptyDir: {}\n - name: config-volume\n configMap:\n name: fluentd-config", "completion": "This yaml code 11.8 - Logging Architecture |", "language": "yaml", "type": "pod-definition_explanation", "context": "11.8 - Logging Architecture |", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18688"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus", "completion": "rules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "language": "yaml", "type": "kubernetes-manifest", "context": "11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18689"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: prometheus\nrules:\n - nonResourceURLs:\n - \"/metrics\"\n verbs:\n - get", "completion": "This yaml code 11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric 
→ Deprecated metric → Hidden metric → Deleted metric", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18690"} {"prompt": "Explain this text code:\n\n\"metric1,label2\": \"v1,v2,v3\"\n\"metric2,label1\": \"v1,v2,v3\"", "completion": "This text code 11.10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "language": "text", "type": "code-example_explanation", "context": "11.10 - Metrics For Kubernetes System Components | Additionally, the cardinality_enforcement_unexpected_categorizations_total meta-metric records the count of unexpected categorizations during cardinality enforcement, that is, whenever a label value i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18691"} {"prompt": "Complete this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0", "completion": " for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "language": "text", "type": "code-example", "context": "11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. 
Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18692"} {"prompt": "Explain this text code:\n\ngroups:\n- name: Pod state\n rules:\n - alert: PodsBlockedInTerminatingState\n expr: count(kube_pod_deletion_timestamp) by (namespace, pod) * count(kube_pod_status_reason{reason=\"NodeLost\"} == 0) by (namespace, pod) > 0\n for: 5m\n labels:\n severity: page\n annotations:\n summary: Pod {{$labels.namespace}}/{{$labels.pod}} blocked in Terminating state.", "completion": "This text code 11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "language": "text", "type": "code-example_explanation", "context": "11.11 - Metrics for Kubernetes Object States | System component logs record events happening in cluster, which can be very useful for debugging. You can configure log verbosity to see more or less detail. Logs can be as coarse-grained as showing e", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18693"} {"prompt": "Complete this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)", "completion": " AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18694"} {"prompt": "Explain this text code:\n\n$ cd $GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/\n$ go run . --help\n...\n --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:\n AllAlpha=true|false (ALPHA - default=false)\n AllBeta=true|false (BETA - default=false)\n ContextualLogging=true|false (BETA - default=true)\n$ go run . 
--feature-gates ContextualLogging=true\n...\nI0222 15:13:31.645988 197901 example.go:54] \"runtime\" logger=\"example.myname\" foo=\"bar\" duration=\"1m0s\"\nI0222 15:13:31.646007 197901 example.go:55] \"another runtime\" logger=\"example\" foo=\"bar\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | The logger key and foo=\"bar\" were added by the caller of the function which logs the runtime message and duration=\"1m0s\" value, without having to modify that function.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18695"} {"prompt": "Complete this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...", "completion": "I0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "language": "text", "type": "code-example", "context": "11.12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18696"} {"prompt": "Explain this text code:\n\n$ go run . --feature-gates ContextualLogging=false\n...\nI0222 15:14:40.497333 198174 example.go:54] \"runtime\" duration=\"1m0s\"\nI0222 15:14:40.497346 198174 example.go:55] \"another runtime\" duration=\"1h0m0s\" duration=\"1m0s\"", "completion": "This text code 11.12 - System Logs | JSON output does not support many standard klog flags. 
For list of unsupported klog flags, see the Command line tool reference.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | JSON output does not support many standard klog flags. For list of unsupported klog flags, see the Command line tool reference.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18697"} {"prompt": "Complete this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{", "completion": " \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "language": "json", "type": "api-response", "context": "11.12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18698"} {"prompt": "Explain this json code:\n\n{\n \"ts\": 1580306777.04728,\n \"v\": 4,\n \"msg\": \"Pod status updated\",\n \"pod\":{\n \"name\": \"nginx-1\",\n \"namespace\": \"default\"\n },\n \"status\": \"ready\"\n}", "completion": "This json code 11.12 - System Logs | Keys with special meaning:", "language": "json", "type": "api-response_explanation", "context": "11.12 - System Logs | Keys with special meaning:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18699"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet\"", "completion": "This text code 11.12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. For example, you can fetch a log from /var/log on a Linux node:", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | You can also fetch files, provided that the files are in a directory that the kubelet allows for log fetches. 
For example, you can fetch a log from /var/log on a Linux node:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18700"} {"prompt": "Explain this bash code:\n\nkubectl get --raw \"/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>\"", "completion": "This bash code 11.12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "language": "bash", "type": "kubectl-command_explanation", "context": "11.12 - System Logs | The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is writing logs to the operating system's native logger like journald or to a log file in /", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18701"} {"prompt": "Explain this text code:\n\n# Fetch kubelet logs from a node named node-1.example that have the word \"error\"\nkubectl get --raw \"/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error\"", "completion": "This text code 11.12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "language": "text", "type": "code-example_explanation", "context": "11.12 - System Logs | System component traces record the latency of and relationships between operations in the cluster.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18702"} {"prompt": "Complete this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend", "completion": " debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "language": "text", "type": "code-example", "context": "11.13 - Traces For Kubernetes System
Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18703"} {"prompt": "Explain this text code:\n\nreceivers:\n otlp:\n protocols:\n grpc:\nexporters:\n # Replace this exporter with the exporter for your backend\n debug:\n verbosity: detailed\nservice:\n pipelines:\n traces:\n receivers: [otlp]\n exporters: [debug]", "completion": "This text code 11.13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address. This method negates", "language": "text", "type": "code-example_explanation", "context": "11.13 - Traces For Kubernetes System Components | To directly emit traces to a backend without utilizing a collector, specify the endpoint field in the Kubernetes tracing configuration file with the desired trace backend address.
This method negates", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18704"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration", "completion": "# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18705"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: TracingConfiguration\n# default value\n#endpoint: localhost:4317\nsamplingRatePerMillion: 100", "completion": "This yaml code 11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.13 - Traces For Kubernetes System Components | For more information about the TracingConfiguration struct, see API server config API (v1).", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18706"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:", "completion": " # default value\n #endpoint: localhost:4317\n samplingRatePerMillion: 100", "language": "yaml", "type": "kubernetes-manifest", "context": "11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18707"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ntracing:\n # default value\n #endpoint: localhost:4317\n 
samplingRatePerMillion: 100", "completion": "This yaml code 11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.13 - Traces For Kubernetes System Components | If the samplingRatePerMillion is set to one million (1000000), then every span will be sent to the exporter.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18708"} {"prompt": "Explain this text code:\n\nkube-apiserver \\\n--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \\\n # …and other flags as usual", "completion": "This text code 11.15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "language": "text", "type": "code-example_explanation", "context": "11.15 - API Priority and Fairness | The command-line flag --enable-priority-and-fairness=false will disable the API Priority and Fairness feature.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18709"} {"prompt": "Complete this yaml code:\n\npriority-and-fairness/health-for-strangers.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:", "completion": " - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18710"} {"prompt": "Explain this yaml code:\n\npriority-and-fairness/health-for-strangers.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18711"} {"prompt": "Complete this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:", "completion": " - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "language": "yaml", "type": "kubernetes-manifest", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18712"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: health-for-strangers\nspec:\n matchingPrecedence: 1000\n priorityLevelConfiguration:\n name: exempt\n rules:\n - nonResourceRules:\n - nonResourceURLs:\n - \"/healthz\"\n - \"/livez\"\n - \"/readyz\"\n verbs:\n - \"*\"\n subjects:\n - kind: Group\n group:\n name: \"system:unauthenticated\"", "completion": "This yaml code 11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling i", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "11.15 - API Priority and Fairness | When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. 
Monitoring these can help you determine whether your configuration is inappropriately throttling i", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18713"} {"prompt": "Complete this yaml code:\n\npriority-and-fairness/list-events-default-service-account.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18714"} {"prompt": "Explain this yaml code:\n\npriority-and-fairness/list-events-default-service-account.yaml\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18715"} {"prompt": "Complete this yaml code:\n\napiVersion: 
flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:", "completion": " - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "language": "yaml", "type": "service-definition", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18716"} {"prompt": "Explain this yaml code:\n\napiVersion: flowcontrol.apiserver.k8s.io/v1\nkind: FlowSchema\nmetadata:\n name: list-events-default-service-account\nspec:\n distinguisherMethod:\n type: ByUser\n matchingPrecedence: 8000\n priorityLevelConfiguration:\n name: catch-all\n rules:\n - resourceRules:\n - apiGroups:\n - '*'\n namespaces:\n - default\n resources:\n - events\n verbs:\n - list\n subjects:\n - kind: ServiceAccount\n serviceAccount:\n name: default\n namespace: default", "completion": "This yaml code 11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "language": "yaml", "type": "service-definition_explanation", "context": "11.15 - API Priority and Fairness | Add-ons extend the functionality of Kubernetes.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18717"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver", "completion": " name: win-webserver\nspec:\n replicas: 2\n 
selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18718"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: win-webserver\n labels:\n app: 
win-webserver\nspec:\n ports:\n # the port that this service should serve on\n - port: 80\n targetPort: 80\n selector:\n app: win-webserver\n type: NodePort\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: win-webserver\n name: win-webserver\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: win-webserver\n template:\n metadata:\n labels:\n app: win-webserver\n name: win-webserver\n spec:\n containers:\n - name: windowswebserver\n image: mcr.microsoft.com/windows/servercore:ltsc2019\n command:\n - powershell.exe\n - -command\n - \"<#code used from https://gist.github.com/19WAS85/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; \"\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes 
are healthy:", "language": "yaml", "type": "deployment-definition_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Check that all nodes are healthy:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18719"} {"prompt": "Explain this bash code:\n\nkubectl apply -f win-webserver.yaml\nkubectl get pods -o wide -w", "completion": "This bash code 12.2 - Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "language": "bash", "type": "kubectl-command_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | When the service is deployed correctly both Pods are marked as Ready. To exit the watch command, press Ctrl+C.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18720"} {"prompt": "Complete this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:", "completion": " - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "language": "text", "type": "code-example", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18721"} {"prompt": "Explain this text code:\n\nnodeSelector:\n kubernetes.io/os: windows\n node.kubernetes.io/windows-build: '10.0.17763'\ntolerations:\n - key: \"os\"\n operator: \"Equal\"\n value: \"windows\"\n effect: \"NoSchedule\"", "completion": "This text code 12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. 
If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "language": "text", "type": "code-example_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | The Windows Server version used by each pod must match that of the node. If you want to use multiple Windows Server versions in the same cluster, then you should set additional node labels and nodeSel", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18722"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:", "completion": " kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "language": "yaml", "type": "kubernetes-manifest", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18723"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n name: windows-2019\nhandler: example-container-runtime-handler\nscheduling:\n nodeSelector:\n kubernetes.io/os: 'windows'\n kubernetes.io/arch: 'amd64'\n node.kubernetes.io/windows-build: '10.0.17763'\n tolerations:\n - effect: NoSchedule\n key: os\n operator: Equal\n value: \"windows\"", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml as a cluster administrator", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Run kubectl create -f runtimeClasses.yml
as a cluster administrator", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18724"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:", "completion": " cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "language": "yaml", "type": "deployment-definition", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible.
As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18725"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: iis-2019\n labels:\n app: iis-2019\nspec:\n replicas: 1\n template:\n metadata:\n name: iis-2019\n labels:\n app: iis-2019\n spec:\n runtimeClassName: windows-2019\n containers:\n - name: iis\n image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n resources:\n limits:\n cpu: 1\n memory: 800Mi\n requests:\n cpu: .1\n memory: 300Mi\n ports:\n - containerPort: 80\n selector:\n matchLabels:\n app: iis-2019\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: iis\nspec:\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n selector:\n app: iis-2019", "completion": "This yaml code 12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "language": "yaml", "type": "deployment-definition_explanation", "context": "12.2 - Guide for Running Windows Containers in Kubernetes | Kubernetes is highly configurable and extensible. 
As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18726"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "13.1.1 - Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18727"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\"portMappings\": true},\n \"externalSetMarkChain\": \"KUBE-MARK-MASQ\"\n }\n ]\n}", "completion": "This json code 13.1.1 - Network Plugins | Experimental Feature", "language": "json", "type": "api-response_explanation", "context": "13.1.1 - Network Plugins | Experimental Feature", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18728"} {"prompt": "Complete this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n 
\"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },", "completion": " \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18729"} {"prompt": "Explain this json code:\n\n{\n \"name\": \"k8s-pod-network\",\n \"cniVersion\": \"0.4.0\",\n \"plugins\": [\n {\n \"type\": \"calico\",\n \"log_level\": \"info\",\n \"datastore_type\": \"kubernetes\",\n \"nodename\": \"127.0.0.1\",\n \"ipam\": {\n \"type\": \"host-local\",\n \"subnet\": \"usePodCidr\"\n },\n \"policy\": {\n \"type\": \"k8s\"\n },\n \"kubernetes\": {\n \"kubeconfig\": \"/etc/cni/net.d/calico-kubeconfig\"\n }\n },\n {\n \"type\": \"bandwidth\",\n \"capabilities\": {\"bandwidth\": true}\n }\n ]\n}", "completion": "This json code 13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. For example:", "language": "json", "type": "api-response_explanation", "context": "13.1.1 - Network Plugins | Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your Pod. 
For example:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18730"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "language": "yaml", "type": "pod-definition", "context": "13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18731"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n kubernetes.io/ingress-bandwidth: 1M\n kubernetes.io/egress-bandwidth: 1M\n...", "completion": "This yaml code 13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "language": "yaml", "type": "pod-definition_explanation", "context": "13.1.1 - Network Plugins | Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18732"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8", "completion": " resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "language": "yaml", "type": "pod-definition", "context": "13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/_print/", 
"id": "k8s_00617_code_completion_18733"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8\n resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "completion": "This yaml code 13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "language": "yaml", "type": "pod-definition_explanation", "context": "13.1.2 - Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18734"} {"prompt": "Complete this yaml code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:", "completion": " type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18735"} {"prompt": "Explain this yaml 
code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code 13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18736"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object", "completion": " properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": 
"https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_completion_18737"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code 13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "13.2.1 - Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18738"} {"prompt": "Explain this bash code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This bash code 13.2.1 - Custom Resources | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "13.2.1 - Custom Resources | The output should be:", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18739"} {"prompt": "Explain this bash code:\n\nkubectl get SampleDB # find configured databases\n\nkubectl edit SampleDB/example-database # manually change some settings", "completion": "This bash code 13.3 - Operator pattern | …and that's it! 
The operator will take care of applying the changes as well as keeping the existing service in good shape.", "language": "bash", "type": "kubectl-command_explanation", "context": "13.3 - Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "source": "https://kubernetes.io/docs/concepts/_print/", "id": "k8s_00617_code_explain_18740"} {"prompt": "Explain this bash code:\n\nkubectl create -f docker-registry.yaml --edit -o json", "completion": "This bash code create | Create a resource from a file or from stdin.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a resource from a file or from stdin.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18741"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18742"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a cluster role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18743"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash 
code create | Create a cluster role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18744"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18745"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code create | Create a cluster role name \"monitoring\" with AggregationRule specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18746"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code create | Create a cluster role.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18747"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a cluster role binding for a particular cluster role.", 
"language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role binding for a particular cluster role.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18748"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18749"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt", "completion": "This bash code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18750"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2", "completion": "This bash code create | Create a new config map named my-config from the key=value pairs in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18751"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map 
named my-config from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from an env file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18752"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a config map based on a file, directory, or specified literal value.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a config map based on a file, directory, or specified literal value.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18753"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": "This bash code create | Create a cron job with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with a command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18754"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "completion": "This bash code create | Create a cron job with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18755"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox -- date", "completion": "This bash code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 
replicas", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18756"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=nginx --replicas=3", "completion": "This bash code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18757"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox --port=5701", "completion": "This bash code create | Create a deployment with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18758"} {"prompt": "Explain this bash code:\n\nkubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"", "completion": "This bash code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18759"} {"prompt": "Explain this bash code:\n\nkubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"", "completion": "This bash code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and 
ingress.annotations2", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18760"} {"prompt": "Explain this bash code:\n\nkubectl create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n--annotation ingress.annotation1=foo \\\n--annotation ingress.annotation2=bla", "completion": "This bash code create | Create an ingress with the same host and multiple paths", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18761"} {"prompt": "Explain this bash code:\n\nkubectl create ingress multipath --class=default \\\n--rule=\"foo.com/=svc:port\" \\\n--rule=\"foo.com/admin/=svcadmin:portadmin\"", "completion": "This bash code create | Create an ingress with multiple hosts and the pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18762"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingress1 --class=default \\\n--rule=\"foo.com/path*=svc:8080\" \\\n--rule=\"bar.com/admin*=svc2:http\"", "completion": "This bash code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18763"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingtls --class=default \\\n--rule=\"foo.com/=svc:https,tls\" \\\n--rule=\"foo.com/path/subpath*=othersvc:8080\"", 
"completion": "This bash code create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18764"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingsecret --class=default \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with a default backend", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with a default backend", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18765"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingdefault --class=default \\\n--default-backend=defaultsvc:http \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18766"} {"prompt": "Explain this bash code:\n\nkubectl create job test-job --from=cronjob/a-cronjob", "completion": "This bash code create | Create a job with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a job with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18767"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx 
label and require at least half of the pods selected to be available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18768"} {"prompt": "Explain this bash code:\n\nkubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "completion": "This bash code create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18769"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\"", "completion": "This bash code create | Create a priority class named default-priority that is considered as the global default priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18770"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"", "completion": "This bash code create | Create a priority class named high-priority that cannot preempt 
pods with lower priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18771"}
{"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This bash code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18772"}
{"prompt": "Explain this bash code:\n\nkubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": "This bash code create | Create a new resource quota named my-quota", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named my-quota", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18773"}
{"prompt": "Explain this bash code:\n\nkubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "completion": "This bash code create | Create a new resource quota named best-effort", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named best-effort", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18774"}
{"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18775"}
{"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18776"}
{"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18777"}
{"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18778"}
{"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18779"}
{"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": "This bash code create | If you do not already have a .dockercfg file, create a dockercfg secret directly", "language": "bash", "type": "kubectl-command_explanation", "context": "create | If you do not already have a .dockercfg file, create a dockercfg secret directly", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18780"}
{"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "completion": "This bash code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18781"}
{"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=path/to/bar", "completion": "This bash code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18782"}
{"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub", "completion": "This bash code create | Create a new secret named my-secret with specified keys instead of names on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with specified keys instead of names on disk", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18783"}
{"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret", "completion": "This bash code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18784"}
{"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret", "completion": "This bash code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18785"}
{"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a new secret named my-secret from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from an env file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18786"}
{"prompt": "Explain this bash code:\n\nkubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "completion": "This bash code create | Create a new TLS secret named tls-secret with the given key pair", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18787"}
{"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --tcp=5678:8080", "completion": "This bash code create | Create a new ClusterIP service named my-cs", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18788"}
{"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This bash code create | Create a new ClusterIP service named my-cs (in headless mode)", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs (in headless mode)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18789"}
{"prompt": "Explain this bash code:\n\nkubectl create service externalname my-ns --external-name bar.com", "completion": "This bash code create | Create a new ExternalName service named my-ns", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ExternalName service named my-ns", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18790"}
{"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This bash code create | Create a new LoadBalancer service named my-lbs", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new LoadBalancer service named my-lbs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18791"}
{"prompt": "Explain this bash code:\n\nkubectl create service nodeport my-ns --tcp=5678:8080", "completion": "This bash code create | Create a new NodePort service named my-ns", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new NodePort service named my-ns", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18792"}
{"prompt": "Explain this bash code:\n\nkubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}", "completion": "This bash code get | Return only the phase value of the specified pod", "language": "bash", "type": "kubectl-command_explanation", "context": "get | Return only the phase value of the specified pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18793"}
{"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image", "completion": "This bash code get | List resource information in custom columns", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List resource information in custom columns", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18794"}
{"prompt": "Explain this bash code:\n\nkubectl get rc/web service/frontend pods/web-pod-13je7", "completion": "This bash code get | List one or more resources by their type and names", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List one or more resources by their type and names", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18795"}
{"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --port=5701", "completion": "This bash code run | Start a hazelcast pod and let the container expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and let the container expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18796"}
{"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"", "completion": "This bash code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18797"}
{"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"", "completion": "This bash code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18798"}
{"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... 
} }'", "completion": "This bash code run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18799"}
{"prompt": "Explain this bash code:\n\nkubectl run -i -t busybox --image=busybox --restart=Never", "completion": "This bash code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18800"}
{"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>", "completion": "This bash code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18801"}
{"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "completion": "This bash code run | Start the nginx pod using a different command and custom arguments", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using a different command and custom arguments", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18802"}
{"prompt": "Explain this bash code:\n\nkubectl expose rc nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18803"}
{"prompt": "Explain this bash code:\n\nkubectl expose -f nginx-controller.yaml --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18804"}
{"prompt": "Explain this bash code:\n\nkubectl expose pod valid-pod --port=444 --name=frontend", "completion": "This bash code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18805"}
{"prompt": "Explain this bash code:\n\nkubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https", "completion": "This bash code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18806"}
{"prompt": "Explain this bash code:\n\nkubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream", "completion": "This bash code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18807"}
{"prompt": "Explain this bash code:\n\nkubectl expose rs nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18808"}
{"prompt": "Explain this bash code:\n\nkubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18809"}
{"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml -l app=nginx", "completion": "This bash code apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18810"}
{"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap", "completion": "This bash code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18811"}
{"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | Edit the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18812"}
{"prompt": "Explain this bash code:\n\nkubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "completion": "This bash code apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18813"}
{"prompt": "Explain this bash code:\n\nkubectl apply view-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | View the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | View the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18814"}
{"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend'; if the same annotation is set multiple times, only the last value will be applied", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend'; if the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18815"}
{"prompt": "Explain this bash code:\n\nkubectl annotate -f pod.json description='my frontend'", "completion": "This bash code annotate | Update a pod identified by type and name in \"pod.json\"", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18816"}
{"prompt": "Explain this bash code:\n\nkubectl annotate --overwrite pods foo description='my frontend running nginx'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18817"}
{"prompt": "Explain this bash code:\n\nkubectl annotate pods --all description='my frontend running nginx'", "completion": "This bash code annotate | Update all pods in the namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update all pods in the namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18818"}
{"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend running nginx' --resource-version=1", "completion": "This bash code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18819"}
{"prompt": "Explain this bash code:\n\nkubectl debug --image=myproj/debug-tools -c debugger mypod", "completion": "This bash code debug | Create a debug container named debugger using a custom automated debugging image", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18820"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": "This bash code debug | Create a copy of mypod adding a debug container and attach to it", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18821"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh", "completion": "This bash code debug | Create a copy of mypod changing the command of mycontainer", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18822"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod --copy-to=my-debugger --set-image=*=busybox", "completion": "This bash code debug | Create a copy of mypod changing all container images to busybox", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18823"}
{"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug", "completion": "This bash code debug | Create a copy of mypod adding a debug container and changing container images", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18824"}
{"prompt": "Explain this text code:\n\nKUBE_EDITOR=\"nano\" kubectl edit svc/docker-registry", "completion": "This text code edit | Use an alternative editor", "language": "text", "type": "code-example_explanation", "context": "edit | Use an alternative editor", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18825"}
{"prompt": "Explain this bash code:\n\nkubectl edit deployment/mydeployment -o yaml --save-config", "completion": "This bash code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "bash", "type": "kubectl-command_explanation", "context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18826"}
{"prompt": "Explain this bash code:\n\nkubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "completion": "This bash code kustomize | Build a kustomization target from a github repo", "language": "bash", "type": "kubectl-command_explanation", "context": "kustomize | Build a kustomization target from a github repo", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18827"}
{"prompt": "Explain this bash code:\n\nkubectl label --overwrite pods foo status=unhealthy", "completion": "This bash code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18828"}
{"prompt": "Explain this bash code:\n\nkubectl label pods foo status=unhealthy --resource-version=1", "completion": "This bash code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18829"}
{"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18830"}
{"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18831"}
{"prompt": "Explain this bash code:\n\nkubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18832"}
{"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'", "completion": "This bash code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18833"}
{"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'", "completion": "This bash code patch | Update a container's image using a JSON patch with positional arrays", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18834"}
{"prompt": "Explain this bash code:\n\nkubectl get pod mypod -o yaml | 
sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -", "completion": "This bash code replace | Update a single-container pod's image version (tag) to v4", "language": "bash", "type": "kubectl-command_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18835"}
{"prompt": "Explain this bash code:\n\nkubectl rollout undo --dry-run=server deployment/abc", "completion": "This bash code rollout | Roll back to the previous deployment with dry-run", "language": "bash", "type": "kubectl-command_explanation", "context": "rollout | Roll back to the previous deployment with dry-run", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18836"}
{"prompt": "Explain this bash code:\n\nkubectl scale --current-replicas=2 --replicas=3 deployment/mysql", "completion": "This bash code scale | If the deployment named mysql's current size is 2, scale mysql to 3", "language": "bash", "type": "kubectl-command_explanation", "context": "scale | If the deployment named mysql's current size is 2, scale mysql to 3", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18837"}
{"prompt": "Explain this bash code:\n\nkubectl set env deployment/registry STORAGE_DIR=/local", "completion": "This bash code set | Update deployment 'registry' with a new environment variable", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update deployment 'registry' with a new environment variable", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18838"}
{"prompt": "Explain this bash code:\n\nkubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml", "completion": "This bash code set | Output modified deployment in YAML, and does not alter the object on the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Output modified deployment in YAML, and does not alter the object on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18839"}
{"prompt": "Explain this bash code:\n\nkubectl set env --from=secret/mysecret deployment/myapp", "completion": "This bash code set | Import environment from a secret", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a secret", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18840"}
{"prompt": "Explain this bash code:\n\nkubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp", "completion": "This bash code set | Import environment from a config map with a prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a config map with a prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18841"}
{"prompt": "Explain this bash code:\n\nkubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp", "completion": "This bash code set | Import specific keys from a config map", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import specific keys from a config map", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18842"}
{"prompt": "Explain this bash code:\n\nkubectl set env deployments --all --containers=\"c1\" ENV-", "completion": "This bash code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18843"}
{"prompt": "Explain this text code:\n\nenv | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code set | Set some of the local shell environment into a deployment config on the server", "language": "text", "type": "code-example_explanation", "context": "set | Set some of the local shell environment into a deployment config on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18844"}
{"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1", "completion": "This bash code set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18845"}
{"prompt": "Explain this bash code:\n\nkubectl set image deployments,rc nginx=nginx:1.9.1 --all", "completion": "This bash code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18846"}
{"prompt": "Explain this bash code:\n\nkubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "completion": "This bash code set | Print result (in yaml format) of updating nginx container image from a local file, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print result (in yaml format) of updating nginx container image from a local file, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18847"}
{"prompt": "Explain this bash code:\n\nkubectl set resources 
deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code set | Set a deployments nginx container cpu limits to '200m' and memory to '512Mi'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployments nginx container cpu limits to '200m' and memory to '512Mi'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18848"} 
{"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": "This bash code set | Set the resource request and limits for all containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18849"} 
{"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0", "completion": "This bash code set | Remove the resource requests for resources on containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the resource requests for resources on containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18850"} 
{"prompt": "Explain this bash code:\n\nkubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This bash code set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18851"} 
{"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\nkubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This bash code set | Set the labels and selector before creating a deployment/service pair", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the labels and selector before creating a deployment/service pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18852"} 
{"prompt": "Explain this bash code:\n\nkubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": "This bash code set | Set deployment nginx-deployment's service account to serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set deployment nginx-deployment's service account to serviceaccount1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18853"} 
{"prompt": "Explain this bash code:\n\nkubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18854"} 
{"prompt": "Explain this bash code:\n\nkubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1", "completion": "This bash code set | Update a cluster role binding for serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a cluster role binding for serviceaccount1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18855"} 
{"prompt": "Explain this bash code:\n\nkubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1", "completion": "This bash code set | Update a role binding for user1, user2, and group1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a role binding for user1, user2, and group1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18856"} 
{"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18857"} 
{"prompt": "Explain this bash code:\n\nkubectl wait --for=condition=Ready=false pod/busybox1", "completion": "This bash code wait | The default value of status condition is true; you can wait for other targets after an equal delimiter", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | The default value of status condition is true; you can wait for other targets after an equal delimiter", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18858"} 
{"prompt": "Explain this bash code:\n\nkubectl delete pod/busybox1\nkubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This bash code wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18859"} {"prompt": "Explain this text code:\n\ntar cf - 
/tmp/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18860"} 
{"prompt": "Explain this bash code:\n\nkubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18861"} 
{"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18862"} 
{"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18863"} 
{"prompt": "Explain this bash code:\n\nkubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18864"} 
{"prompt": "Explain this bash code:\n\nkubectl describe nodes kubernetes-node-emt8.c.myproject.internal", "completion": "This bash code describe | Describe a node", "language": "bash", "type": "kubectl-command_explanation", "context": "describe | Describe a node", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18865"} 
{"prompt": "Explain this bash code:\n\nkubectl exec mypod -c ruby-container -i -t -- bash -il", "completion": "This bash code exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "language": "bash", "type": "kubectl-command_explanation", "context": "exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18866"} 
{"prompt": "Explain this bash code:\n\nkubectl logs --insecure-skip-tls-verify-backend nginx", "completion": "This bash code logs | Show logs from a kubelet with an expired serving certificate", "language": "bash", "type": "kubectl-command_explanation", "context": "logs | Show logs from a kubelet with an expired serving certificate", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18867"} 
{"prompt": "Explain this bash code:\n\nkubectl port-forward deployment/mydeployment 5000 6000", "completion": "This bash code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18868"} 
{"prompt": "Explain this bash code:\n\nkubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18869"} 
{"prompt": "Explain this bash code:\n\nkubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18870"} 
{"prompt": "Explain this bash code:\n\nkubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/", "completion": "This bash code proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "language": "bash", "type": "kubectl-command_explanation", "context": "proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18871"} 
{"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump current cluster state to /path/to/cluster-state", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump current cluster state to /path/to/cluster-state", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18872"} 
{"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump a set of namespaces to /path/to/cluster-state", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump a set of namespaces to /path/to/cluster-state", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18873"} 
{"prompt": "Explain this bash code:\n\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18874"} 
{"prompt": "Explain this bash code:\n\nkubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule", "completion": "This bash code taint | Add a taint with key 'dedicated' on nodes having label myLabel=X", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Add a taint with key 'dedicated' on nodes having label myLabel=X", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18875"} 
{"prompt": "Explain this bash code:\n\nkubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl", "completion": "This bash code completion | If kubectl is installed via homebrew, this should start working immediately; if you've installed via other means, you may need to add the completion to your completion directory", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | If kubectl is installed via homebrew, this should start working immediately; if you've installed via other means, you may need to add the completion to your completion directory", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18876"} 
{"prompt": "Explain this bash code:\n\nkubectl completion bash > ~/.kube/completion.bash.inc\nprintf \"", "completion": "This bash code completion | Write bash completion code to a file and source it from .bash_profile", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18877"} 
{"prompt": "Explain this text code:\n\nsource '$HOME/.kube/completion.bash.inc'\n\" >> $HOME/.bash_profile\nsource $HOME/.bash_profile", "completion": "This text code completion | Append the kubectl completion source line to .bash_profile and reload it", "language": "text", "type": "code-example_explanation", "context": "completion | Append the kubectl completion source line to .bash_profile and reload it", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18878"} 
{"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.server https://1.2.3.4", "completion": "This bash code config | Set the server field on the my-cluster cluster to https://1.2.3.4", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the server field on the my-cluster cluster to https://1.2.3.4", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18879"} 
{"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)", "completion": "This bash code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_18880"} 
{"prompt": "Explain this bash code:\n\nkubectl config set contexts.my-context.cluster my-cluster", "completion": "This bash code config | Set the cluster field in the my-context context to my-cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18881"} 
{"prompt": "Explain this bash code:\n\nkubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "completion": "This bash code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18882"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --server=https://1.2.3.4", "completion": "This bash code config | Set only the server field on the e2e cluster entry without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set only the server field on the e2e cluster entry without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18883"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt", "completion": "This bash code config | Embed certificate authority data for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18884"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --insecure-skip-tls-verify=true", "completion": "This bash code config | Disable cert checking for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Disable cert checking for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18885"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --tls-server-name=my-cluster-name", "completion": "This bash code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18886"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-context gce --user=cluster-admin", "completion": "This bash code config | Set the user field on the gce context entry without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the user field on the gce context entry without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18887"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key", "completion": "This bash code config | Set only the \"client-key\" field on the \"cluster-admin\" entry, without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set only the \"client-key\" field on the \"cluster-admin\" entry, without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18888"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif", "completion": "This bash code config | Set basic auth for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set basic auth for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18889"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true", "completion": "This bash code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18890"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=gcp", "completion": "This bash code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18891"} 
{"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar", "completion": "This bash code config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18892"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-", 
"completion": "This bash code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18893"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1", "completion": "This bash code config | Define new exec auth plugin args for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Define new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18894"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2", "completion": "This bash code config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18895"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2", "completion": "This bash code config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18896"} {"prompt": "Explain this bash code:\n\nkubectl config 
set-credentials cluster-admin --exec-env=var-to-remove-", "completion": "This bash code config | Set a user entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a user entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18897"} {"prompt": "Explain this bash code:\n\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This bash code config | Display merged kubeconfig settings or a specified kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Display merged kubeconfig settings or a specified kubeconfig file.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18898"} {"prompt": "Explain this bash code:\n\nkubectl create -f docker-registry.yaml --edit -o json", "completion": "This bash code create | Create a resource from a file or from stdin.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a resource from a file or from stdin.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18899"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18900"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a cluster role named \"foo\" with API 
Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18901"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a cluster role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18902"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18903"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code create | Create a cluster role name \"monitoring\" with AggregationRule specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18904"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code create | Create a cluster role.", "language": "bash", "type": "kubectl-command_explanation", 
"context": "create | Create a cluster role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18905"} {"prompt": "Explain this text code:\n\n$ kubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This text code create | Create a cluster role.", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18906"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18907"} {"prompt": "Explain this text code:\n\n$ kubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This text code create | Create a cluster role binding for a particular cluster role.", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role binding for a particular cluster role.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18908"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config based on folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config based on folder bar", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18909"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt", "completion": "This bash code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18910"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2", "completion": "This bash code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18911"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config from the key=value pairs in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18912"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a new config map named my-config from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from an env file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18913"} {"prompt": "Explain this text code:\n\n$ kubectl create configmap NAME [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]", "completion": "This text code create | Create a config map based on a file, directory, or specified literal value.", "language": "text", "type": "code-example_explanation", "context": "create | Create a config map based on a file, directory, or specified literal value.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18914"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": "This bash code create | Create a cron job", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18915"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "completion": "This bash code create | Create a cron job with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with a command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18916"} {"prompt": "Explain this text code:\n\n$ kubectl create cronjob NAME --image=image --schedule='0/5 * * * ?' -- [COMMAND] [args...]", "completion": "This text code create | Create a cron job with the specified name.", "language": "text", "type": "code-example_explanation", "context": "create | Create a cron job with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18917"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox -- date", "completion": "This bash code create | Create a deployment with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment with a command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18918"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=nginx --replicas=3", "completion": "This bash code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18919"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox --port=5701", "completion": "This bash code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18920"} {"prompt": "Explain this text code:\n\n$ kubectl create deployment NAME --image=image -- [COMMAND] [args...]", "completion": "This text code create | Create a deployment with the specified name.", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18921"} {"prompt": "Explain this bash code:\n\nkubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"", "completion": "This bash code create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc1:8080 with a tls secret \"my-cert\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc1:8080 with a tls secret \"my-cert\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18922"} {"prompt": "Explain this bash code:\n\nkubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"", "completion": "This bash code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18923"} {"prompt": "Explain this bash code:\n\nkubectl create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n--annotation ingress.annotation1=foo \\\n--annotation ingress.annotation2=bla", "completion": "This bash code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_18924"} {"prompt": "Explain this bash code:\n\nkubectl create ingress multipath --class=default \\\n--rule=\"foo.com/=svc:port\" \\\n--rule=\"foo.com/admin/=svcadmin:portadmin\"", "completion": "This bash code create | Create an ingress with the same host and multiple paths", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18925"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingress1 --class=default \\\n--rule=\"foo.com/path*=svc:8080\" \\\n--rule=\"bar.com/admin*=svc2:http\"", "completion": "This bash code create | Create an ingress with multiple hosts and the pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18926"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingtls --class=default \\\n--rule=\"foo.com/=svc:https,tls\" \\\n--rule=\"foo.com/path/subpath*=othersvc:8080\"", "completion": "This bash code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18927"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingsecret --class=default \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "bash", "type": 
"kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18928"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingdefault --class=default \\\n--default-backend=defaultsvc:http \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with a default backend", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with a default backend", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18929"} {"prompt": "Explain this text code:\n\n$ kubectl create ingress NAME --rule=host/path=service:port[,tls[=secret]]", "completion": "This text code create | Create an ingress with the specified name.", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18930"} {"prompt": "Explain this bash code:\n\nkubectl create job test-job --from=cronjob/a-cronjob", "completion": "This bash code create | Create a job from a cron job named \"a-cronjob\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a job from a cron job named \"a-cronjob\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18931"} {"prompt": "Explain this text code:\n\n$ kubectl create job NAME --image=image [--from=cronjob/name] -- [COMMAND] [args...]", "completion": "This text code create | Create a job with the specified name.", "language": "text", "type": "code-example_explanation", "context": "create | Create a job with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18932"} {"prompt": "Explain this text code:\n\n$ kubectl create namespace NAME [--dry-run=server|client|none]", "completion": "This text code create | Create a namespace with the specified name.", "language": "text", "type": "code-example_explanation", "context": "create | Create a namespace with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18933"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label and require at least one of them being available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label and require at least one of them being available at any point in time", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18934"} {"prompt": "Explain this bash code:\n\nkubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label and require at least half of the pods selected to be available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18935"} {"prompt": "Explain this text code:\n\n$ kubectl create poddisruptionbudget NAME --selector=SELECTOR --min-available=N [--dry-run=server|client|none]", "completion": "This text code create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "language": "text", "type": "code-example_explanation", "context": "create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18936"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\"", "completion": "This bash code create | Create a priority class named high-priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18937"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"", "completion": "This bash code create | Create a priority class named default-priority that is considered as the global default priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18938"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This bash code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18939"} {"prompt": "Explain this text code:\n\n$ kubectl create priorityclass NAME --value=VALUE --global-default=BOOL [--dry-run=server|client|none]", "completion": "This text code create | Create a priority class with the specified name, value, globalDefault and description.", "language": "text", "type": "code-example_explanation", "context": "create | Create a priority class with the specified name, value, globalDefault and description.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18940"} {"prompt": "Explain this bash code:\n\nkubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": "This bash code create | Create a new resource quota named my-quota", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named my-quota", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18941"} {"prompt": "Explain this bash code:\n\nkubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", "completion": "This bash code create | Create a new resource quota named best-effort", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named best-effort", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18942"} {"prompt": "Explain this text code:\n\n$ kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=server|client|none]", "completion": "This text code create | Create a resource quota with the specified name, hard limits, and optional scopes.", "language": "text", "type": "code-example_explanation", "context": "create | Create a resource quota with the specified name, hard limits, and optional scopes.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18943"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18944"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18945"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18946"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_18947"} {"prompt": "Explain this text code:\n\n$ kubectl create role NAME --verb=verb --resource=resource.group/subresource [--resource-name=resourcename] [--dry-run=server|client|none]", "completion": "This text code create | Create a role with single rule.", "language": "text", "type": "code-example_explanation", "context": "create | Create a role with single rule.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18948"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18949"} {"prompt": "Explain this text code:\n\n$ kubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This text code create | Create a role binding for a particular role or cluster role.", "language": "text", "type": "code-example_explanation", "context": "create | Create a role binding for a particular role or cluster role.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18950"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": "This bash code create | Create a secret using specified subcommand.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a secret using specified subcommand.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18951"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "completion": "This bash code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18952"} {"prompt": "Explain this text code:\n\n$ kubectl create secret docker-registry NAME --docker-username=user --docker-password=password --docker-email=email [--docker-server=string] [--from-file=[key=]source] [--dry-run=server|client|none]", "completion": "This text code create | Create a new secret for use with Docker registries.", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret for use with Docker registries.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18953"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=path/to/bar", "completion": "This bash code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18954"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub", 
"completion": "This bash code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18955"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret", "completion": "This bash code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18956"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret", "completion": "This bash code create | Create a new secret named my-secret from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from an env file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18957"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a secret based on a file, directory, or specified literal value.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a secret based on a file, directory, or specified literal value.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18958"} {"prompt": "Explain this text code:\n\n$ kubectl create generic NAME [--type=string] 
[--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]", "completion": "This text code create | Create a new TLS secret named tls-secret with the given key pair", "language": "text", "type": "code-example_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18959"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "completion": "This bash code create | Create a TLS secret from the given public/private key pair.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a TLS secret from the given public/private key pair.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18960"} {"prompt": "Explain this text code:\n\n$ kubectl create secret tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]", "completion": "This text code create | Create a service using a specified subcommand.", "language": "text", "type": "code-example_explanation", "context": "create | Create a service using a specified subcommand.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18961"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --tcp=5678:8080", "completion": "This bash code create | Create a new ClusterIP service named my-cs (in headless mode)", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs (in headless mode)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18962"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This bash code create | 
Create a ClusterIP service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a ClusterIP service with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18963"} {"prompt": "Explain this text code:\n\n$ kubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none]", "completion": "This text code create | Create a new ExternalName service named my-ns", "language": "text", "type": "code-example_explanation", "context": "create | Create a new ExternalName service named my-ns", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18964"} {"prompt": "Explain this bash code:\n\nkubectl create service externalname my-ns --external-name bar.com", "completion": "This bash code create | Create an ExternalName service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ExternalName service with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18965"} {"prompt": "Explain this text code:\n\n$ kubectl create service externalname NAME --external-name external.name [--dry-run=server|client|none]", "completion": "This text code create | Create a new LoadBalancer service named my-lbs", "language": "text", "type": "code-example_explanation", "context": "create | Create a new LoadBalancer service named my-lbs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18966"} {"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This bash code create | Create a LoadBalancer service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a LoadBalancer service with the 
specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18967"} {"prompt": "Explain this text code:\n\n$ kubectl create service loadbalancer NAME [--tcp=port:targetPort] [--dry-run=server|client|none]", "completion": "This text code create | Create a new NodePort service named my-ns", "language": "text", "type": "code-example_explanation", "context": "create | Create a new NodePort service named my-ns", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18968"} {"prompt": "Explain this bash code:\n\nkubectl create service nodeport my-ns --tcp=5678:8080", "completion": "This bash code create | Create a NodePort service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a NodePort service with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18969"} {"prompt": "Explain this text code:\n\n$ kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]", "completion": "This text code create | Create a new service account named my-service-account", "language": "text", "type": "code-example_explanation", "context": "create | Create a new service account named my-service-account", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18970"} {"prompt": "Explain this text code:\n\n$ kubectl create serviceaccount NAME [--dry-run=server|client|none]", "completion": "This text code create | List all pods in ps output format", "language": "text", "type": "code-example_explanation", "context": "create | List all pods in ps output format", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18971"} {"prompt": "Explain this bash code:\n\nkubectl get -o template pod/web-pod-13je7 
--template={{.status.phase}}", "completion": "This bash code get | List resource information in custom columns", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List resource information in custom columns", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18972"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image", "completion": "This bash code get | List all replication controllers and services together in ps output format", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List all replication controllers and services together in ps output format", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18973"} {"prompt": "Explain this bash code:\n\nkubectl get rc/web service/frontend pods/web-pod-13je7", "completion": "This bash code get | Display one or many resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "get | Display one or many resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18974"} {"prompt": "Explain this text code:\n\n$ kubectl get [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file|custom-columns|custom-columns-file|wide] (TYPE[.VERSION][.GROUP] [NAME | -l label] | TYPE[.VERSION][.GROUP]/NAME ...) 
[flags]", "completion": "This text code get | Display one or many resources.", "language": "text", "type": "code-example_explanation", "context": "get | Display one or many resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18975"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --port=5701", "completion": "This bash code run | Start a hazelcast pod and let the container expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and let the container expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18976"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"", "completion": "This bash code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18977"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"", "completion": "This bash code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18978"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", 
\"spec\": { ... } }'", "completion": "This bash code run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18979"} {"prompt": "Explain this bash code:\n\nkubectl run -i -t busybox --image=busybox --restart=Never", "completion": "This bash code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18980"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>", "completion": "This bash code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18981"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "completion": "This bash code run | Start the nginx pod using a different command and custom arguments", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using a different command and custom arguments", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18982"} {"prompt": "Explain this text code:\n\n$ kubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--dry-run=server|client] [--overrides=inline-json] [--command] -- [COMMAND] [args...]", "completion": "This text code run | Create and run a particular image in a pod.", "language": "text", "type": "code-example_explanation", "context": "run | Create and run a particular image in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18983"} {"prompt": "Explain this bash code:\n\nkubectl expose rc nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18984"} {"prompt": "Explain this bash code:\n\nkubectl expose -f nginx-controller.yaml --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18985"} {"prompt": "Explain this bash code:\n\nkubectl expose pod valid-pod --port=444 --name=frontend", "completion": "This bash code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18986"} {"prompt": "Explain this bash code:\n\nkubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https", "completion": "This bash code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18987"} {"prompt": "Explain this bash code:\n\nkubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream", "completion": "This bash code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18988"} {"prompt": "Explain this bash code:\n\nkubectl 
expose rs nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18989"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18990"} {"prompt": "Explain this text code:\n\n$ kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]", "completion": "This text code expose | Expose a resource as a new Kubernetes service.", "language": "text", "type": "code-example_explanation", "context": "expose | Expose a resource as a new Kubernetes service.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18991"} {"prompt": "Explain this text code:\n\n$ kubectl delete ([-f FILENAME] | [-k DIRECTORY] | TYPE [(NAME | -l label | --all)])", "completion": "This text code delete | Delete resources by file names, stdin, resources and names, or by resources and label selector.", "language": "text", "type": "code-example_explanation", "context": "delete | Delete resources by file names, stdin, resources and names, or by resources and label selector.", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18992"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml -l app=nginx", "completion": "This bash code apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18993"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap", "completion": "This bash code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18994"} {"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | Edit the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18995"} {"prompt": "Explain this text code:\n\n$ kubectl apply edit-last-applied (RESOURCE/NAME | -f FILENAME)", "completion": "This text code apply | Edit the latest last-applied-configuration annotations of resources from the default editor.", "language": "text", "type": "code-example_explanation", "context": "apply | Edit the latest last-applied-configuration annotations of resources from the default editor.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18996"} {"prompt": "Explain this bash code:\n\nkubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "completion": "This bash code apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18997"} {"prompt": "Explain this bash code:\n\nkubectl apply view-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | View the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | View the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18998"} {"prompt": "Explain this text code:\n\n$ kubectl apply view-last-applied (TYPE [NAME | -l label] | TYPE/NAME | -f FILENAME)", "completion": "This text code apply | View the latest last-applied-configuration annotations by type/name or file.", "language": "text", "type": "code-example_explanation", "context": "apply | View the latest last-applied-configuration annotations by type/name or file.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_18999"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19000"} {"prompt": "Explain this bash code:\n\nkubectl annotate -f pod.json description='my frontend'", "completion": "This bash code annotate | Update a pod identified by type and name in \"pod.json\"", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19001"} {"prompt": "Explain this bash code:\n\nkubectl annotate --overwrite pods foo description='my frontend running nginx'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19002"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods --all description='my frontend running nginx'", "completion": "This bash code annotate | Update all pods in the namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update all pods in the namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19003"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend running nginx' --resource-version=1", "completion": "This bash code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19004"} {"prompt": "Explain this text code:\n\n$ kubectl annotate [--overwrite] (-f FILENAME | TYPE 
NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]", "completion": "This text code annotate | Update the annotations on one or more resources.", "language": "text", "type": "code-example_explanation", "context": "annotate | Update the annotations on one or more resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19005"} {"prompt": "Explain this text code:\n\n$ kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU]", "completion": "This text code autoscale | Auto-scale a deployment, replica set, stateful set, or replication controller.", "language": "text", "type": "code-example_explanation", "context": "autoscale | Auto-scale a deployment, replica set, stateful set, or replication controller.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19006"} {"prompt": "Explain this bash code:\n\nkubectl debug --image=myproj/debug-tools -c debugger mypod", "completion": "This bash code debug | Create a debug container named debugger using a custom automated debugging image.", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19007"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": "This bash code debug | Create a copy of mypod adding a debug container and attach to it", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19008"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh", "completion": "This bash code debug | Create a copy of mypod changing the command of mycontainer", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19009"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod --copy-to=my-debugger --set-image=*=busybox", "completion": "This bash code debug | Create a copy of mypod changing all container images to busybox", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19010"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug", "completion": "This bash code debug | Create a copy of mypod adding a debug container and changing container images", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19011"} {"prompt": "Explain this text code:\n\n$ kubectl debug (POD | TYPE[[.VERSION].GROUP]/NAME) [ -- COMMAND [args...] ]", "completion": "This text code debug | Debug cluster resources using interactive debugging containers.", "language": "text", "type": "code-example_explanation", "context": "debug | Debug cluster resources using interactive debugging containers.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19012"} {"prompt": "Explain this text code:\n\nKUBE_EDITOR=\"nano\" kubectl edit svc/docker-registry", "completion": "This text code edit | Use an alternative editor", "language": "text", "type": "code-example_explanation", "context": "edit | Use an alternative editor", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19013"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/mydeployment -o yaml --save-config", "completion": "This bash code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "bash", "type": "kubectl-command_explanation", "context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19014"} {"prompt": "Explain this bash code:\n\nkubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "completion": "This bash code kustomize | Build from github with branch or tag", "language": "bash", "type": "kubectl-command_explanation", "context": "kustomize | Build from github with branch or tag", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19015"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite pods foo status=unhealthy", "completion": "This bash code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19016"} {"prompt": "Explain this bash code:\n\nkubectl label pods foo status=unhealthy --resource-version=1", "completion": "This bash code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19017"} {"prompt": "Explain this text code:\n\n$ kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--resource-version=version]", "completion": "This text code label | Update the labels on a resource.", "language": "text", "type": "code-example_explanation", "context": "label | Update the labels on a resource.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19018"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19019"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19020"} {"prompt": "Explain this bash code:\n\nkubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19021"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'", "completion": "This bash code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19022"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'", "completion": "This bash code patch | Update a container's image using a JSON patch with positional arrays", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19023"} {"prompt": "Explain this text code:\n\n$ kubectl patch (-f FILENAME | TYPE NAME) [-p PATCH|--patch-file FILE]", "completion": "This text code patch | Update fields of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.", "language": "text", "type": "code-example_explanation", "context": "patch | Update fields of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19024"} {"prompt": "Explain this bash code:\n\nkubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -", "completion": "This bash code replace | Update a single-container pod's image version (tag) to v4", "language": "bash", "type": "kubectl-command_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19025"} {"prompt": "Explain this text code:\n\n$ kubectl rollout history (TYPE NAME | TYPE/NAME) [flags]", "completion": "This text code rollout | View previous rollout revisions and configurations.", "language": "text", "type": "code-example_explanation", "context": "rollout | View previous rollout revisions and configurations.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19026"} {"prompt": "Explain this text code:\n\n$ kubectl rollout status (TYPE NAME | TYPE/NAME) [flags]", "completion": "This text code rollout | Show the status of the rollout.", "language": "text", "type": "code-example_explanation", "context": "rollout | Show the status of the rollout.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19027"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo --dry-run=server deployment/abc", "completion": "This bash code rollout | Roll back to the previous deployment with dry-run", "language": "bash", "type": "kubectl-command_explanation", "context": "rollout | Roll back to the previous deployment with dry-run", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19028"} {"prompt": "Explain this text code:\n\n$ kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags]", "completion": "This text code rollout | Roll back to a previous rollout.", "language": "text", "type": "code-example_explanation", "context": "rollout | Roll back to a previous rollout.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19029"} {"prompt": "Explain this bash code:\n\nkubectl scale --current-replicas=2 --replicas=3 deployment/mysql", "completion": "This bash code scale | Scale multiple replication controllers", "language": "bash", "type": "kubectl-command_explanation", "context": "scale | Scale multiple replication controllers", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19030"} {"prompt": "Explain this text code:\n\n$ kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)", "completion": "This text code scale | Configure application resources.", "language": "text", "type": "code-example_explanation", "context": "scale | Configure application resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19031"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/registry STORAGE_DIR=/local", "completion": "This bash code set | List the environment variables defined on a deployments 'sample-build'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | List the environment variables defined on a deployments 'sample-build'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19032"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml", "completion": "This bash code set | Update all containers in all replication controllers in the project to have ENV=prod", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all containers in all replication controllers in the project to have ENV=prod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19033"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=secret/mysecret deployment/myapp", "completion": "This bash code set | 
Import environment from a config map with a prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a config map with a prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19034"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp", "completion": "This bash code set | Import specific keys from a config map", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import specific keys from a config map", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19035"} {"prompt": "Explain this bash code:\n\nkubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp", "completion": "This bash code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19036"} {"prompt": "Explain this bash code:\n\nkubectl set env deployments --all --containers=\"c1\" ENV-", "completion": "This bash code set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19037"} {"prompt": "Explain this text code:\n\nenv | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code set | Update environment variables on a pod template.", 
"language": "text", "type": "code-example_explanation", "context": "set | Update some of the local shell environment into a deployment config on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19038"} {"prompt": "Explain this text code:\n\n$ kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N", "completion": "This text code set | Update environment variables on a pod template.", "language": "text", "type": "code-example_explanation", "context": "set | Update environment variables on a pod template.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19039"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1", "completion": "This bash code set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19040"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments,rc nginx=nginx:1.9.1 --all", "completion": "This bash code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19041"} {"prompt": "Explain this bash code:\n\nkubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "completion": "This bash code set | Print result (in yaml format) of updating nginx container image from a local file, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update 
existing container image(s) of resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19042"} {"prompt": "Explain this text code:\n\n$ kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N", "completion": "This text code set | Update existing container image(s) of resources.", "language": "text", "type": "code-example_explanation", "context": "set | Update existing container image(s) of resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19043"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19044"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": "This bash code set | Set the resource request and limits for all containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19045"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0", "completion": "This bash code set | Remove the resource requests for resources on containers in nginx", "language": "bash", "type": 
"kubectl-command_explanation", "context": "set | Remove the resource requests for resources on containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19046"} {"prompt": "Explain this bash code:\n\nkubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This bash code set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19047"} {"prompt": "Explain this text code:\n\n$ kubectl set resources (-f FILENAME | TYPE NAME) ([--limits=LIMITS & --requests=REQUESTS]", "completion": "This text code set | Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst up to its specified limits", "language": "text", "type": "code-example_explanation", "context": "set | Specify compute resource requirements (CPU, memory) for any resource that defines a pod template. If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst up to its specified limits", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19048"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\nkubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This bash code set | Set the selector on a resource. 
This example sets the labels and selector before creating a deployment/service pair.", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the labels and selector before creating a deployment/service pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19049"} {"prompt": "Explain this text code:\n\n$ kubectl set selector (-f FILENAME | TYPE NAME) EXPRESSIONS [--resource-version=version]", "completion": "This text code set | Set the selector on a resource. Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.", "language": "text", "type": "code-example_explanation", "context": "set | Set the selector on a resource. Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19050"} {"prompt": "Explain this bash code:\n\nkubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": "This bash code set | Set deployment nginx-deployment's service account to serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set deployment nginx-deployment's service account to serviceaccount1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19051"} {"prompt": "Explain this bash code:\n\nkubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19052"} {"prompt": "Explain this text code:\n\n$ kubectl set serviceaccount (-f FILENAME | TYPE NAME) SERVICE_ACCOUNT", "completion": "This text code set | Update the service account of pod template resources.", "language": "text", "type": "code-example_explanation", "context": "set | Update the service account of pod template resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19053"} {"prompt": "Explain this bash code:\n\nkubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1", "completion": "This bash code set | Update a cluster role binding for serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a cluster role binding for serviceaccount1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19054"} {"prompt": "Explain this bash code:\n\nkubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1", "completion": "This bash code set | Update a role binding for user1, user2, and group1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a role binding for user1, user2, and group1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19055"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update the user, group, or service account in a role binding 
or cluster role binding.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19056"} {"prompt": "Explain this text code:\n\n$ kubectl set subject (-f FILENAME | TYPE NAME) [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]", "completion": "This text code set | Update the user, group, or service account in a role binding or cluster role binding.", "language": "text", "type": "code-example_explanation", "context": "set | Update the user, group, or service account in a role binding or cluster role binding.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19057"} {"prompt": "Explain this bash code:\n\nkubectl wait --for=condition=Ready=false pod/busybox1", "completion": "This bash code wait | The default value of status condition is true; you can set it to false", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | The default value of status condition is true; you can set it to false", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19058"} {"prompt": "Explain this bash code:\n\nkubectl delete pod/busybox1\nkubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This bash code wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19059"} {"prompt": "Explain this text code:\n\n$ kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available]", "completion": "This text code wait | This section 
covers kubectl wait, an experimental command to wait for a specific condition on one or many resources.", "language": "text", "type": "code-example_explanation", "context": "wait | Experimental: Wait for a specific condition on one or many resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19060"} {"prompt": "Explain this text code:\n\n$ kubectl auth can-i VERB [TYPE | TYPE/NAME | NONRESOURCEURL]", "completion": "This text code auth | Check whether an action is allowed.", "language": "text", "type": "code-example_explanation", "context": "auth | Check whether an action is allowed.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19061"} {"prompt": "Explain this text code:\n\ntar cf - /tmp/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19062"} {"prompt": "Explain this bash code:\n\nkubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19063"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "language": "bash", "type": "kubectl-command_explanation", 
"context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19064"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19065"} {"prompt": "Explain this bash code:\n\nkubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19066"} {"prompt": "Explain this bash code:\n\nkubectl describe nodes kubernetes-node-emt8.c.myproject.internal", "completion": "This bash code describe | Describe a node", "language": "bash", "type": "kubectl-command_explanation", "context": "describe | Describe a node", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19067"} {"prompt": "Explain this text code:\n\n$ kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME)", "completion": "This text code describe | Show details of a specific resource or group of resources.", "language": "text", "type": "code-example_explanation", "context": "describe | Show details of a specific resource or group of resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19068"} {"prompt": "Explain this bash 
code:\n\nkubectl exec mypod -c ruby-container -i -t -- bash -il", "completion": "This bash code exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod and sends stdout/stderr from 'bash' back to the client", "language": "bash", "type": "kubectl-command_explanation", "context": "exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod and sends stdout/stderr from 'bash' back to the client", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19069"} {"prompt": "Explain this text code:\n\n$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]", "completion": "This text code exec | Execute a command in a container.", "language": "text", "type": "code-example_explanation", "context": "exec | Execute a command in a container.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19070"} {"prompt": "Explain this bash code:\n\nkubectl logs --insecure-skip-tls-verify-backend nginx", "completion": "This bash code logs | Show logs from a kubelet with an expired serving certificate", "language": "bash", "type": "kubectl-command_explanation", "context": "logs | Show logs from a kubelet with an expired serving certificate", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19071"} {"prompt": "Explain this text code:\n\n$ kubectl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]", "completion": "This text code logs | Print the logs for a container in a pod or specified resource.", "language": "text", "type": "code-example_explanation", "context": "logs | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 
6000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19072"} {"prompt": "Explain this bash code:\n\nkubectl port-forward deployment/mydeployment 5000 6000", "completion": "This bash code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19073"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19074"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19075"} {"prompt": "Explain this text code:\n\n$ kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]", "completion": "This text code port-forward | Forward one or more local ports to a pod.", "language": "text", "type": 
"code-example_explanation", "context": "port-forward | Forward one or more local ports to a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19076"} {"prompt": "Explain this bash code:\n\nkubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/", "completion": "This bash code proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "language": "bash", "type": "kubectl-command_explanation", "context": "proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19077"} {"prompt": "Explain this text code:\n\n$ kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix]", "completion": "This text code proxy | Creates a proxy server or application-level gateway between localhost and the Kubernetes API server.", "language": "text", "type": "code-example_explanation", "context": "proxy | Creates a proxy server or application-level gateway between localhost and the Kubernetes API server.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19078"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump current cluster state to /path/to/cluster-state", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump current cluster state to /path/to/cluster-state", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19079"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. 
You can optionally specify a directory with --output-directory; if you do, Kubernetes will build a set of files in that directory.", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory; if you do, Kubernetes will build a set of files in that directory.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19080"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'; if a taint with that key and effect already exists, its value is replaced as specified", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'; if a taint with that key and effect already exists, its value is replaced as specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19081"} {"prompt": "Explain this bash code:\n\nkubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule", "completion": "This bash code taint | Add a taint with key 'dedicated' on nodes having label myLabel=X", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Add a taint with key 'dedicated' on nodes having label myLabel=X", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19082"} {"prompt": "Explain this text code:\n\n$ kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... 
KEY_N=VAL_N:TAINT_EFFECT_N", "completion": "This text code taint | Update the taints on one or more nodes.", "language": "text", "type": "code-example_explanation", "context": "taint | Update the taints on one or more nodes.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19083"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl", "completion": "This bash code completion | Installing bash completion on macOS using homebrew ## If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Installing bash completion on macOS using homebrew ## If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19084"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > ~/.kube/completion.bash.inc\nprintf \"", "completion": "This bash code completion | Write bash completion code to a file and source it from .bash_profile", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19085"} {"prompt": "Explain this text code:\n\nsource '$HOME/.kube/completion.bash.inc'\n\" >> $HOME/.bash_profile\nsource $HOME/.bash_profile", "completion": "This text code completion | Add a \"Kubectl shell completion\" source line to .bash_profile and reload it", "language": "text", "type": "code-example_explanation", "context": "completion | Add a \"Kubectl shell completion\" source line to .bash_profile and reload it", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19086"} {"prompt": 
"Explain this text code:\n\n$ kubectl config get-contexts [(-o|--output=)name)]", "completion": "This text code config | Display one or many contexts from the kubeconfig file.", "language": "text", "type": "code-example_explanation", "context": "config | Display one or many contexts from the kubeconfig file.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19087"} {"prompt": "Explain this text code:\n\n$ kubectl config rename-context CONTEXT_NAME NEW_NAME", "completion": "This text code config | Renames a context from the kubeconfig file.", "language": "text", "type": "code-example_explanation", "context": "config | Renames a context from the kubeconfig file.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19088"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.server https://1.2.3.4", "completion": "This bash code config | Set the server field on the my-cluster cluster to https://1.2.3.4", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the server field on the my-cluster cluster to https://1.2.3.4", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19089"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)", "completion": "This bash code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19090"} {"prompt": "Explain this bash code:\n\nkubectl config set contexts.my-context.cluster my-cluster", "completion": "This bash code config | Set the client-key-data field in the 
cluster-admin user using --set-raw-bytes option", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19091"} {"prompt": "Explain this bash code:\n\nkubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "completion": "This bash code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19092"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --server=https://1.2.3.4", "completion": "This bash code config | Set only the server field on the e2e cluster entry without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set only the server field on the e2e cluster entry without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19093"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt", "completion": "This bash code config | Embed certificate authority data for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19094"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --insecure-skip-tls-verify=true", "completion": "This bash code config | Disable cert checking for the dev cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config 
| Disable cert checking for the dev cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19095"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --tls-server-name=my-cluster-name", "completion": "This bash code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19096"} {"prompt": "Explain this text code:\n\n$ kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true] [--tls-server-name=example.com]", "completion": "This text code config | Set a cluster entry in kubeconfig.", "language": "text", "type": "code-example_explanation", "context": "config | Set a cluster entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19097"} {"prompt": "Explain this bash code:\n\nkubectl config set-context gce --user=cluster-admin", "completion": "This bash code config | Set the user field on the gce context entry without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the user field on the gce context entry without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19098"} {"prompt": "Explain this text code:\n\n$ kubectl config set-context [NAME | --current] [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]", "completion": "This text code config | Set a context entry in kubeconfig.", "language": "text", "type": "code-example_explanation", "context": "config | Set only the \"client-key\" 
field on the \"cluster-admin\" entry, without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19099"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key", "completion": "This bash code config | Set only the \"client-key\" field on the \"cluster-admin\" entry, without touching other values", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set only the \"client-key\" field on the \"cluster-admin\" entry, without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19100"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif", "completion": "This bash code config | Set basic auth for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set basic auth for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19101"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true", "completion": "This bash code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19102"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=gcp", "completion": "This bash code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the OpenID Connect 
auth provider for the \"cluster-admin\" entry with additional args", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19103"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar", "completion": "This bash code config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19104"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-", "completion": "This bash code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19105"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1", "completion": "This bash code config | Define new exec auth plugin args for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Define new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19106"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2", "completion": "This bash code 
config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19107"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2", "completion": "This bash code config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19108"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=var-to-remove-", "completion": "This bash code config | Set a user entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a user entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19109"} {"prompt": "Explain this text code:\n\n$ kubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile] [--token=bearer_token] [--username=basic_user] [--password=basic_password] [--auth-provider=provider_name] [--auth-provider-arg=key=value] [--exec-command=exec_command] [--exec-api-version=exec_api_version] [--exec-arg=arg] [--exec-env=key=value]", "completion": "This text code config | Unset the current-context", "language": "text", "type": "code-example_explanation", "context": "config | Unset the current-context", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19110"} {"prompt": 
"Explain this bash code:\n\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This bash code config | Display merged kubeconfig settings or a specified kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Display merged kubeconfig settings or a specified kubeconfig file.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19111"} {"prompt": "Explain this text code:\n\nEdit the data in docker-registry.yaml in JSON then create the resource using the edited data", "completion": "This text code create | Edit the data in docker-registry.yaml in JSON then create the resource using the edited data", "language": "text", "type": "code-example_explanation", "context": "create | Edit the data in docker-registry.yaml in JSON then create the resource using the edited data", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19112"} {"prompt": "Explain this bash code:\n\nkubectl create -f docker-registry.yaml --edit -o json", "completion": "This bash code create | Create a resource from a file or from stdin.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a resource from a file or from stdin.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19113"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "completion": "This text code create | Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19114"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19115"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"pod-reader\" with ResourceName specified", "completion": "This text code create | Create a cluster role named \"pod-reader\" with ResourceName specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19116"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a cluster role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19117"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"foo\" with API Group specified", "completion": "This text code create | Create a cluster role named \"foo\" with API Group specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"foo\" with API Group specified", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19118"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a cluster role named \"foo\" with SubResource specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19119"} {"prompt": "Explain this text code:\n\nCreate a cluster role named \"foo\" with SubResource specified", "completion": "This text code create | Create a cluster role named \"foo\" with SubResource specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19120"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19121"} {"prompt": "Explain this text code:\n\nCreate a cluster role name \"foo\" with NonResourceURL specified", "completion": "This text code create | Create a cluster role name \"foo\" with NonResourceURL specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role name \"foo\" with NonResourceURL specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19122"} {"prompt": 
"Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code create | Create a cluster role name \"monitoring\" with AggregationRule specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19123"} {"prompt": "Explain this text code:\n\nCreate a cluster role name \"monitoring\" with AggregationRule specified", "completion": "This text code create | Create a cluster role name \"monitoring\" with AggregationRule specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role name \"monitoring\" with AggregationRule specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19124"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code create | Create a cluster role.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19125"} {"prompt": "Explain this text code:\n\nCreate a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "completion": "This text code create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "language": "text", "type": "code-example_explanation", "context": "create | Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19126"} {"prompt": "Explain this bash 
code:\n\nkubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a cluster role binding for a particular cluster role.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cluster role binding for a particular cluster role.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19127"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config based on folder bar", "completion": "This text code create | Create a new config map named my-config based on folder bar", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config based on folder bar", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19128"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19129"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config with specified keys instead of file basenames on disk", "completion": "This text code create | Create a new config map named my-config with specified keys instead of file basenames on disk", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config with specified keys instead of file basenames on disk", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19130"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt", "completion": "This bash code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19131"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config with key1=config1 and key2=config2", "completion": "This text code create | Create a new config map named my-config with key1=config1 and key2=config2", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config with key1=config1 and key2=config2", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19132"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2", "completion": "This bash code create | Create a new config map named my-config from the key=value pairs in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19133"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config from the key=value pairs in the file", "completion": "This text code create | Create a new config map named my-config from the key=value pairs in the file", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config from the key=value pairs in the file", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19134"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-file=path/to/bar", "completion": "This bash code create | Create a new config map named my-config from an env file", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new config map named my-config from an env file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19135"} {"prompt": "Explain this text code:\n\nCreate a new config map named my-config from an env file", "completion": "This text code create | Create a new config map named my-config from an env file", "language": "text", "type": "code-example_explanation", "context": "create | Create a new config map named my-config from an env file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19136"} {"prompt": "Explain this bash code:\n\nkubectl create configmap my-config --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a config map based on a file, directory, or specified literal value.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a config map based on a file, directory, or specified literal value.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19137"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\"", "completion": "This bash code create | Create a cron job with a command", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with a command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19138"} {"prompt": "Explain this bash code:\n\nkubectl create cronjob my-job --image=busybox --schedule=\"*/1 * 
* * *\" -- date", "completion": "This bash code create | Create a cron job with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a cron job with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19139"} {"prompt": "Explain this text code:\n\nCreate a deployment named my-dep that runs the busybox image", "completion": "This text code create | Create a deployment named my-dep that runs the busybox image", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19140"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox -- date", "completion": "This bash code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19141"} {"prompt": "Explain this text code:\n\nCreate a deployment named my-dep that runs the nginx image with 3 replicas", "completion": "This text code create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment named my-dep that runs the nginx image with 3 replicas", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19142"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=nginx --replicas=3", "completion": "This bash code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", 
"language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19143"} {"prompt": "Explain this text code:\n\nCreate a deployment named my-dep that runs the busybox image and expose port 5701", "completion": "This text code create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "language": "text", "type": "code-example_explanation", "context": "create | Create a deployment named my-dep that runs the busybox image and expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19144"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-dep --image=busybox --port=5701", "completion": "This bash code create | Create a deployment with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a deployment with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19145"} {"prompt": "Explain this text code:\n\nCreate a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "completion": "This text code create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "language": "text", "type": "code-example_explanation", "context": "create | Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19146"} {"prompt": "Explain this bash code:\n\nkubectl create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\"", "completion": 
"This bash code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19147"} {"prompt": "Explain this text code:\n\nCreate a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "completion": "This text code create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "language": "text", "type": "code-example_explanation", "context": "create | Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19148"} {"prompt": "Explain this bash code:\n\nkubectl create ingress catch-all --class=otheringress --rule=\"/path=svc:port\"", "completion": "This bash code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19149"} {"prompt": "Explain this text code:\n\nCreate an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "completion": "This text code create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19150"} {"prompt": "Explain this bash code:\n\nkubectl create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" \\\n--annotation ingress.annotation1=foo \\\n--annotation ingress.annotation2=bla", "completion": "This bash code create | Create an ingress with the same host and multiple paths", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19151"} {"prompt": "Explain this text code:\n\nCreate an ingress with the same host and multiple paths", "completion": "This text code create | Create an ingress with the same host and multiple paths", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with the same host and multiple paths", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19152"} {"prompt": "Explain this bash code:\n\nkubectl create ingress multipath --class=default \\\n--rule=\"foo.com/=svc:port\" \\\n--rule=\"foo.com/admin/=svcadmin:portadmin\"", "completion": "This bash code create | Create an ingress with multiple hosts and the pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19153"} {"prompt": "Explain this text code:\n\nCreate an ingress with multiple hosts and the pathType as Prefix", "completion": "This text code create | Create an ingress with multiple hosts and the pathType as Prefix", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with multiple hosts and the pathType as Prefix", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19154"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingress1 --class=default \\\n--rule=\"foo.com/path*=svc:8080\" \\\n--rule=\"bar.com/admin*=svc2:http\"", "completion": "This bash code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19155"} {"prompt": "Explain this text code:\n\nCreate an ingress with TLS enabled using the default ingress certificate and different path types", "completion": "This text code create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with TLS enabled using the default ingress certificate and different path types", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19156"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingtls --class=default \\\n--rule=\"foo.com/=svc:https,tls\" \\\n--rule=\"foo.com/path/subpath*=othersvc:8080\"", "completion": "This bash code create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19157"} {"prompt": "Explain this text code:\n\nCreate an ingress with TLS enabled using a specific secret and pathType as Prefix", "completion": "This text code create | 
Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "language": "text", "type": "code-example_explanation", "context": "create | Create an ingress with TLS enabled using a specific secret and pathType as Prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19158"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingsecret --class=default \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with a default backend", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with a default backend", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19159"} {"prompt": "Explain this bash code:\n\nkubectl create ingress ingdefault --class=default \\\n--default-backend=defaultsvc:http \\\n--rule=\"foo.com/*=svc:8080,tls=secret1\"", "completion": "This bash code create | Create an ingress with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ingress with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19160"} {"prompt": "Explain this bash code:\n\nkubectl create job test-job --from=cronjob/a-cronjob", "completion": "This bash code create | Create a job with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a job with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19161"} {"prompt": "Explain this text code:\n\nCreate a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "completion": "This text code create | Create a pod disruption budget named my-pdb that 
will select all pods with the app=rails label # and require at least one of them being available at any point in time", "language": "text", "type": "code-example_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19162"} {"prompt": "Explain this bash code:\n\nkubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1", "completion": "This bash code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19163"} {"prompt": "Explain this text code:\n\nCreate a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "completion": "This text code create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "language": "text", "type": "code-example_explanation", "context": "create | Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19164"} {"prompt": "Explain this bash code:\n\nkubectl create pdb my-pdb --selector=app=nginx --min-available=50%", "completion": "This bash code create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a pod disruption budget with the specified name, selector, and desired minimum available pods.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19165"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\"", "completion": "This bash code create | Create a priority class named high-priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19166"} {"prompt": "Explain this text code:\n\nCreate a priority class named default-priority that is considered as the global default priority", "completion": "This text code create | Create a priority class named default-priority that is considered as the global default priority", "language": "text", "type": "code-example_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19167"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"", "completion": "This bash code create | Create a priority class named default-priority that is considered as the global default priority", "language": 
"bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named default-priority that is considered as the global default priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19168"} {"prompt": "Explain this text code:\n\nCreate a priority class named high-priority that cannot preempt pods with lower priority", "completion": "This text code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "text", "type": "code-example_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19169"} {"prompt": "Explain this bash code:\n\nkubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "completion": "This bash code create | Create a priority class named high-priority that cannot preempt pods with lower priority", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a priority class named high-priority that cannot preempt pods with lower priority", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19170"} {"prompt": "Explain this bash code:\n\nkubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10", "completion": "This bash code create | Create a new resource quota named my-quota", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new resource quota named my-quota", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19171"} {"prompt": "Explain this bash code:\n\nkubectl create quota best-effort --hard=pods=100 --scopes=BestEffort", 
"completion": "This bash code create | Create a resource quota with the specified name, hard limits, and optional scopes.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a resource quota with the specified name, hard limits, and optional scopes.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19172"} {"prompt": "Explain this text code:\n\nCreate a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "completion": "This text code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "text", "type": "code-example_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19173"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19174"} {"prompt": "Explain this text code:\n\nCreate a role named \"pod-reader\" with ResourceName specified", "completion": "This text code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19175"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader 
--verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code create | Create a role named \"pod-reader\" with ResourceName specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"pod-reader\" with ResourceName specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19176"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=rs.extensions", "completion": "This bash code create | Create a role named \"foo\" with API Group specified", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role named \"foo\" with API Group specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19177"} {"prompt": "Explain this text code:\n\nCreate a role named \"foo\" with SubResource specified", "completion": "This text code create | Create a role named \"foo\" with SubResource specified", "language": "text", "type": "code-example_explanation", "context": "create | Create a role named \"foo\" with SubResource specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19178"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code create | Create a role with single rule.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role with single rule.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19179"} {"prompt": "Explain this text code:\n\nCreate a role binding for user1, user2, and group1 using the admin cluster role", "completion": "This text code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "text", "type": 
"code-example_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19180"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1", "completion": "This bash code create | Create a role binding for user1, user2, and group1 using the admin cluster role", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a role binding for user1, user2, and group1 using the admin cluster role", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19181"} {"prompt": "Explain this text code:\n\nIf you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "completion": "This text code create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "language": "text", "type": "code-example_explanation", "context": "create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19182"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL", "completion": "This bash code create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "language": "bash", "type": "kubectl-command_explanation", "context": "create | If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using:", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19183"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret from ~/.docker/config.json", "completion": "This 
text code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19184"} {"prompt": "Explain this bash code:\n\nkubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "completion": "This bash code create | Create a new secret named my-secret from ~/.docker/config.json", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret from ~/.docker/config.json", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19185"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret with keys for each file in folder bar", "completion": "This text code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19186"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=path/to/bar", "completion": "This bash code create | Create a new secret named my-secret with keys for each file in folder bar", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with keys for each file in folder bar", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19187"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret with specified keys instead of names on disk", "completion": "This text code create | Create a new secret named 
my-secret with specified keys instead of names on disk", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret with specified keys instead of names on disk", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19188"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub", "completion": "This bash code create | Create a new secret named my-secret with specified keys instead of names on disk", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with specified keys instead of names on disk", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19189"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret with key1=supersecret and key2=topsecret", "completion": "This text code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19190"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret", "completion": "This bash code create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret with key1=supersecret and key2=topsecret", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19191"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret 
using a combination of a file and a literal", "completion": "This text code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19192"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret", "completion": "This bash code create | Create a new secret named my-secret using a combination of a file and a literal", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new secret named my-secret using a combination of a file and a literal", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19193"} {"prompt": "Explain this text code:\n\nCreate a new secret named my-secret from an env file", "completion": "This text code create | Create a new secret named my-secret from an env file", "language": "text", "type": "code-example_explanation", "context": "create | Create a new secret named my-secret from an env file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19194"} {"prompt": "Explain this bash code:\n\nkubectl create secret generic my-secret --from-env-file=path/to/bar.env", "completion": "This bash code create | Create a secret based on a file, directory, or specified literal value.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a secret based on a file, directory, or specified literal value.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19195"} {"prompt": "Explain this text code:\n\nCreate a new TLS secret named tls-secret with the given key pair", "completion": "This text code create | 
Create a new TLS secret named tls-secret with the given key pair", "language": "text", "type": "code-example_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19196"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "completion": "This bash code create | Create a new TLS secret named tls-secret with the given key pair", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new TLS secret named tls-secret with the given key pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19197"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --tcp=5678:8080", "completion": "This bash code create | Create a new ClusterIP service named my-cs", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a new ClusterIP service named my-cs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19198"} {"prompt": "Explain this text code:\n\nCreate a new ClusterIP service named my-cs (in headless mode)", "completion": "This text code create | Create a new ClusterIP service named my-cs (in headless mode)", "language": "text", "type": "code-example_explanation", "context": "create | Create a new ClusterIP service named my-cs (in headless mode)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19199"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-cs --clusterip=\"None\"", "completion": "This bash code create | Create a ClusterIP service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a 
ClusterIP service with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19200"} {"prompt": "Explain this bash code:\n\nkubectl create service externalname my-ns --external-name bar.com", "completion": "This bash code create | Create an ExternalName service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create an ExternalName service with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19201"} {"prompt": "Explain this bash code:\n\nkubectl create service loadbalancer my-lbs --tcp=5678:8080", "completion": "This bash code create | Create a LoadBalancer service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a LoadBalancer service with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19202"} {"prompt": "Explain this bash code:\n\nkubectl create service nodeport my-ns --tcp=5678:8080", "completion": "This bash code create | Create a NodePort service with the specified name.", "language": "bash", "type": "kubectl-command_explanation", "context": "create | Create a NodePort service with the specified name.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19203"} {"prompt": "Explain this text code:\n\nCreate a new service account named my-service-account", "completion": "This text code create | Create a new service account named my-service-account", "language": "text", "type": "code-example_explanation", "context": "create | Create a new service account named my-service-account", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19204"} {"prompt": "Explain this text code:\n\nList all pods in ps output format with more 
information (such as node name)", "completion": "This text code get | List all pods in ps output format with more information (such as node name)", "language": "text", "type": "code-example_explanation", "context": "get | List all pods in ps output format with more information (such as node name)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19205"} {"prompt": "Explain this text code:\n\nList a single replication controller with specified NAME in ps output format", "completion": "This text code get | List a single replication controller with specified NAME in ps output format", "language": "text", "type": "code-example_explanation", "context": "get | List a single replication controller with specified NAME in ps output format", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19206"} {"prompt": "Explain this text code:\n\nList deployments in JSON output format, in the \"v1\" version of the \"apps\" API group", "completion": "This text code get | List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group", "language": "text", "type": "code-example_explanation", "context": "get | List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19207"} {"prompt": "Explain this text code:\n\nList a pod identified by type and name specified in \"pod.yaml\" in JSON output format", "completion": "This text code get | List a pod identified by type and name specified in \"pod.yaml\" in JSON output format", "language": "text", "type": "code-example_explanation", "context": "get | List a pod identified by type and name specified in \"pod.yaml\" in JSON output format", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19208"} {"prompt": "Explain this text 
code:\n\nList resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml", "completion": "This text code get | List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml", "language": "text", "type": "code-example_explanation", "context": "get | List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19209"} {"prompt": "Explain this bash code:\n\nkubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}", "completion": "This bash code get | Return only the phase value of the specified pod", "language": "bash", "type": "kubectl-command_explanation", "context": "get | Return only the phase value of the specified pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19210"} {"prompt": "Explain this bash code:\n\nkubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image", "completion": "This bash code get | List resource information in custom columns", "language": "bash", "type": "kubectl-command_explanation", "context": "get | List resource information in custom columns", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19211"} {"prompt": "Explain this text code:\n\nList all replication controllers and services together in ps output format", "completion": "This text code get | List all replication controllers and services together in ps output format", "language": "text", "type": "code-example_explanation", "context": "get | List all replication controllers and services together in ps output format", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19212"} {"prompt": "Explain this bash code:\n\nkubectl get rc/web 
service/frontend pods/web-pod-13je7", "completion": "This bash code get | Display one or many resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "get | Display one or many resources.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19213"} {"prompt": "Explain this text code:\n\nStart a hazelcast pod and let the container expose port 5701", "completion": "This text code run | Start a hazelcast pod and let the container expose port 5701", "language": "text", "type": "code-example_explanation", "context": "run | Start a hazelcast pod and let the container expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19214"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --port=5701", "completion": "This bash code run | Start a hazelcast pod and let the container expose port 5701", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and let the container expose port 5701", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19215"} {"prompt": "Explain this text code:\n\nStart a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "completion": "This text code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "text", "type": "code-example_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19216"} {"prompt": "Explain 
this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"", "completion": "This bash code run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19217"} {"prompt": "Explain this text code:\n\nStart a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "completion": "This text code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "text", "type": "code-example_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19218"} {"prompt": "Explain this bash code:\n\nkubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"", "completion": "This bash code run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19219"} {"prompt": "Explain this text code:\n\nDry run; print the corresponding API objects without creating them", "completion": "This text code run | Dry run; print the corresponding API objects without creating them", "language": "text", "type": "code-example_explanation", "context": "run | Dry run; print the corresponding API objects without creating them", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19220"} {"prompt": "Explain this text code:\n\nStart a nginx pod, but overload the spec with a partial set of values parsed from JSON", "completion": "This text code run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "language": "text", "type": "code-example_explanation", "context": "run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19221"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }'", "completion": "This bash code run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a nginx pod, but overload the spec with a partial set of values parsed from JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19222"} {"prompt": "Explain this text code:\n\nStart a busybox pod and keep it in the foreground, don't restart it if it exits", "completion": "This text code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "text", "type": "code-example_explanation", "context": "run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19223"} {"prompt": "Explain this bash code:\n\nkubectl run -i -t busybox --image=busybox --restart=Never", "completion": "This bash code run | Start a busybox pod and keep it in the foreground, don't restart it if it exits", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start a busybox pod and keep it in the foreground, 
don't restart it if it exits", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19224"} {"prompt": "Explain this text code:\n\nStart the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "completion": "This text code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "text", "type": "code-example_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19225"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>", "completion": "This bash code run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19226"} {"prompt": "Explain this text code:\n\nStart the nginx pod using a different command and custom arguments", "completion": "This text code run | Start the nginx pod using a different command and custom arguments", "language": "text", "type": "code-example_explanation", "context": "run | Start the nginx pod using a different command and custom arguments", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19227"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "completion": "This bash code run | Create and run a particular image in a pod.", "language": "bash", "type": "kubectl-command_explanation", "context": "run | Create and run a particular image in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19228"} {"prompt": "Explain this text code:\n\nCreate a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "completion": "This text code expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19229"} {"prompt": "Explain this bash code:\n\nkubectl expose rc nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19230"} {"prompt": "Explain this text code:\n\nCreate a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which 
serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19231"} {"prompt": "Explain this bash code:\n\nkubectl expose -f nginx-controller.yaml --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19232"} {"prompt": "Explain this text code:\n\nCreate a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "completion": "This text code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19233"} {"prompt": "Explain this bash code:\n\nkubectl expose pod valid-pod --port=444 --name=frontend", "completion": "This bash code expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19234"} {"prompt": "Explain this text code:\n\nCreate a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "completion": "This text code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "text", "type": "code-example_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19235"} {"prompt": "Explain this bash code:\n\nkubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https", "completion": "This bash code expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19236"} {"prompt": "Explain this text code:\n\nCreate a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "completion": "This text code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19237"} {"prompt": "Explain this bash 
code:\n\nkubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream", "completion": "This bash code expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19238"} {"prompt": "Explain this text code:\n\nCreate a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "completion": "This text code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19239"} {"prompt": "Explain this bash code:\n\nkubectl expose rs nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19240"} {"prompt": "Explain this text code:\n\nCreate a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "completion": "This text code expose | Create a service for an nginx 
deployment, which serves on port 80 and connects to the containers on port 8000", "language": "text", "type": "code-example_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19241"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment nginx --port=80 --target-port=8000", "completion": "This bash code expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "language": "bash", "type": "kubectl-command_explanation", "context": "expose | Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19242"} {"prompt": "Explain this text code:\n\nDelete a pod using the type and name specified in pod.json", "completion": "This text code delete | Delete a pod using the type and name specified in pod.json", "language": "text", "type": "code-example_explanation", "context": "delete | Delete a pod using the type and name specified in pod.json", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19243"} {"prompt": "Explain this text code:\n\nDelete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml", "completion": "This text code delete | Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml", "language": "text", "type": "code-example_explanation", "context": "delete | Delete resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19244"} {"prompt": "Explain this text code:\n\nDelete a pod based on the type and name in the JSON passed into stdin", "completion": "This text code delete | Delete a pod based on the type and name in the JSON passed into stdin", "language": "text", "type": "code-example_explanation", "context": "delete | Delete a pod based on the type and name in the JSON passed into stdin", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19245"} {"prompt": "Explain this text code:\n\nDelete pods and services with same names \"baz\" and \"foo\"", "completion": "This text code delete | Delete pods and services with same names \"baz\" and \"foo\"", "language": "text", "type": "code-example_explanation", "context": "delete | Delete pods and services with same names \"baz\" and \"foo\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19246"} {"prompt": "Explain this text code:\n\nApply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml", "completion": "This text code apply | Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml", "language": "text", "type": "code-example_explanation", "context": "apply | Apply resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19247"} {"prompt": "Explain this text code:\n\nNote: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "completion": "This text code apply | Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "language": "text", "type": "code-example_explanation", "context": "apply | Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19248"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml -l app=nginx", "completion": "This bash code apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19249"} {"prompt": "Explain this text code:\n\nApply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "completion": "This text code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "text", "type": "code-example_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19250"} {"prompt": "Explain this bash code:\n\nkubectl apply --prune -f manifest.yaml --all --prune-whitelist=core/v1/ConfigMap", "completion": "This bash code apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19251"} {"prompt": "Explain this text code:\n\nEdit the last-applied-configuration annotations by type/name in YAML", "completion": "This text code apply | Edit the last-applied-configuration annotations by type/name in YAML", "language": "text", "type": "code-example_explanation", "context": "apply | Edit the last-applied-configuration annotations by type/name in YAML", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19252"} {"prompt": "Explain this text code:\n\nEdit the last-applied-configuration annotations by file in JSON", "completion": "This text code apply | Edit the last-applied-configuration annotations by file in JSON", "language": "text", "type": "code-example_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19253"} {"prompt": "Explain this bash code:\n\nkubectl apply edit-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | Edit the last-applied-configuration annotations by file in 
JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Edit the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19254"} {"prompt": "Explain this text code:\n\nSet the last-applied-configuration of a resource to match the contents of a file", "completion": "This text code apply | Set the last-applied-configuration of a resource to match the contents of a file", "language": "text", "type": "code-example_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19255"} {"prompt": "Explain this text code:\n\nExecute set-last-applied against each configuration file in a directory", "completion": "This text code apply | Execute set-last-applied against each configuration file in a directory", "language": "text", "type": "code-example_explanation", "context": "apply | Execute set-last-applied against each configuration file in a directory", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19256"} {"prompt": "Explain this text code:\n\nSet the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "completion": "This text code apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "language": "text", "type": "code-example_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19257"} {"prompt": 
"Explain this bash code:\n\nkubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "completion": "This bash code apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19258"} {"prompt": "Explain this text code:\n\nView the last-applied-configuration annotations by type/name in YAML", "completion": "This text code apply | View the last-applied-configuration annotations by type/name in YAML", "language": "text", "type": "code-example_explanation", "context": "apply | View the last-applied-configuration annotations by type/name in YAML", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19259"} {"prompt": "Explain this text code:\n\nView the last-applied-configuration annotations by file in JSON", "completion": "This text code apply | View the last-applied-configuration annotations by file in JSON", "language": "text", "type": "code-example_explanation", "context": "apply | View the last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19260"} {"prompt": "Explain this bash code:\n\nkubectl apply view-last-applied -f deploy.yaml -o json", "completion": "This bash code apply | View the last-applied-configuration annotations by file in JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "apply | View the 
last-applied-configuration annotations by file in JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19261"} {"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "completion": "This text code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19262"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19263"} {"prompt": "Explain this text code:\n\nUpdate a pod identified by type and name in \"pod.json\"", "completion": "This text code annotate | Update a pod identified by type and name in \"pod.json\"", "language": "text", "type": "code-example_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19264"} {"prompt": "Explain this bash code:\n\nkubectl annotate -f pod.json description='my frontend'", "completion": "This bash code annotate | Update a pod identified by type and name in 
\"pod.json\"", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19265"} {"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "completion": "This text code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19266"} {"prompt": "Explain this bash code:\n\nkubectl annotate --overwrite pods foo description='my frontend running nginx'", "completion": "This bash code annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19267"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods --all description='my frontend running nginx'", "completion": "This bash code annotate | Update all pods in the namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update all pods in the namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19268"} {"prompt": "Explain this text 
code:\n\nUpdate pod 'foo' only if the resource is unchanged from version 1", "completion": "This text code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19269"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods foo description='my frontend running nginx' --resource-version=1", "completion": "This bash code annotate | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "annotate | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19270"} {"prompt": "Explain this text code:\n\nUpdate pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag", "completion": "This text code annotate | Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag", "language": "text", "type": "code-example_explanation", "context": "annotate | Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19271"} {"prompt": "Explain this text code:\n\nAuto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used", "completion": "This text code autoscale | Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU 
utilization specified so a default autoscaling policy will be used", "language": "text", "type": "code-example_explanation", "context": "autoscale | Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19272"} {"prompt": "Explain this text code:\n\nAuto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%", "completion": "This text code autoscale | Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%", "language": "text", "type": "code-example_explanation", "context": "autoscale | Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19273"} {"prompt": "Explain this text code:\n\nCreate an interactive debugging session in pod mypod and immediately attach to it. # (requires the EphemeralContainers feature to be enabled in the cluster)", "completion": "This text code debug | Create an interactive debugging session in pod mypod and immediately attach to it. # (requires the EphemeralContainers feature to be enabled in the cluster)", "language": "text", "type": "code-example_explanation", "context": "debug | Create an interactive debugging session in pod mypod and immediately attach to it. # (requires the EphemeralContainers feature to be enabled in the cluster)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19274"} {"prompt": "Explain this text code:\n\nCreate a debug container named debugger using a custom automated debugging image. 
# (requires the EphemeralContainers feature to be enabled in the cluster)", "completion": "This text code debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "language": "text", "type": "code-example_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19275"} {"prompt": "Explain this bash code:\n\nkubectl debug --image=myproj/debug-tools -c debugger mypod", "completion": "This bash code debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a debug container named debugger using a custom automated debugging image. # (requires the EphemeralContainers feature to be enabled in the cluster)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19276"} {"prompt": "Explain this text code:\n\nCreate a copy of mypod adding a debug container and attach to it", "completion": "This text code debug | Create a copy of mypod adding a debug container and attach to it", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19277"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --image=busybox --copy-to=my-debugger", "completion": "This bash code debug | Create a copy of mypod adding a debug container and attach to it", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and attach to it", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19278"} 
{"prompt": "Explain this text code:\n\nCreate a copy of mypod changing the command of mycontainer", "completion": "This text code debug | Create a copy of mypod changing the command of mycontainer", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19279"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh", "completion": "This bash code debug | Create a copy of mypod changing the command of mycontainer", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing the command of mycontainer", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19280"} {"prompt": "Explain this text code:\n\nCreate a copy of mypod changing all container images to busybox", "completion": "This text code debug | Create a copy of mypod changing all container images to busybox", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19281"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod --copy-to=my-debugger --set-image=*=busybox", "completion": "This bash code debug | Create a copy of mypod changing all container images to busybox", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod changing all container images to busybox", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19282"} {"prompt": "Explain this text code:\n\nCreate a copy of mypod adding a debug container and changing container images", 
"completion": "This text code debug | Create a copy of mypod adding a debug container and changing container images", "language": "text", "type": "code-example_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19283"} {"prompt": "Explain this bash code:\n\nkubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug", "completion": "This bash code debug | Create a copy of mypod adding a debug container and changing container images", "language": "bash", "type": "kubectl-command_explanation", "context": "debug | Create a copy of mypod adding a debug container and changing container images", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19284"} {"prompt": "Explain this text code:\n\nCreate an interactive debugging session on a node and immediately attach to it. # The container will run in the host namespaces and the host's filesystem will be mounted at /host", "completion": "This text code debug | Create an interactive debugging session on a node and immediately attach to it. # The container will run in the host namespaces and the host's filesystem will be mounted at /host", "language": "text", "type": "code-example_explanation", "context": "debug | Create an interactive debugging session on a node and immediately attach to it. 
# The container will run in the host namespaces and the host's filesystem will be mounted at /host", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19285"} {"prompt": "Explain this text code:\n\nKUBE_EDITOR=\"nano\" kubectl edit svc/docker-registry", "completion": "This text code edit | Use an alternative editor", "language": "text", "type": "code-example_explanation", "context": "edit | Use an alternative editor", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19286"} {"prompt": "Explain this text code:\n\nEdit the job 'myjob' in JSON using the v1 API format", "completion": "This text code edit | Edit the job 'myjob' in JSON using the v1 API format", "language": "text", "type": "code-example_explanation", "context": "edit | Edit the job 'myjob' in JSON using the v1 API format", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19287"} {"prompt": "Explain this text code:\n\nEdit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "completion": "This text code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "text", "type": "code-example_explanation", "context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19288"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/mydeployment -o yaml --save-config", "completion": "This bash code edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "language": "bash", "type": "kubectl-command_explanation", "context": "edit | Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19289"} {"prompt": "Explain this bash code:\n\nkubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "completion": "This bash code kustomize | Build a set of KRM resources using a 'kustomization.yaml' file. The DIR argument must be a path to a directory containing 'kustomization.yaml', or a git repository URL with a path suffix specifying sa", "language": "bash", "type": "kubectl-command_explanation", "context": "kustomize | Build a set of KRM resources using a 'kustomization.yaml' file. The DIR argument must be a path to a directory containing 'kustomization.yaml', or a git repository URL with a path suffix specifying sa", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19290"} {"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the label 'unhealthy' and the value 'true'", "completion": "This text code label | Update pod 'foo' with the label 'unhealthy' and the value 'true'", "language": "text", "type": "code-example_explanation", "context": "label | Update pod 'foo' with the label 'unhealthy' and the value 'true'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19291"} {"prompt": "Explain this text code:\n\nUpdate pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "completion": "This text code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "language": "text", "type": "code-example_explanation", "context": "label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19292"} {"prompt": "Explain this bash code:\n\nkubectl label --overwrite pods foo status=unhealthy", "completion": "This bash code label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing 
value", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19293"} {"prompt": "Explain this text code:\n\nUpdate a pod identified by the type and name in \"pod.json\"", "completion": "This text code label | Update a pod identified by the type and name in \"pod.json\"", "language": "text", "type": "code-example_explanation", "context": "label | Update a pod identified by the type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19294"} {"prompt": "Explain this text code:\n\nUpdate pod 'foo' only if the resource is unchanged from version 1", "completion": "This text code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "text", "type": "code-example_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19295"} {"prompt": "Explain this bash code:\n\nkubectl label pods foo status=unhealthy --resource-version=1", "completion": "This bash code label | Update pod 'foo' only if the resource is unchanged from version 1", "language": "bash", "type": "kubectl-command_explanation", "context": "label | Update pod 'foo' only if the resource is unchanged from version 1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19296"} {"prompt": "Explain this text code:\n\nUpdate pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag", "completion": "This text code label | Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag", "language": "text", 
"type": "code-example_explanation", "context": "label | Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19297"} {"prompt": "Explain this text code:\n\nPartially update a node using a strategic merge patch, specifying the patch as JSON", "completion": "This text code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "text", "type": "code-example_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19298"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as JSON", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19299"} {"prompt": "Explain this text code:\n\nPartially update a node using a strategic merge patch, specifying the patch as YAML", "completion": "This text code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "text", "type": "code-example_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19300"} {"prompt": "Explain this bash code:\n\nkubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'", "completion": "This bash code patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node using a strategic merge patch, specifying the patch as YAML", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19301"} {"prompt": "Explain this text code:\n\nPartially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "completion": "This text code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "text", "type": "code-example_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19302"} {"prompt": "Explain this bash code:\n\nkubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'", "completion": "This bash code patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19303"} {"prompt": "Explain this text code:\n\nUpdate a container's image; spec.containers[*].name is required because it's a merge key", "completion": "This text code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "text", "type": "code-example_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source":
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19304"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'", "completion": "This bash code patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image; spec.containers[*].name is required because it's a merge key", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19305"} {"prompt": "Explain this text code:\n\nUpdate a container's image using a JSON patch with positional arrays", "completion": "This text code patch | Update a container's image using a JSON patch with positional arrays", "language": "text", "type": "code-example_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19306"} {"prompt": "Explain this bash code:\n\nkubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]'", "completion": "This bash code patch | Update a container's image using a JSON patch with positional arrays", "language": "bash", "type": "kubectl-command_explanation", "context": "patch | Update a container's image using a JSON patch with positional arrays", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19307"} {"prompt": "Explain this text code:\n\nUpdate a single-container pod's image version (tag) to v4", "completion": "This text code replace | Update a single-container pod's image version (tag) to v4", "language": "text", "type": "code-example_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19308"} {"prompt": "Explain this bash code:\n\nkubectl get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | kubectl replace -f -", "completion": "This bash code replace | Update a single-container pod's image version (tag) to v4", "language": "bash", "type": "kubectl-command_explanation", "context": "replace | Update a single-container pod's image version (tag) to v4", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19309"} {"prompt": "Explain this text code:\n\nForce replace, delete and then re-create the resource", "completion": "This text code replace | Force replace, delete and then re-create the resource", "language": "text", "type": "code-example_explanation", "context": "replace | Force replace, delete and then re-create the resource", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19310"} {"prompt": "Explain this text code:\n\nMark the nginx deployment as paused # Any current state of the deployment will continue its function; new updates # to the deployment will not have an effect as long as the deployment is paused", "completion": "This text code rollout | Mark the nginx deployment as paused # Any current state of the deployment will continue its function; new updates # to the deployment will not have an effect as long as the deployment is paused", "language": "text", "type": "code-example_explanation", "context": "rollout | Mark the nginx deployment as paused # Any current state of the deployment will continue its function; new updates # to the deployment will not have an effect as long as the deployment is paused", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19311"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo --dry-run=server deployment/abc", "completion": "This bash code rollout | Roll back to the previous deployment with dry-run", "language": "bash", "type": "kubectl-command_explanation", "context": "rollout | Roll back to the previous deployment with dry-run", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19312"} {"prompt": "Explain this text code:\n\nScale a resource identified by type and name specified in \"foo.yaml\" to 3", "completion": "This text code scale | Scale a resource identified by type and name specified in \"foo.yaml\" to 3", "language": "text", "type": "code-example_explanation", "context": "scale | Scale a resource identified by type and name specified in \"foo.yaml\" to 3", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19313"} {"prompt": "Explain this text code:\n\nIf the deployment named mysql's current size is 2, scale mysql to 3", "completion": "This text code scale | If the deployment named mysql's current size is 2, scale mysql to 3", "language": "text", "type": "code-example_explanation", "context": "scale | If the deployment named mysql's current size is 2, scale mysql to 3", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19314"} {"prompt": "Explain this bash code:\n\nkubectl scale --current-replicas=2 --replicas=3 deployment/mysql", "completion": "This bash code scale | If the deployment named mysql's current size is 2, scale mysql to 3", "language": "bash", "type": "kubectl-command_explanation", "context": "scale | If the deployment named mysql's current size is 2, scale mysql to 3", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19315"} {"prompt": "Explain this text code:\n\nUpdate deployment 'registry' with a new environment variable", "completion": "This text code set | Update deployment 'registry' with a new environment variable", "language": "text", "type": "code-example_explanation", "context": "set | Update deployment 'registry' with a new environment variable", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19316"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/registry STORAGE_DIR=/local", "completion": "This bash code set | Update deployment 'registry' with a new environment variable", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update deployment 'registry' with a new environment variable", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19317"} {"prompt": "Explain this text code:\n\nList the environment variables defined on a deployments 'sample-build'", "completion": "This text code set | List the environment variables defined on a deployments 'sample-build'", "language": "text", "type": "code-example_explanation", "context": "set | List the environment variables defined on a deployments 'sample-build'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19318"} {"prompt": "Explain this text code:\n\nOutput modified deployment in YAML, and does not alter the object on the server", "completion": "This text code set | Output modified deployment in YAML, and does not alter the object on the server", "language": "text", "type": "code-example_explanation", "context": "set | Output modified deployment in YAML, and does not alter the object on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19319"} {"prompt": "Explain this bash code:\n\nkubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml", "completion": "This bash code set | Output modified deployment in YAML, and does not alter the object on the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Output modified deployment in YAML, and does not alter the object on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19320"} {"prompt": "Explain this text code:\n\nUpdate all containers in all replication controllers in the project to have ENV=prod", "completion": "This text code set | Update all containers in all replication controllers in the project to have ENV=prod", "language": "text", "type": "code-example_explanation", "context": "set | Update all containers in all replication controllers in the project to have ENV=prod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19321"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=secret/mysecret deployment/myapp", "completion": "This bash code set | Import environment from a secret", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a secret", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19322"} {"prompt": "Explain this bash code:\n\nkubectl set env --from=configmap/myconfigmap --prefix=MYSQL_ deployment/myapp", "completion": "This bash code set | Import environment from a config map with a prefix", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import environment from a config map with a prefix", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19323"} {"prompt": "Explain this bash code:\n\nkubectl set env --keys=my-example-key --from=configmap/myconfigmap deployment/myapp", "completion": "This bash code set | Import specific keys from a config map", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Import specific keys from a config map", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id":
"k8s_00618_code_explain_19324"} {"prompt": "Explain this text code:\n\nRemove the environment variable ENV from container 'c1' in all deployment configs", "completion": "This text code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "text", "type": "code-example_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19325"} {"prompt": "Explain this bash code:\n\nkubectl set env deployments --all --containers=\"c1\" ENV-", "completion": "This bash code set | Remove the environment variable ENV from container 'c1' in all deployment configs", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the environment variable ENV from container 'c1' in all deployment configs", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19326"} {"prompt": "Explain this text code:\n\nRemove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "completion": "This text code set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "language": "text", "type": "code-example_explanation", "context": "set | Remove the environment variable ENV from a deployment definition on disk and # update the deployment config on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19327"} {"prompt": "Explain this text code:\n\nSet some of the local shell environment into a deployment config on the server", "completion": "This text code set | Set some of the local shell environment into a deployment config on the server", "language":
"text", "type": "code-example_explanation", "context": "set | Set some of the local shell environment into a deployment config on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19328"} {"prompt": "Explain this text code:\n\nenv | grep RAILS_ | kubectl set env -e - deployment/registry", "completion": "This text code set | Set some of the local shell environment into a deployment config on the server", "language": "text", "type": "code-example_explanation", "context": "set | Set some of the local shell environment into a deployment config on the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19329"} {"prompt": "Explain this text code:\n\nSet a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "completion": "This text code set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "language": "text", "type": "code-example_explanation", "context": "set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19330"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1", "completion": "This bash code set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19331"} {"prompt": "Explain this text code:\n\nUpdate all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "completion": "This text code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "text", "type":
"code-example_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19332"} {"prompt": "Explain this bash code:\n\nkubectl set image deployments,rc nginx=nginx:1.9.1 --all", "completion": "This bash code set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19333"} {"prompt": "Explain this text code:\n\nUpdate image of all containers of daemonset abc to 'nginx:1.9.1'", "completion": "This text code set | Update image of all containers of daemonset abc to 'nginx:1.9.1'", "language": "text", "type": "code-example_explanation", "context": "set | Update image of all containers of daemonset abc to 'nginx:1.9.1'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19334"} {"prompt": "Explain this text code:\n\nPrint result (in yaml format) of updating nginx container image from local file, without hitting the server", "completion": "This text code set | Print result (in yaml format) of updating nginx container image from local file, without hitting the server", "language": "text", "type": "code-example_explanation", "context": "set | Print result (in yaml format) of updating nginx container image from local file, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19335"} {"prompt": "Explain this bash code:\n\nkubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "completion": "This bash code set | Print result (in yaml format) of updating nginx container image from local file, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print result (in yaml format) of updating nginx container image from local file, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19336"} {"prompt": "Explain this text code:\n\nSet a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "completion": "This text code set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "language": "text", "type": "code-example_explanation", "context": "set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19337"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set a deployments nginx container cpu limits to \"200m\" and memory to \"512Mi\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19338"} {"prompt": "Explain this text code:\n\nSet the resource request and limits for all containers in nginx", "completion": "This text code set | Set the resource request and limits for all containers in nginx", "language": "text", "type": "code-example_explanation", "context": "set | Set the resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19339"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi", "completion": "This bash code set | Set the resource request and limits for all containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the resource request and limits for all containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19340"} {"prompt": "Explain this text code:\n\nRemove the resource requests for resources on containers in nginx", "completion": "This text code set | Remove the resource requests for resources on containers in nginx", "language": "text", "type": "code-example_explanation", "context": "set | Remove the resource requests for resources on containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19341"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0", "completion": "This bash code set | Remove the resource requests for resources on containers in nginx", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Remove the resource requests for resources on containers in nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19342"} {"prompt": "Explain this text code:\n\nPrint the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "completion": "This text code set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "language": "text", "type": "code-example_explanation", "context": "set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19343"} {"prompt": "Explain this bash code:\n\nkubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "completion": "This bash code set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in yaml format) of updating nginx container limits from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19344"} {"prompt": "Explain this text code:\n\nSet the labels and selector before creating a deployment/service pair", "completion": "This text code set | Set the labels and selector before creating a deployment/service pair", "language": "text", "type": "code-example_explanation", "context": "set | Set the labels and selector before creating a deployment/service pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19345"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\nkubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -", "completion": "This bash code set | Set the labels and selector before creating a deployment/service pair", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set the labels and selector before creating a deployment/service pair", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19346"} {"prompt": "Explain this text code:\n\nSet deployment nginx-deployment's service account to serviceaccount1", "completion": "This text code set | Set deployment nginx-deployment's service account to serviceaccount1", "language": "text", "type": "code-example_explanation", "context": "set | Set deployment nginx-deployment's service account to serviceaccount1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19347"} {"prompt": "Explain this bash code:\n\nkubectl set serviceaccount deployment nginx-deployment serviceaccount1", "completion": "This bash code set | Set deployment nginx-deployment's service account to serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Set deployment nginx-deployment's service account to serviceaccount1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19348"} {"prompt": "Explain this text code:\n\nPrint the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "completion": "This text code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "text", "type": "code-example_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19349"} {"prompt":
"Explain this bash code:\n\nkubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19350"} {"prompt": "Explain this bash code:\n\nkubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1", "completion": "This bash code set | Update a cluster role binding for serviceaccount1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a cluster role binding for serviceaccount1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19351"} {"prompt": "Explain this bash code:\n\nkubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1", "completion": "This bash code set | Update a role binding for user1, user2, and group1", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Update a role binding for user1, user2, and group1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19352"} {"prompt": "Explain this text code:\n\nPrint the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "completion": "This text code set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "language": "text", "type": "code-example_explanation", "context": "set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "source":
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19353"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml", "completion": "This bash code set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "language": "bash", "type": "kubectl-command_explanation", "context": "set | Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19354"} {"prompt": "Explain this text code:\n\nWait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "completion": "This text code wait | Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "language": "text", "type": "code-example_explanation", "context": "wait | Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19355"} {"prompt": "Explain this text code:\n\nThe default value of status condition is true; you can set it to false", "completion": "This text code wait | The default value of status condition is true; you can set it to false", "language": "text", "type": "code-example_explanation", "context": "wait | The default value of status condition is true; you can set it to false", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19356"} {"prompt": "Explain this bash code:\n\nkubectl wait --for=condition=Ready=false pod/busybox1", "completion": "This bash code wait | The default value of status condition is true; you can set it to false", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | The default value of status condition is true; you can set it to false", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19357"} {"prompt": "Explain this text code:\n\nWait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "completion": "This text code wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "text", "type": "code-example_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19358"} {"prompt": "Explain this bash code:\n\nkubectl delete pod/busybox1\nkubectl wait --for=delete pod/busybox1 --timeout=60s", "completion": "This bash code wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "language": "bash", "type": "kubectl-command_explanation", "context": "wait | Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19359"} {"prompt": "Explain this text code:\n\nGet output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen", "completion": "This text code attach | Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen", "language": "text", "type": "code-example_explanation", "context": "attach | Get output from running pod mypod; use the 'kubectl.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first
container in the pod will be chosen", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19360"} {"prompt": "Explain this text code:\n\nSwitch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "completion": "This text code attach | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "language": "text", "type": "code-example_explanation", "context": "attach | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19361"} {"prompt": "Explain this text code:\n\nGet output from the first pod of a replica set named nginx", "completion": "This text code attach | Get output from the first pod of a replica set named nginx", "language": "text", "type": "code-example_explanation", "context": "attach | Get output from the first pod of a replica set named nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19362"} {"prompt": "Explain this text code:\n\nCheck to see if I can list deployments in my current namespace", "completion": "This text code auth | Check to see if I can list deployments in my current namespace", "language": "text", "type": "code-example_explanation", "context": "auth | Check to see if I can list deployments in my current namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19363"} {"prompt": "Explain this text code:\n\nCheck to see if I can do everything in my current namespace (\"*\" means all)", "completion": "This text code auth | Check to see if I can do everything in my current namespace (\"*\" means all)", "language": 
"text", "type": "code-example_explanation", "context": "auth | Check to see if I can do everything in my current namespace (\"*\" means all)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19364"} {"prompt": "Explain this text code:\n\nCheck to see if I can get the job named \"bar\" in namespace \"foo\"", "completion": "This text code auth | Check to see if I can get the job named \"bar\" in namespace \"foo\"", "language": "text", "type": "code-example_explanation", "context": "auth | Check to see if I can get the job named \"bar\" in namespace \"foo\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19365"} {"prompt": "Explain this text code:\n\n!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'kubectl cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'kubectl exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "completion": "This text code cp | !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'kubectl cp' will fail. # # For advanced use cases, such as symlinks, wildcard exp", "language": "text", "type": "code-example_explanation", "context": "cp | !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'kubectl cp' will fail. 
# # For advanced use cases, such as symlinks, wildcard exp", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19366"} {"prompt": "Explain this text code:\n\ntar cf - /tmp/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19367"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This text code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19368"} {"prompt": "Explain this bash code:\n\nkubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19369"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace", "completion": "This text code cp | Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default 
namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19370"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19371"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>", "completion": "This bash code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19372"} {"prompt": "Explain this text code:\n\nCopy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "completion": "This text code cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19373"} {"prompt": "Explain this bash code:\n\nkubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19374"} {"prompt": "Explain this text 
code:\n\nCopy /tmp/foo from a remote pod to /tmp/bar locally", "completion": "This text code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "text", "type": "code-example_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19375"} {"prompt": "Explain this bash code:\n\nkubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "completion": "This bash code cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "language": "bash", "type": "kubectl-command_explanation", "context": "cp | Copy /tmp/foo from a remote pod to /tmp/bar locally", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19376"} {"prompt": "Explain this bash code:\n\nkubectl describe nodes kubernetes-node-emt8.c.myproject.internal", "completion": "This bash code describe | Describe a node", "language": "bash", "type": "kubectl-command_explanation", "context": "describe | Describe a node", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19377"} {"prompt": "Explain this text code:\n\nDescribe a pod identified by type and name in \"pod.json\"", "completion": "This text code describe | Describe a pod identified by type and name in \"pod.json\"", "language": "text", "type": "code-example_explanation", "context": "describe | Describe a pod identified by type and name in \"pod.json\"", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19378"} {"prompt": "Explain this text code:\n\nDescribe all pods managed by the 'frontend' replication controller (rc-created pods # get the name of the rc as a prefix in the pod the name)", "completion": "This text code describe | Describe all pods managed by the 'frontend' replication controller (rc-created pods # get the name of the rc as a prefix in the pod the 
name)", "language": "text", "type": "code-example_explanation", "context": "describe | Describe all pods managed by the 'frontend' replication controller (rc-created pods # get the name of the rc as a prefix in the pod the name)", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19379"} {"prompt": "Explain this text code:\n\nGet output from running the 'date' command from pod mypod, using the first container by default", "completion": "This text code exec | Get output from running the 'date' command from pod mypod, using the first container by default", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running the 'date' command from pod mypod, using the first container by default", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19380"} {"prompt": "Explain this text code:\n\nGet output from running the 'date' command in ruby-container from pod mypod", "completion": "This text code exec | Get output from running the 'date' command in ruby-container from pod mypod", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running the 'date' command in ruby-container from pod mypod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19381"} {"prompt": "Explain this text code:\n\nSwitch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "completion": "This text code exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "language": "text", "type": "code-example_explanation", "context": "exec | Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19382"} {"prompt": "Explain this bash code:\n\nkubectl exec mypod -c ruby-container -i -t -- bash -il", "completion": "This bash code exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (", "language": "bash", "type": "kubectl-command_explanation", "context": "exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19383"} {"prompt": "Explain this text code:\n\nList contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\")", "completion": "This text code exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (", "language": "text", "type": "code-example_explanation", "context": "exec | List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. 
-i), # you must use two dashes (", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19384"} {"prompt": "Explain this text code:\n\nGet output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default", "completion": "This text code exec | Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19385"} {"prompt": "Explain this text code:\n\nGet output from running 'date' command from the first pod of the service myservice, using the first container by default", "completion": "This text code exec | Get output from running 'date' command from the first pod of the service myservice, using the first container by default", "language": "text", "type": "code-example_explanation", "context": "exec | Get output from running 'date' command from the first pod of the service myservice, using the first container by default", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19386"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from pod nginx with only one container", "completion": "This text code logs | Return snapshot logs from pod nginx with only one container", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from pod nginx with only one container", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19387"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from pod nginx with multi containers", "completion": "This text code 
logs | Return snapshot logs from pod nginx with multi containers", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from pod nginx with multi containers", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19388"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from all containers in pods defined by label app=nginx", "completion": "This text code logs | Return snapshot logs from all containers in pods defined by label app=nginx", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from all containers in pods defined by label app=nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19389"} {"prompt": "Explain this text code:\n\nReturn snapshot of previous terminated ruby container logs from pod web-1", "completion": "This text code logs | Return snapshot of previous terminated ruby container logs from pod web-1", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot of previous terminated ruby container logs from pod web-1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19390"} {"prompt": "Explain this text code:\n\nBegin streaming the logs of the ruby container in pod web-1", "completion": "This text code logs | Begin streaming the logs of the ruby container in pod web-1", "language": "text", "type": "code-example_explanation", "context": "logs | Begin streaming the logs of the ruby container in pod web-1", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19391"} {"prompt": "Explain this text code:\n\nBegin streaming the logs from all containers in pods defined by label app=nginx", "completion": "This text code logs | Begin streaming the logs from all containers in pods defined by label app=nginx", "language": 
"text", "type": "code-example_explanation", "context": "logs | Begin streaming the logs from all containers in pods defined by label app=nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19392"} {"prompt": "Explain this text code:\n\nDisplay only the most recent 20 lines of output in pod nginx", "completion": "This text code logs | Display only the most recent 20 lines of output in pod nginx", "language": "text", "type": "code-example_explanation", "context": "logs | Display only the most recent 20 lines of output in pod nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19393"} {"prompt": "Explain this text code:\n\nShow all logs from pod nginx written in the last hour", "completion": "This text code logs | Show all logs from pod nginx written in the last hour", "language": "text", "type": "code-example_explanation", "context": "logs | Show all logs from pod nginx written in the last hour", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19394"} {"prompt": "Explain this text code:\n\nShow logs from a kubelet with an expired serving certificate", "completion": "This text code logs | Show logs from a kubelet with an expired serving certificate", "language": "text", "type": "code-example_explanation", "context": "logs | Show logs from a kubelet with an expired serving certificate", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19395"} {"prompt": "Explain this bash code:\n\nkubectl logs --insecure-skip-tls-verify-backend nginx", "completion": "This bash code logs | Return snapshot logs from first container of a job named hello", "language": "bash", "type": "kubectl-command_explanation", "context": "logs | Return snapshot logs from first container of a job named hello", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", 
"id": "k8s_00618_code_explain_19396"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from first container of a job named hello", "completion": "This text code logs | Return snapshot logs from first container of a job named hello", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from first container of a job named hello", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19397"} {"prompt": "Explain this text code:\n\nReturn snapshot logs from container nginx-1 of a deployment named nginx", "completion": "This text code logs | Return snapshot logs from container nginx-1 of a deployment named nginx", "language": "text", "type": "code-example_explanation", "context": "logs | Return snapshot logs from container nginx-1 of a deployment named nginx", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19398"} {"prompt": "Explain this text code:\n\nListen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod", "completion": "This text code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19399"} {"prompt": "Explain this text code:\n\nListen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "completion": "This text code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, 
forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19400"} {"prompt": "Explain this bash code:\n\nkubectl port-forward deployment/mydeployment 5000 6000", "completion": "This bash code port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19401"} {"prompt": "Explain this text code:\n\nListen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "completion": "This text code port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19402"} {"prompt": "Explain this text code:\n\nListen on port 8888 locally, forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on port 8888 locally, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8888 locally, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19403"} {"prompt": "Explain this text code:\n\nListen on port 8888 on all addresses, 
forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19404"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on all addresses, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19405"} {"prompt": "Explain this text code:\n\nListen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19406"} {"prompt": "Explain this bash code:\n\nkubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000", "completion": "This bash code port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "language": "bash", "type": "kubectl-command_explanation", "context": "port-forward | Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19407"} {"prompt": "Explain this text code:\n\nListen on a random port locally, 
forwarding to 5000 in the pod", "completion": "This text code port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "language": "text", "type": "code-example_explanation", "context": "port-forward | Listen on a random port locally, forwarding to 5000 in the pod", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19408"} {"prompt": "Explain this text code:\n\nTo proxy all of the Kubernetes API and nothing else", "completion": "This text code proxy | To proxy all of the Kubernetes API and nothing else", "language": "text", "type": "code-example_explanation", "context": "proxy | To proxy all of the Kubernetes API and nothing else", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19409"} {"prompt": "Explain this text code:\n\nTo proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "completion": "This text code proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "language": "text", "type": "code-example_explanation", "context": "proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19410"} {"prompt": "Explain this bash code:\n\nkubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/", "completion": "This bash code proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "language": "bash", "type": "kubectl-command_explanation", "context": "proxy | To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods'", "source": 
"https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19411"} {"prompt": "Explain this text code:\n\nTo proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "completion": "This text code proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "language": "text", "type": "code-example_explanation", "context": "proxy | To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19412"} {"prompt": "Explain this text code:\n\nRun a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/", "completion": "This text code proxy | Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/", "language": "text", "type": "code-example_explanation", "context": "proxy | Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19413"} {"prompt": "Explain this text code:\n\nRun a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout", "completion": "This text code proxy | Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout", "language": "text", "type": "code-example_explanation", "context": "proxy | Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19414"} {"prompt": "Explain this text code:\n\nRun a proxy 
to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/", "completion": "This text code proxy | Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/", "language": "text", "type": "code-example_explanation", "context": "proxy | Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19415"} {"prompt": "Explain this text code:\n\nShow metrics for the pods defined by label name=myLabel", "completion": "This text code top | Show metrics for the pods defined by label name=myLabel", "language": "text", "type": "code-example_explanation", "context": "top | Show metrics for the pods defined by label name=myLabel", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19416"} {"prompt": "Explain this text code:\n\nPrint the address of the control plane and cluster services", "completion": "This text code cluster-info | Print the address of the control plane and cluster services", "language": "text", "type": "code-example_explanation", "context": "cluster-info | Print the address of the control plane and cluster services", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19417"} {"prompt": "Explain this text code:\n\nDump current cluster state to /path/to/cluster-state", "completion": "This text code cluster-info | Dump current cluster state to /path/to/cluster-state", "language": "text", "type": "code-example_explanation", "context": "cluster-info | Dump current cluster state to /path/to/cluster-state", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19418"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump all namespaces to stdout", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump all namespaces to stdout", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19419"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This bash code cluster-info | Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. If you sp", "language": "bash", "type": "kubectl-command_explanation", "context": "cluster-info | Dump cluster information out suitable for debugging and diagnosing cluster problems. By default, dumps everything to stdout. You can optionally specify a directory with --output-directory. 
If you sp", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19420"} {"prompt": "Explain this text code:\n\nDrain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it", "completion": "This text code drain | Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it", "language": "text", "type": "code-example_explanation", "context": "drain | Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19421"} {"prompt": "Explain this text code:\n\nAs above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes", "completion": "This text code drain | As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes", "language": "text", "type": "code-example_explanation", "context": "drain | As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19422"} {"prompt": "Explain this text code:\n\nUpdate node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified", "completion": "This text code taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as 
specified", "language": "text", "type": "code-example_explanation", "context": "taint | Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19423"} {"prompt": "Explain this bash code:\n\nkubectl taint nodes foo dedicated=special-user:NoSchedule", "completion": "This bash code taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19424"} {"prompt": "Explain this text code:\n\nRemove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "completion": "This text code taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "language": "text", "type": "code-example_explanation", "context": "taint | Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19425"} {"prompt": "Explain this text code:\n\nRemove from node 'foo' all the taints with key 'dedicated'", "completion": "This text code taint | Remove from node 'foo' all the taints with key 'dedicated'", "language": "text", "type": "code-example_explanation", "context": "taint | Remove from node 'foo' all the taints with key 'dedicated'", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19426"} {"prompt": "Explain this text code:\n\nAdd a taint with key 'dedicated' on nodes having label 
mylabel=X", "completion": "This text code taint | Add a taint with key 'dedicated' on nodes having label mylabel=X", "language": "text", "type": "code-example_explanation", "context": "taint | Add a taint with key 'dedicated' on nodes having label mylabel=X", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19427"} {"prompt": "Explain this bash code:\n\nkubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule", "completion": "This bash code taint | Add to node 'foo' a taint with key 'bar' and no value", "language": "bash", "type": "kubectl-command_explanation", "context": "taint | Add to node 'foo' a taint with key 'bar' and no value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19428"} {"prompt": "Explain this text code:\n\nAdd to node 'foo' a taint with key 'bar' and no value", "completion": "This text code taint | Add to node 'foo' a taint with key 'bar' and no value", "language": "text", "type": "code-example_explanation", "context": "taint | Add to node 'foo' a taint with key 'bar' and no value", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19429"} {"prompt": "Explain this text code:\n\nPrint the supported API resources with more information", "completion": "This text code api-resources | Print the supported API resources with more information", "language": "text", "type": "code-example_explanation", "context": "api-resources | Print the supported API resources with more information", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19430"} {"prompt": "Explain this text code:\n\nPrint the supported API resources sorted by a column", "completion": "This text code api-resources | Print the supported API resources sorted by a column", "language": "text", "type": "code-example_explanation", "context": "api-resources | Print the supported API 
resources sorted by a column", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19431"} {"prompt": "Explain this text code:\n\nPrint the supported API resources with a specific APIGroup", "completion": "This text code api-resources | Print the supported API resources with a specific APIGroup", "language": "text", "type": "code-example_explanation", "context": "api-resources | Print the supported API resources with a specific APIGroup", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19432"} {"prompt": "Explain this text code:\n\nInstalling bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS", "completion": "This text code completion | Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS", "language": "text", "type": "code-example_explanation", "context": "completion | Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19433"} {"prompt": "Explain this text code:\n\nIf kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "completion": "This text code completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "language": "text", "type": "code-example_explanation", "context": "completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19434"} {"prompt": "Explain this 
bash code:\n\nkubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl", "completion": "This bash code completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19435"} {"prompt": "Explain this text code:\n\nInstalling bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion code for bash into the current shell", "completion": "This text code completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion code for bash into the current shell", "language": "text", "type": "code-example_explanation", "context": "completion | Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the kubectl completion code for bash into the current shell", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19436"} {"prompt": "Explain this text code:\n\nWrite bash completion code to a file and source it from .bash_profile", "completion": "This text code completion | Write bash completion code to a file and source it from .bash_profile", "language": "text", "type": "code-example_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19437"} {"prompt": "Explain this bash code:\n\nkubectl completion bash > ~/.kube/completion.bash.inc\nprintf \"", "completion": "This bash code completion | Write bash completion code to a file and source it from .bash_profile", "language": "bash", "type": "kubectl-command_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19438"} {"prompt": "Explain this text code:\n\nsource '$HOME/.kube/completion.bash.inc'\n\" >> $HOME/.bash_profile\nsource $HOME/.bash_profile", "completion": "This text code completion | Write bash completion code to a file and source it from .bash_profile", "language": "text", "type": "code-example_explanation", "context": "completion | Write bash completion code to a file and source it from .bash_profile", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19439"} {"prompt": "Explain this text code:\n\nLoad the kubectl completion code for zsh[1] into the current shell", "completion": "This text code completion | Load the kubectl completion code for zsh[1] into the current shell", "language": "text", "type": "code-example_explanation", "context": "completion | Load the kubectl completion code for zsh[1] into the current shell", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id":
"k8s_00618_code_explain_19440"} {"prompt": "Explain this text code:\n\nSet the kubectl completion code for zsh[1] to autoload on startup", "completion": "This text code completion | Set the kubectl completion code for zsh[1] to autoload on startup", "language": "text", "type": "code-example_explanation", "context": "completion | Set the kubectl completion code for zsh[1] to autoload on startup", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19441"} {"prompt": "Explain this text code:\n\nRename the context 'old-name' to 'new-name' in your kubeconfig file", "completion": "This text code config | Rename the context 'old-name' to 'new-name' in your kubeconfig file", "language": "text", "type": "code-example_explanation", "context": "config | Rename the context 'old-name' to 'new-name' in your kubeconfig file", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19442"} {"prompt": "Explain this text code:\n\nSet the server field on the my-cluster cluster to https://1.2.3.4", "completion": "This text code config | Set the server field on the my-cluster cluster to https://1.2.3.4", "language": "text", "type": "code-example_explanation", "context": "config | Set the server field on the my-cluster cluster to https://1.2.3.4", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19443"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.server https://1.2.3.4", "completion": "This bash code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19444"} {"prompt": "Explain this text code:\n\nSet the certificate-authority-data 
field on the my-cluster cluster", "completion": "This text code config | Set the certificate-authority-data field on the my-cluster cluster", "language": "text", "type": "code-example_explanation", "context": "config | Set the certificate-authority-data field on the my-cluster cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19445"} {"prompt": "Explain this bash code:\n\nkubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)", "completion": "This bash code config | Set the cluster field in the my-context context to my-cluster", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19446"} {"prompt": "Explain this text code:\n\nSet the cluster field in the my-context context to my-cluster", "completion": "This text code config | Set the cluster field in the my-context context to my-cluster", "language": "text", "type": "code-example_explanation", "context": "config | Set the cluster field in the my-context context to my-cluster", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19447"} {"prompt": "Explain this bash code:\n\nkubectl config set contexts.my-context.cluster my-cluster", "completion": "This bash code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19448"} {"prompt": "Explain this text code:\n\nSet the client-key-data field in the cluster-admin user using --set-raw-bytes option", 
"completion": "This text code config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "language": "text", "type": "code-example_explanation", "context": "config | Set the client-key-data field in the cluster-admin user using --set-raw-bytes option", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19449"} {"prompt": "Explain this bash code:\n\nkubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "completion": "This bash code config | Set an individual value in a kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set an individual value in a kubeconfig file.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19450"} {"prompt": "Explain this text code:\n\nSet only the server field on the e2e cluster entry without touching other values", "completion": "This text code config | Set only the server field on the e2e cluster entry without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set only the server field on the e2e cluster entry without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19451"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --server=https://1.2.3.4", "completion": "This bash code config | Embed certificate authority data for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19452"} {"prompt": "Explain this text code:\n\nEmbed certificate authority data for the e2e cluster entry", "completion": "This text code config | Embed certificate authority data for the 
e2e cluster entry", "language": "text", "type": "code-example_explanation", "context": "config | Embed certificate authority data for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19453"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt", "completion": "This bash code config | Disable cert checking for the dev cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Disable cert checking for the dev cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19454"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --insecure-skip-tls-verify=true", "completion": "This bash code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19455"} {"prompt": "Explain this text code:\n\nSet custom TLS server name to use for validation for the e2e cluster entry", "completion": "This text code config | Set custom TLS server name to use for validation for the e2e cluster entry", "language": "text", "type": "code-example_explanation", "context": "config | Set custom TLS server name to use for validation for the e2e cluster entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19456"} {"prompt": "Explain this bash code:\n\nkubectl config set-cluster e2e --tls-server-name=my-cluster-name", "completion": "This bash code config | Set a cluster entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a cluster 
entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19457"} {"prompt": "Explain this text code:\n\nSet the user field on the gce context entry without touching other values", "completion": "This text code config | Set the user field on the gce context entry without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set the user field on the gce context entry without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19458"} {"prompt": "Explain this bash code:\n\nkubectl config set-context gce --user=cluster-admin", "completion": "This bash code config | Set a context entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a context entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19459"} {"prompt": "Explain this text code:\n\nSet only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "completion": "This text code config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "language": "text", "type": "code-example_explanation", "context": "config | Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19460"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key", "completion": "This bash code config | Set basic auth for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set basic auth for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19461"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif", "completion": "This bash code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19462"} {"prompt": "Explain this text code:\n\nEmbed client certificate data in the \"cluster-admin\" entry", "completion": "This text code config | Embed client certificate data in the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Embed client certificate data in the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19463"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true", "completion": "This bash code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19464"} {"prompt": "Explain this text code:\n\nEnable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "completion": "This text code config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": 
"k8s_00618_code_explain_19465"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=gcp", "completion": "This bash code config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19466"} {"prompt": "Explain this text code:\n\nEnable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "completion": "This text code config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "language": "text", "type": "code-example_explanation", "context": "config | Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19467"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar", "completion": "This bash code config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19468"} {"prompt": "Explain this text code:\n\nRemove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "completion": "This text code config | Remove the \"client-secret\" config value for the OpenID Connect 
auth provider for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19469"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-", "completion": "This bash code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19470"} {"prompt": "Explain this text code:\n\nEnable new exec auth plugin for the \"cluster-admin\" entry", "completion": "This text code config | Enable new exec auth plugin for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Enable new exec auth plugin for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19471"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1", "completion": "This bash code config | Define new exec auth plugin args for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Define new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19472"} {"prompt": "Explain this text code:\n\nDefine new exec auth plugin args for the \"cluster-admin\" entry", "completion": "This text code config | Define new exec 
auth plugin args for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Define new exec auth plugin args for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19473"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2", "completion": "This bash code config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19474"} {"prompt": "Explain this text code:\n\nCreate or update exec auth plugin environment variables for the \"cluster-admin\" entry", "completion": "This text code config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Create or update exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19475"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2", "completion": "This bash code config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19476"} {"prompt": "Explain this text code:\n\nRemove exec auth plugin environment variables for the \"cluster-admin\" 
entry", "completion": "This text code config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "language": "text", "type": "code-example_explanation", "context": "config | Remove exec auth plugin environment variables for the \"cluster-admin\" entry", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19477"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials cluster-admin --exec-env=var-to-remove-", "completion": "This bash code config | Set a user entry in kubeconfig.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Set a user entry in kubeconfig.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19478"} {"prompt": "Explain this text code:\n\nShow merged kubeconfig settings and raw certificate data", "completion": "This text code config | Show merged kubeconfig settings and raw certificate data", "language": "text", "type": "code-example_explanation", "context": "config | Show merged kubeconfig settings and raw certificate data", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19479"} {"prompt": "Explain this bash code:\n\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This bash code config | Display merged kubeconfig settings or a specified kubeconfig file.", "language": "bash", "type": "kubectl-command_explanation", "context": "config | Display merged kubeconfig settings or a specified kubeconfig file.", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19480"} {"prompt": "Explain this text code:\n\nGet the documentation of the resource and its fields", "completion": "This text code explain | Get the documentation of the resource and its fields", "language": "text", "type": "code-example_explanation", "context": "explain | 
Get the documentation of the resource and its fields", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19481"} {"prompt": "Explain this text code:\n\nGet the documentation of a specific field of a resource", "completion": "This text code explain | Get the documentation of a specific field of a resource", "language": "text", "type": "code-example_explanation", "context": "explain | Get the documentation of a specific field of a resource", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19482"} {"prompt": "Explain this text code:\n\nPrint the client and server versions for the current context", "completion": "This text code version | Print the client and server versions for the current context", "language": "text", "type": "code-example_explanation", "context": "version | Print the client and server versions for the current context", "source": "https://kubernetes.io/docs/reference/kubectl/kubectl-cmds/", "id": "k8s_00618_code_explain_19483"} {"prompt": "Complete this yaml code:\n\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n 
verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]\n", "completion": " # Don't log authenticated requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n # Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19484"} {"prompt": "Explain this yaml code:\n\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which 
is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]\n\n # Don't log authenticated requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n # Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "completion": "This yaml code Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19485"} {"prompt": "Complete this yaml code:\n\n# Log all requests at the Metadata level.\napiVersion: audit.k8s.io/v1", "completion": "kind: Policy\nrules:\n- level: Metadata", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates an audit p", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19486"} {"prompt": "Explain this yaml code:\n\n# Log all requests at the Metadata level.\napiVersion: audit.k8s.io/v1\nkind: Policy\nrules:\n- level: Metadata", "completion": "This yaml code Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates an audit p", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates an audit p", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19487"} {"prompt": "Complete this json code:\n\n[\n {\n \"op\": \"replace\",\n \"path\": \"/spec/parallelism\",\n \"value\": 0", "completion": " },\n {\n \"op\": \"remove\",\n \"path\": \"/spec/template/spec/containers/0/terminationMessagePolicy\"\n }\n]", "language": "json", "type": "api-response", "context": "Auditing | The log backend writes audit events to a file in JSONlines format. 
You can configure the log audit backend using the following kube-apiserver flags:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19488"} {"prompt": "Explain this json code:\n\n[\n {\n \"op\": \"replace\",\n \"path\": \"/spec/parallelism\",\n \"value\": 0\n },\n {\n \"op\": \"remove\",\n \"path\": \"/spec/template/spec/containers/0/terminationMessagePolicy\"\n }\n]", "completion": "This json code Auditing | The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:", "language": "json", "type": "api-response_explanation", "context": "Auditing | The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19489"} {"prompt": "Explain this text code:\n\n- --audit-policy-file=/etc/kubernetes/audit-policy.yaml\n - --audit-log-path=/var/log/kubernetes/audit/audit.log", "completion": "This text code Auditing | then mount the volumes:", "language": "text", "type": "code-example_explanation", "context": "Auditing | then mount the volumes:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19490"} {"prompt": "Complete this text code:\n\n...\nvolumeMounts:\n - mountPath: /etc/kubernetes/audit-policy.yaml\n name: audit", "completion": " readOnly: true\n - mountPath: /var/log/kubernetes/audit/\n name: audit-log\n readOnly: false", "language": "text", "type": "code-example", "context": "Auditing | and finally configure the hostPath:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19491"} {"prompt": "Explain this text code:\n\n...\nvolumeMounts:\n - mountPath: /etc/kubernetes/audit-policy.yaml\n name: audit\n readOnly: true\n - 
mountPath: /var/log/kubernetes/audit/\n name: audit-log\n readOnly: false", "completion": "This text code Auditing | and finally configure the hostPath:", "language": "text", "type": "code-example_explanation", "context": "Auditing | and finally configure the hostPath:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19492"} {"prompt": "Complete this text code:\n\n...\nvolumes:\n- name: audit\n hostPath:\n path: /etc/kubernetes/audit-policy.yaml", "completion": " type: File\n\n- name: audit-log\n hostPath:\n path: /var/log/kubernetes/audit/\n type: DirectoryOrCreate", "language": "text", "type": "code-example", "context": "Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19493"} {"prompt": "Explain this text code:\n\n...\nvolumes:\n- name: audit\n hostPath:\n path: /etc/kubernetes/audit-policy.yaml\n type: File\n\n- name: audit-log\n hostPath:\n path: /var/log/kubernetes/audit/\n type: DirectoryOrCreate", "completion": "This text code Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "language": "text", "type": "code-example_explanation", "context": "Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. 
You can configure a webhook audit backend usin", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19494"} {"prompt": "Complete this yaml code:\n\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]\n", "completion": " # Don't log authenticated requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n # Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and 
extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19495"} {"prompt": "Explain this yaml code:\n\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]\n\n # Don't log authenticated requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in 
kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n # Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "completion": "This yaml code Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19496"} {"prompt": "Complete this yaml code:\n\n# Log all requests at the Metadata level.\napiVersion: audit.k8s.io/v1", "completion": "kind: Policy\nrules:\n- level: Metadata", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. 
You can check the configure-helper.sh script, which generates an audit p", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19497"} {"prompt": "Explain this yaml code:\n\n# Log all requests at the Metadata level.\napiVersion: audit.k8s.io/v1\nkind: Policy\nrules:\n- level: Metadata", "completion": "This yaml code Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates an audit p", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates an audit p", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19498"} {"prompt": "Explain this text code:\n\n/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name", "completion": "This text code Auditing | The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:", "language": "text", "type": "code-example_explanation", "context": "Auditing | The log backend writes audit events to a file in JSONlines format. 
You can configure the log audit backend using the following kube-apiserver flags:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19499"} {"prompt": "Complete this json code:\n\n[\n {\n \"op\": \"replace\",\n \"path\": \"/spec/parallelism\",\n \"value\": 0", "completion": " },\n {\n \"op\": \"remove\",\n \"path\": \"/spec/template/spec/containers/0/terminationMessagePolicy\"\n }\n]", "language": "json", "type": "api-response", "context": "Auditing | The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19500"} {"prompt": "Explain this json code:\n\n[\n {\n \"op\": \"replace\",\n \"path\": \"/spec/parallelism\",\n \"value\": 0\n },\n {\n \"op\": \"remove\",\n \"path\": \"/spec/template/spec/containers/0/terminationMessagePolicy\"\n }\n]", "completion": "This json code Auditing | The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:", "language": "json", "type": "api-response_explanation", "context": "Auditing | The log backend writes audit events to a file in JSONlines format. 
You can configure the log audit backend using the following kube-apiserver flags:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19501"} {"prompt": "Explain this yaml code:\n\n- --audit-policy-file=/etc/kubernetes/audit-policy.yaml\n - --audit-log-path=/var/log/kubernetes/audit/audit.log", "completion": "This yaml code Auditing | then mount the volumes:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | then mount the volumes:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19502"} {"prompt": "Complete this yaml code:\n\n...\nvolumeMounts:\n - mountPath: /etc/kubernetes/audit-policy.yaml\n name: audit", "completion": " readOnly: true\n - mountPath: /var/log/kubernetes/audit/\n name: audit-log\n readOnly: false", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | and finally configure the hostPath:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19503"} {"prompt": "Explain this yaml code:\n\n...\nvolumeMounts:\n - mountPath: /etc/kubernetes/audit-policy.yaml\n name: audit\n readOnly: true\n - mountPath: /var/log/kubernetes/audit/\n name: audit-log\n readOnly: false", "completion": "This yaml code Auditing | and finally configure the hostPath:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | and finally configure the hostPath:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19504"} {"prompt": "Complete this yaml code:\n\n...\nvolumes:\n- name: audit\n hostPath:\n path: /etc/kubernetes/audit-policy.yaml", "completion": " type: File\n\n- name: audit-log\n hostPath:\n path: /var/log/kubernetes/audit/\n type: DirectoryOrCreate", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | The webhook audit backend sends audit events 
to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19505"} {"prompt": "Explain this yaml code:\n\n...\nvolumes:\n- name: audit\n hostPath:\n path: /etc/kubernetes/audit-policy.yaml\n type: File\n\n- name: audit-log\n hostPath:\n path: /var/log/kubernetes/audit/\n type: DirectoryOrCreate", "completion": "This yaml code Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19506"} {"prompt": "Complete this yaml code:\n\naudit/audit-policy.yaml\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch 
requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]", "completion": "\n # Don't log authenticated requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n # Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19507"} {"prompt": "Explain this yaml code:\n\naudit/audit-policy.yaml\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse 
level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]\n\n # Don't log authenticated requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n # Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "completion": 
"This yaml code Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19508"} {"prompt": "Complete this yaml code:\n\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]\n", "completion": " # Don't log authenticated requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n 
# Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19509"} {"prompt": "Explain this yaml code:\n\napiVersion: audit.k8s.io/v1 # This is required.\nkind: Policy\n# Don't generate audit events for all requests in RequestReceived stage.\nomitStages:\n - \"RequestReceived\"\nrules:\n # Log pod changes at RequestResponse level\n - level: RequestResponse\n resources:\n - group: \"\"\n # Resource \"pods\" doesn't match requests to any subresource of pods,\n # which is consistent with the RBAC policy.\n resources: [\"pods\"]\n # Log \"pods/log\", \"pods/status\" at Metadata level\n - level: Metadata\n resources:\n - group: \"\"\n resources: [\"pods/log\", \"pods/status\"]\n\n # Don't log requests to a configmap called \"controller-leader\"\n - level: None\n resources:\n - group: \"\"\n resources: [\"configmaps\"]\n resourceNames: [\"controller-leader\"]\n\n # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services\n - level: None\n users: [\"system:kube-proxy\"]\n verbs: [\"watch\"]\n resources:\n - group: \"\" # core API group\n resources: [\"endpoints\", \"services\"]\n\n # Don't log authenticated 
requests to certain non-resource URL paths.\n - level: None\n userGroups: [\"system:authenticated\"]\n nonResourceURLs:\n - \"/api*\" # Wildcard matching.\n - \"/version\"\n\n # Log the request body of configmap changes in kube-system.\n - level: Request\n resources:\n - group: \"\" # core API group\n resources: [\"configmaps\"]\n # This rule only applies to resources in the \"kube-system\" namespace.\n # The empty string \"\" can be used to select non-namespaced resources.\n namespaces: [\"kube-system\"]\n\n # Log configmap and secret changes in all other namespaces at the Metadata level.\n - level: Metadata\n resources:\n - group: \"\" # core API group\n resources: [\"secrets\", \"configmaps\"]\n\n # Log all other resources in core and extensions at the Request level.\n - level: Request\n resources:\n - group: \"\" # core API group\n - group: \"extensions\" # Version of group should NOT be included.\n\n # A catch-all rule to log all other requests at the Metadata level.\n - level: Metadata\n # Long-running requests like watches that fall under this rule will not\n # generate an audit event in RequestReceived.\n omitStages:\n - \"RequestReceived\"", "completion": "This yaml code Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | You can use a minimal audit policy file to log all requests at the Metadata level:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19510"} {"prompt": "Complete this yaml code:\n\n# Log all requests at the Metadata level.\napiVersion: audit.k8s.io/v1", "completion": "kind: Policy\nrules:\n- level: Metadata", "language": "yaml", "type": "kubernetes-manifest", "context": "Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. 
You can check the configure-helper.sh script, which generates an audit p", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19511"} {"prompt": "Explain this yaml code:\n\n# Log all requests at the Metadata level.\napiVersion: audit.k8s.io/v1\nkind: Policy\nrules:\n- level: Metadata", "completion": "This yaml code Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates an audit p", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Auditing | If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the configure-helper.sh script, which generates an audit p", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19512"} {"prompt": "Complete this json code:\n\n[\n {\n \"op\": \"replace\",\n \"path\": \"/spec/parallelism\",\n \"value\": 0", "completion": " },\n {\n \"op\": \"remove\",\n \"path\": \"/spec/template/spec/containers/0/terminationMessagePolicy\"\n }\n]", "language": "json", "type": "api-response", "context": "Auditing | The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19513"} {"prompt": "Explain this json code:\n\n[\n {\n \"op\": \"replace\",\n \"path\": \"/spec/parallelism\",\n \"value\": 0\n },\n {\n \"op\": \"remove\",\n \"path\": \"/spec/template/spec/containers/0/terminationMessagePolicy\"\n }\n]", "completion": "This json code Auditing | The log backend writes audit events to a file in JSONlines format. 
You can configure the log audit backend using the following kube-apiserver flags:", "language": "json", "type": "api-response_explanation", "context": "Auditing | The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19514"} {"prompt": "Explain this text code:\n\n- --audit-policy-file=/etc/kubernetes/audit-policy.yaml\n - --audit-log-path=/var/log/kubernetes/audit/audit.log", "completion": "This text code Auditing | then mount the volumes:", "language": "text", "type": "code-example_explanation", "context": "Auditing | then mount the volumes:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19515"} {"prompt": "Complete this text code:\n\n...\nvolumeMounts:\n - mountPath: /etc/kubernetes/audit-policy.yaml\n name: audit", "completion": " readOnly: true\n - mountPath: /var/log/kubernetes/audit/\n name: audit-log\n readOnly: false", "language": "text", "type": "code-example", "context": "Auditing | and finally configure the hostPath:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19516"} {"prompt": "Explain this text code:\n\n...\nvolumeMounts:\n - mountPath: /etc/kubernetes/audit-policy.yaml\n name: audit\n readOnly: true\n - mountPath: /var/log/kubernetes/audit/\n name: audit-log\n readOnly: false", "completion": "This text code Auditing | and finally configure the hostPath:", "language": "text", "type": "code-example_explanation", "context": "Auditing | and finally configure the hostPath:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19517"} {"prompt": "Complete this text code:\n\n...\nvolumes:\n- name: audit\n hostPath:\n path: /etc/kubernetes/audit-policy.yaml", "completion": " type: File\n\n- 
name: audit-log\n hostPath:\n path: /var/log/kubernetes/audit/\n type: DirectoryOrCreate", "language": "text", "type": "code-example", "context": "Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_completion_19518"} {"prompt": "Explain this text code:\n\n...\nvolumes:\n- name: audit\n hostPath:\n path: /etc/kubernetes/audit-policy.yaml\n type: File\n\n- name: audit-log\n hostPath:\n path: /var/log/kubernetes/audit/\n type: DirectoryOrCreate", "completion": "This text code Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "language": "text", "type": "code-example_explanation", "context": "Auditing | The webhook audit backend sends audit events to a remote web API, which is assumed to be a form of the Kubernetes API, including means of authentication. You can configure a webhook audit backend usin", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/", "id": "k8s_00620_code_explain_19519"} {"prompt": "Complete this text code:\n\n# Apply the configuration in pod.json to a pod\n kubectl apply -f ./pod.json\n \n # Apply resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml\n kubectl apply -k dir/\n \n # Apply the JSON passed into stdin to a pod\n cat pod.json | kubectl apply -f -\n ", "completion": " # Apply the configuration from all files that end with '.json'\n kubectl apply -f '*.json'\n \n # Note: --prune is still in Alpha\n # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx\n kubectl apply --prune -f manifest.yaml -l app=nginx\n \n # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file\n kubectl apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "language": "text", "type": "code-example", "context": "kubectl apply | Select all resources in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/", "id": "k8s_00622_code_completion_19520"} {"prompt": "Explain this text code:\n\n# Apply the configuration in pod.json to a pod\n kubectl apply -f ./pod.json\n \n # Apply resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml\n kubectl apply -k dir/\n \n # Apply the JSON passed into stdin to a pod\n cat pod.json | kubectl apply -f -\n \n # Apply the configuration from all files that end with '.json'\n kubectl apply -f '*.json'\n \n # Note: --prune is still in Alpha\n # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx\n kubectl apply --prune -f manifest.yaml -l app=nginx\n \n # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file\n kubectl apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "completion": "This text code kubectl apply | Select all resources in the namespace of the specified resource types.", "language": "text", "type": "code-example_explanation", "context": "kubectl apply | Select all resources in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/", "id": "k8s_00622_code_explain_19521"} {"prompt": "Complete this text code:\n\n# Apply the configuration in pod.json to a pod\n kubectl apply -f ./pod.json\n \n # Apply resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml\n kubectl apply -k dir/\n \n # Apply the JSON passed into stdin to a pod\n cat pod.json | kubectl apply -f -\n ", "completion": " # Apply the configuration from all files that end with '.json'\n kubectl apply -f '*.json'\n \n # Note: --prune is still in Alpha\n # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx\n kubectl apply --prune -f manifest.yaml -l app=nginx\n \n # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file\n kubectl apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "language": "text", "type": "code-example", "context": "kubectl apply | Select all resources in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/", "id": "k8s_00622_code_completion_19522"} {"prompt": "Explain this text code:\n\n# Apply the configuration in pod.json to a pod\n kubectl apply -f ./pod.json\n \n # Apply resources from a directory containing kustomization.yaml - e.g. 
dir/kustomization.yaml\n kubectl apply -k dir/\n \n # Apply the JSON passed into stdin to a pod\n cat pod.json | kubectl apply -f -\n \n # Apply the configuration from all files that end with '.json'\n kubectl apply -f '*.json'\n \n # Note: --prune is still in Alpha\n # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx\n kubectl apply --prune -f manifest.yaml -l app=nginx\n \n # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file\n kubectl apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "completion": "This text code kubectl apply | Select all resources in the namespace of the specified resource types.", "language": "text", "type": "code-example_explanation", "context": "kubectl apply | Select all resources in the namespace of the specified resource types.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/", "id": "k8s_00622_code_explain_19523"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19524"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 
1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19525"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3", "completion": " - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19526"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "completion": "This yaml code Resource Bin Packing | Referencing the 
KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19527"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 0\n - utilization: 100\n score: 10", "language": "text", "type": "code-example", "context": "Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19528"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10", "completion": "This text code Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. 
To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19529"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 10\n - utilization: 100\n score: 0", "language": "text", "type": "code-example", "context": "Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19530"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 10\n - utilization: 100\n score: 0", "completion": "This text code Resource Bin Packing | resources is an optional parameter which defaults to:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19531"} {"prompt": "Complete this text code:\n\nresources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1", "language": "text", "type": "code-example", "context": "Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19532"} {"prompt": "Explain this text code:\n\nresources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1", "completion": "This text code Resource Bin Packing | It can be used to add extended resources as follows:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19533"} {"prompt": "Complete this 
text code:\n\nresources:\n - name: intel.com/foo\n weight: 5", "completion": " - name: cpu\n weight: 3\n - name: memory\n weight: 1", "language": "text", "type": "code-example", "context": "Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19534"} {"prompt": "Explain this text code:\n\nresources:\n - name: intel.com/foo\n weight: 5\n - name: cpu\n weight: 3\n - name: memory\n weight: 1", "completion": "This text code Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19535"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8", "completion": "\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "language": "text", "type": "code-example", "context": "Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19536"} {"prompt": "Explain this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8\n\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "completion": "This text code Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19537"} {"prompt": "Complete this text 
code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - ((4-3)*100/4)\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 7 # floor(75/10)\n\nmemory = resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available", "completion": " = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "language": "text", "type": "code-example", "context": "Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19538"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - ((4-3)*100/4)\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 7 # floor(75/10)\n\nmemory = resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available\n = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "completion": "This text code Resource Bin Packing | Node 2 spec:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19539"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 8\n memory: 1GB\n cpu: 8", "completion": "Used:\n intel.com/foo: 2\n memory: 512MB\n cpu: 6", "language": "text", "type": "code-example", "context": 
"Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19540"} {"prompt": "Explain this text code:\n\nAvailable:\n intel.com/foo: 8\n memory: 1GB\n cpu: 8\nUsed:\n intel.com/foo: 2\n memory: 512MB\n cpu: 6", "completion": "This text code Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19541"} {"prompt": "Complete this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8)\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75", "completion": " = rawScoringFunction(75)\n = 7\n\ncpu = resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "language": "text", "type": "code-example", "context": "Resource Bin Packing |", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19542"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8)\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75\n = rawScoringFunction(75)\n = 7\n\ncpu = resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "completion": "This text code Resource Bin Packing |", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing |", "source": 
"https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19543"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19544"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19545"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3", "completion": " - name: intel.com/bar\n 
weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19546"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "completion": "This yaml code Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19547"} {"prompt": "Complete this yaml code:\n\nshape:\n - utilization: 0", "completion": " score: 0\n - utilization: 100\n score: 10", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. 
To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19548"} {"prompt": "Explain this yaml code:\n\nshape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10", "completion": "This yaml code Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follo", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19549"} {"prompt": "Complete this yaml code:\n\nshape:\n - utilization: 0", "completion": " score: 10\n - utilization: 100\n score: 0", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19550"} {"prompt": "Explain this yaml code:\n\nshape:\n - utilization: 0\n score: 10\n - utilization: 100\n score: 0", "completion": "This yaml code Resource Bin Packing | resources is an optional parameter which defaults to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19551"} {"prompt": "Complete this yaml code:\n\nresources:\n - name: cpu", "completion": " weight: 1\n 
- name: memory\n weight: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19552"} {"prompt": "Explain this yaml code:\n\nresources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1", "completion": "This yaml code Resource Bin Packing | It can be used to add extended resources as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19553"} {"prompt": "Complete this yaml code:\n\nresources:\n - name: intel.com/foo\n weight: 5", "completion": " - name: cpu\n weight: 3\n - name: memory\n weight: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19554"} {"prompt": "Explain this yaml code:\n\nresources:\n - name: intel.com/foo\n weight: 5\n - name: cpu\n weight: 3\n - name: memory\n weight: 1", "completion": "This yaml code Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. 
Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19555"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8", "completion": "\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "language": "text", "type": "code-example", "context": "Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19556"} {"prompt": "Explain this text code:\n\nAvailable:\n intel.com/foo: 4\n memory: 1 GB\n cpu: 8\n\nUsed:\n intel.com/foo: 1\n memory: 256MB\n cpu: 1", "completion": "This text code Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19557"} {"prompt": "Complete this text code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - ((4-3)*100/4))\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 7 # floor(75/10)\n\nmemory = resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available", "completion": " = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "language": "text", "type": "code-example", "context": "Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19558"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+1),4)\n = (100 - 
((4-3)*100/4))\n = (100 - 25)\n = 75 # requested + used = 75% * available\n = rawScoringFunction(75)\n = 7 # floor(75/10)\n\nmemory = resourceScoringFunction((256+256),1024)\n = (100 -((1024-512)*100/1024))\n = 50 # requested + used = 50% * available\n = rawScoringFunction(50)\n = 5 # floor(50/10)\n\ncpu = resourceScoringFunction((2+1),8)\n = (100 -((8-3)*100/8))\n = 37.5 # requested + used = 37.5% * available\n = rawScoringFunction(37.5)\n = 3 # floor(37.5/10)\n\nNodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)\n = 5", "completion": "This text code Resource Bin Packing | Node 2 spec:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | Node 2 spec:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19559"} {"prompt": "Complete this text code:\n\nAvailable:\n intel.com/foo: 8\n memory: 1GB\n cpu: 8", "completion": "Used:\n intel.com/foo: 2\n memory: 512MB\n cpu: 6", "language": "text", "type": "code-example", "context": "Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19560"} {"prompt": "Explain this text code:\n\nAvailable:\n intel.com/foo: 8\n memory: 1GB\n cpu: 8\nUsed:\n intel.com/foo: 2\n memory: 512MB\n cpu: 6", "completion": "This text code Resource Bin Packing | Node score:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | Node score:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19561"} {"prompt": "Complete this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8))\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75", "completion": " = rawScoringFunction(75)\n = 7\n\ncpu = 
resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "language": "text", "type": "code-example", "context": "Resource Bin Packing |", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19562"} {"prompt": "Explain this text code:\n\nintel.com/foo = resourceScoringFunction((2+2),8)\n = (100 - ((8-4)*100/8))\n = (100 - 50)\n = 50\n = rawScoringFunction(50)\n = 5\n\nmemory = resourceScoringFunction((256+512),1024)\n = (100 -((1024-768)*100/1024))\n = 75\n = rawScoringFunction(75)\n = 7\n\ncpu = resourceScoringFunction((2+6),8)\n = (100 -((8-8)*100/8))\n = 100\n = rawScoringFunction(100)\n = 10\n\nNodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)\n = 7", "completion": "This text code Resource Bin Packing |", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing |", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19563"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19564"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - 
args:\n scoringStrategy:\n resources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n type: MostAllocated\n name: NodeResourcesFit", "completion": "This yaml code Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | To learn more about other parameters and their default configuration, see the API documentation for NodeResourcesFitArgs.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19565"} {"prompt": "Complete this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3", "completion": " - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "language": "yaml", "type": "kubernetes-manifest", "context": "Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19566"} {"prompt": "Explain this yaml code:\n\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n- pluginConfig:\n - args:\n scoringStrategy:\n resources:\n - name: intel.com/foo\n weight: 3\n - name: intel.com/bar\n weight: 3\n requestedToCapacityRatio:\n shape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10\n type: RequestedToCapacityRatio\n name: NodeResourcesFit", "completion": "This yaml code 
Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Resource Bin Packing | Referencing the KubeSchedulerConfiguration file with the kube-scheduler flag --config=/path/to/config/file will pass the configuration to the scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19567"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 0\n - utilization: 100\n score: 10", "language": "text", "type": "code-example", "context": "Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follows", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19568"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 0\n - utilization: 100\n score: 10", "completion": "This text code Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follows", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. 
To enable least requested the score value must be reversed as follows", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19569"} {"prompt": "Complete this text code:\n\nshape:\n - utilization: 0", "completion": " score: 10\n - utilization: 100\n score: 0", "language": "text", "type": "code-example", "context": "Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19570"} {"prompt": "Explain this text code:\n\nshape:\n - utilization: 0\n score: 10\n - utilization: 100\n score: 0", "completion": "This text code Resource Bin Packing | resources is an optional parameter which defaults to:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | resources is an optional parameter which defaults to:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19571"} {"prompt": "Complete this text code:\n\nresources:\n - name: cpu", "completion": " weight: 1\n - name: memory\n weight: 1", "language": "text", "type": "code-example", "context": "Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19572"} {"prompt": "Explain this text code:\n\nresources:\n - name: cpu\n weight: 1\n - name: memory\n weight: 1", "completion": "This text code Resource Bin Packing | It can be used to add extended resources as follows:", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | It can be used to add extended resources as follows:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19573"} {"prompt": "Complete this 
text code:\n\nresources:\n - name: intel.com/foo\n weight: 5", "completion": " - name: cpu\n weight: 3\n - name: memory\n weight: 1", "language": "text", "type": "code-example", "context": "Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_completion_19574"} {"prompt": "Explain this text code:\n\nresources:\n - name: intel.com/foo\n weight: 5\n - name: cpu\n weight: 3\n - name: memory\n weight: 1", "completion": "This text code Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "language": "text", "type": "code-example_explanation", "context": "Resource Bin Packing | The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/resource-bin-packing/", "id": "k8s_00623_code_explain_19575"} {"prompt": "Explain this text code:\n\ncould not restore state from checkpoint: configured policy \"static\" differs from state checkpoint policy \"none\", please drain this node and delete the CPU manager checkpoint file \"/var/lib/kubelet/cpu_manager_state\" before restarting Kubelet", "completion": "This text code Control CPU Management Policies on the Node | This policy has no extra configuration items.", "language": "text", "type": "code-example_explanation", "context": "Control CPU Management Policies on the Node | This policy has no extra configuration items.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/", "id": "k8s_00625_code_explain_19576"}
{"prompt": "Explain this bash code:\n\nkubectl config set-context [NAME | --current] [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]", "completion": "This bash code kubectl config set-context | cluster for the context entry in kubeconfig", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl config set-context | cluster for the context entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-context/", "id": "k8s_00627_code_explain_19578"} {"prompt": "Explain this text code:\n\n# Set the user field on the gce context entry without touching other values\n kubectl config set-context gce --user=cluster-admin", "completion": "This text code kubectl config set-context | cluster for the context entry in kubeconfig", "language": "text", "type": "code-example_explanation", "context": "kubectl config set-context | cluster for the context entry in kubeconfig", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-context/", "id": "k8s_00627_code_explain_19579"}
{"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 11d", "completion": "kube-node-lease Active 11d\nkube-public Active 11d\nkube-system Active 11d", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19582"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 11d\nkube-node-lease Active 11d\nkube-public Active 11d\nkube-system Active 11d", "completion": "This text code Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19583"} {"prompt": "Complete this text code:\n\nName: default\nLabels: <none>\nAnnotations: <none>\nStatus: Active\n", "completion": "No resource quota.\n\nResource Limits\n Type Resource Min Max Default\n ---- -------- --- --- ---\n 
Container cpu - - 100m", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19584"} {"prompt": "Explain this text code:\n\nName: default\nLabels: <none>\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nResource Limits\n Type Resource Min Max Default\n ---- -------- --- --- ---\n Container cpu - - 100m", "completion": "This text code Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19585"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace", "completion": "metadata:\n name: <insert-namespace-name-here>", "language": "yaml", "type": "kubernetes-manifest", "context": "Share a Cluster with Namespaces | Then run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19586"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: <insert-namespace-name-here>", "completion": "This yaml code Share a Cluster with Namespaces | Then run:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Share a Cluster with Namespaces | Then run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19587"} {"prompt": "Explain this bash code:\n\nkubectl create namespace <insert-namespace-name-here>", "completion": "This bash code Share a Cluster with Namespaces | 
The name of your namespace must be a valid DNS label.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | The name of your namespace must be a valid DNS label.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19588"} {"prompt": "Explain this bash code:\n\nkubectl delete namespaces <insert-some-namespace-name>", "completion": "This bash code Share a Cluster with Namespaces | This delete is asynchronous, so for a time you will see the namespace in the Terminating state.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | This delete is asynchronous, so for a time you will see the namespace in the Terminating state.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19589"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-dev.json", "completion": "This bash code Share a Cluster with Namespaces | And then let's create the production namespace using kubectl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | And then let's create the production namespace using kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19590"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-prod.json", "completion": "This bash code Share a Cluster with Namespaces | To be sure things are right, list all of the namespaces in our cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | To be sure things are right, list all of the namespaces in our cluster.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19591"} {"prompt": "Complete this text 
code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>", "completion": "development Active 29s name=development\nproduction Active 23s name=production", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. To demonstrate this, let", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19592"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>\ndevelopment Active 29s name=development\nproduction Active 23s name=production", "completion": "This text code Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. To demonstrate this, let", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. 
To demonstrate this, let", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19593"} {"prompt": "Explain this bash code:\n\nkubectl create deployment snowflake \\\n --image=registry.k8s.io/serve_hostname \\\n -n=development --replicas=2", "completion": "This bash code Share a Cluster with Namespaces | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19594"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nsnowflake 2/2 2 2 2m", "completion": "This text code Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19595"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nsnowflake-3968820950-9dgr8 1/1 Running 0 2m\nsnowflake-3968820950-vgc4n 1/1 Running 0 2m", "completion": "This text code Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster 
with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19596"} {"prompt": "Explain this bash code:\n\nkubectl get deployment -n=production\nkubectl get pods -n=production", "completion": "This bash code Share a Cluster with Namespaces | Production likes to run cattle, so let's create some cattle pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | Production likes to run cattle, so let's create some cattle pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19597"} {"prompt": "Complete this bash code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production\nkubectl scale deployment cattle --replicas=5 -n=production", "completion": "\nkubectl get deployment -n=production", "language": "bash", "type": "kubectl-command", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19598"} {"prompt": "Explain this bash code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production\nkubectl scale deployment cattle --replicas=5 -n=production\n\nkubectl get deployment -n=production", "completion": "This bash code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden 
from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19599"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ncattle 5/5 5 5 10s", "completion": "This text code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19600"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s", "completion": "cattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19601"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s\ncattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "completion": "This text code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | At this point, it should 
be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19602"} {"prompt": "Complete this console code:\n\nNAME STATUS AGE\ndefault Active 11d", "completion": "kube-node-lease Active 11d\nkube-public Active 11d\nkube-system Active 11d", "language": "console", "type": "code-example", "context": "Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19603"} {"prompt": "Explain this console code:\n\nNAME STATUS AGE\ndefault Active 11d\nkube-node-lease Active 11d\nkube-public Active 11d\nkube-system Active 11d", "completion": "This console code Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "language": "console", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19604"} {"prompt": "Complete this console code:\n\nName: default\nLabels: <none>\nAnnotations: <none>\nStatus: Active\n", "completion": "No resource quota.\n\nResource Limits\n Type Resource Min Max Default\n ---- -------- --- --- ---\n Container cpu - - 100m", "language": "console", "type": "code-example", "context": "Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19605"} {"prompt": "Explain this console code:\n\nName: default\nLabels: <none>\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nResource Limits\n Type Resource Min Max Default\n ---- -------- --- --- ---\n Container cpu - - 100m", 
"completion": "This console code Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "language": "console", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19606"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace", "completion": "metadata:\n name: <insert-namespace-name-here>", "language": "yaml", "type": "kubernetes-manifest", "context": "Share a Cluster with Namespaces | Then run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19607"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: <insert-namespace-name-here>", "completion": "This yaml code Share a Cluster with Namespaces | Then run:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Share a Cluster with Namespaces | Then run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19608"} {"prompt": "Explain this shell code:\n\nkubectl create namespace <insert-namespace-name-here>", "completion": "This shell code Share a Cluster with Namespaces | The name of your namespace must be a valid DNS label.", "language": "shell", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | The name of your namespace must be a valid DNS label.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19609"} {"prompt": "Explain this shell code:\n\nkubectl delete namespaces <insert-some-namespace-name>", "completion": "This shell code Share a Cluster with Namespaces | This delete is asynchronous, so for a time you will see the 
namespace in the Terminating state.", "language": "shell", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | This delete is asynchronous, so for a time you will see the namespace in the Terminating state.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19610"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-dev.json", "completion": "This shell code Share a Cluster with Namespaces | And then let's create the production namespace using kubectl:", "language": "shell", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | And then let's create the production namespace using kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19611"} {"prompt": "Explain this shell code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-prod.json", "completion": "This shell code Share a Cluster with Namespaces | To be sure things are right, list all of the namespaces in our cluster.", "language": "shell", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | To be sure things are right, list all of the namespaces in our cluster.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19612"} {"prompt": "Complete this console code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>", "completion": "development Active 29s name=development\nproduction Active 23s name=production", "language": "console", "type": "code-example", "context": "Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. 
To demonstrate this, let", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19613"} {"prompt": "Explain this console code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>\ndevelopment Active 29s name=development\nproduction Active 23s name=production", "completion": "This console code Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. To demonstrate this, let", "language": "console", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. To demonstrate this, let", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19614"} {"prompt": "Explain this shell code:\n\nkubectl create deployment snowflake \\\n --image=registry.k8s.io/serve_hostname \\\n -n=development --replicas=2", "completion": "This shell code Share a Cluster with Namespaces | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "language": "shell", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19615"} {"prompt": "Explain this console code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nsnowflake 2/2 2 2 2m", "completion": "This console code Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry 
about affecting content in the production namespace.", "language": "console", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19616"} {"prompt": "Explain this console code:\n\nNAME READY STATUS RESTARTS AGE\nsnowflake-3968820950-9dgr8 1/1 Running 0 2m\nsnowflake-3968820950-vgc4n 1/1 Running 0 2m", "completion": "This console code Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "console", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19617"} {"prompt": "Explain this shell code:\n\nkubectl get deployment -n=production\nkubectl get pods -n=production", "completion": "This shell code Share a Cluster with Namespaces | Production likes to run cattle, so let's create some cattle pods.", "language": "shell", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | Production likes to run cattle, so let's create some cattle pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19618"} {"prompt": "Complete this shell code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production\nkubectl scale deployment cattle --replicas=5 -n=production", "completion": "\nkubectl get deployment -n=production", "language": "shell", "type": "code-example", "context": 
"Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19619"} {"prompt": "Explain this shell code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production\nkubectl scale deployment cattle --replicas=5 -n=production\n\nkubectl get deployment -n=production", "completion": "This shell code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "shell", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19620"} {"prompt": "Explain this console code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ncattle 5/5 5 5 10s", "completion": "This console code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "console", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19621"} {"prompt": "Complete this console code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s", "completion": "cattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "language": "console", "type": 
"code-example", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19622"} {"prompt": "Explain this console code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s\ncattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "completion": "This console code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "console", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19623"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 11d", "completion": "kube-node-lease Active 11d\nkube-public Active 11d\nkube-system Active 11d", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19624"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 11d\nkube-node-lease Active 11d\nkube-public Active 11d\nkube-system Active 11d", "completion": "This text code Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | Kubernetes starts with four initial namespaces:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19625"} {"prompt": "Complete this text code:\n\nName: default\nLabels: <none>\nAnnotations: <none>\nStatus: Active\n", "completion": "No resource quota.\n\nResource Limits\n Type Resource Min Max Default\n ---- -------- --- --- ---\n Container cpu - - 100m", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19626"} {"prompt": "Explain this text code:\n\nName: default\nLabels: <none>\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nResource Limits\n Type Resource Min Max Default\n ---- -------- --- --- ---\n Container cpu - - 100m", "completion": "This text code Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | Note that these details show both resource quota (if present) as well as resource limit ranges.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19627"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Namespace", "completion": "metadata:\n name: <insert-namespace-name-here>", "language": "yaml", "type": "kubernetes-manifest", "context": "Share a Cluster with Namespaces | Then run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19628"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: <insert-namespace-name-here>", "completion": "This yaml code Share a Cluster with Namespaces | Then run:", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "Share a Cluster with Namespaces | Then run:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19629"} {"prompt": "Explain this bash code:\n\nkubectl create namespace <insert-namespace-name-here>", "completion": "This bash code Share a Cluster with Namespaces | The name of your namespace must be a valid DNS label.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | The name of your namespace must be a valid DNS label.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19630"} {"prompt": "Explain this bash code:\n\nkubectl delete namespaces <insert-some-namespace-name>", "completion": "This bash code Share a Cluster with Namespaces | This delete is asynchronous, so for a time you will see the namespace in the Terminating state.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | This delete is asynchronous, so for a time you will see the namespace in the Terminating state.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19631"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-dev.json", "completion": "This bash code Share a Cluster with Namespaces | And then let's create the production namespace using kubectl:", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | And then let's create the production namespace using kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19632"} {"prompt": "Explain this bash code:\n\nkubectl create -f https://k8s.io/examples/admin/namespace-prod.json", "completion": "This bash code Share a Cluster with Namespaces | To be sure things are right, list all of the namespaces in our 
cluster.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | To be sure things are right, list all of the namespaces in our cluster.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19633"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>", "completion": "development Active 29s name=development\nproduction Active 23s name=production", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. To demonstrate this, let", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19634"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE LABELS\ndefault Active 32m <none>\ndevelopment Active 29s name=development\nproduction Active 23s name=production", "completion": "This text code Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. To demonstrate this, let", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster. Users interacting with one namespace do not see the content in another namespace. 
To demonstrate this, let", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19635"} {"prompt": "Explain this bash code:\n\nkubectl create deployment snowflake \\\n --image=registry.k8s.io/serve_hostname \\\n -n=development --replicas=2", "completion": "This bash code Share a Cluster with Namespaces | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | We have created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that serves the hostname.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19636"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nsnowflake 2/2 2 2 2m", "completion": "This text code Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19637"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nsnowflake-3968820950-9dgr8 1/1 Running 0 2m\nsnowflake-3968820950-vgc4n 1/1 Running 0 2m", "completion": "This text code Share a Cluster with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster 
with Namespaces | And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19638"} {"prompt": "Explain this bash code:\n\nkubectl get deployment -n=production\nkubectl get pods -n=production", "completion": "This bash code Share a Cluster with Namespaces | Production likes to run cattle, so let's create some cattle pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | Production likes to run cattle, so let's create some cattle pods.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19639"} {"prompt": "Complete this bash code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production\nkubectl scale deployment cattle --replicas=5 -n=production", "completion": "\nkubectl get deployment -n=production", "language": "bash", "type": "kubectl-command", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19640"} {"prompt": "Explain this bash code:\n\nkubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production\nkubectl scale deployment cattle --replicas=5 -n=production\n\nkubectl get deployment -n=production", "completion": "This bash code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden 
from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19641"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ncattle 5/5 5 5 10s", "completion": "This text code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19642"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s", "completion": "cattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "language": "text", "type": "code-example", "context": "Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_completion_19643"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncattle-2263376956-41xy6 1/1 Running 0 34s\ncattle-2263376956-kw466 1/1 Running 0 34s\ncattle-2263376956-n4v97 1/1 Running 0 34s\ncattle-2263376956-p5p3i 1/1 Running 0 34s\ncattle-2263376956-sxpth 1/1 Running 0 34s", "completion": "This text code Share a Cluster with Namespaces | At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.", "language": "text", "type": "code-example_explanation", "context": "Share a Cluster with Namespaces | At this point, it should 
be clear that the resources users create in one namespace are hidden from the other namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/namespaces/", "id": "k8s_00628_code_explain_19644"} {"prompt": "Explain this text code:\n\nKUBECONFIG=/etc/kubernetes/admin.conf KUBE_EDITOR=nano kubectl edit <parameters>", "completion": "This text code Reconfiguring a kubeadm cluster | During cluster creation and upgrade, kubeadm writes its ClusterConfiguration in a ConfigMap called kubeadm-config in the kube-system namespace.", "language": "text", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | During cluster creation and upgrade, kubeadm writes its ClusterConfiguration in a ConfigMap called kubeadm-config in the kube-system namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19645"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs <component-name> --config <config-file>", "completion": "This text code Reconfiguring a kubeadm cluster | To write new manifest files in /etc/kubernetes/manifests you can use:", "language": "text", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | To write new manifest files in /etc/kubernetes/manifests you can use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19646"} {"prompt": "Complete this text code:\n\n# For Kubernetes control plane components\nkubeadm init phase control-plane <component-name> --config <config-file>", "completion": "# For local etcd\nkubeadm init phase etcd local --config <config-file>", "language": "text", "type": "code-example", "context": "Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. 
The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_completion_19647"} {"prompt": "Explain this text code:\n\n# For Kubernetes control plane components\nkubeadm init phase control-plane <component-name> --config <config-file>\n# For local etcd\nkubeadm init phase etcd local --config <config-file>", "completion": "This text code Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "language": "text", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19648"} {"prompt": "Explain this bash code:\n\nkubectl delete po -n kube-system -l k8s-app=kube-proxy", "completion": "This bash code Reconfiguring a kubeadm cluster | New Pods that use the updated ConfigMap will be created.", "language": "bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | New Pods that use the updated ConfigMap will be created.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19649"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment -n kube-system coredns\nkubectl edit service -n kube-system kube-dns", "completion": "This bash code Reconfiguring a kubeadm cluster | Once the CoreDNS changes are applied you can restart the CoreDNS deployment:", "language": 
"bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | Once the CoreDNS changes are applied you can restart the CoreDNS deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19650"} {"prompt": "Explain this bash code:\n\nkubectl rollout restart deployment -n kube-system coredns", "completion": "This bash code Reconfiguring a kubeadm cluster | During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created (reconfiguration).", "language": "bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created (reconfiguration).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19651"} {"prompt": "Explain this bash code:\n\nkubectl patch no <node-name> --patch-file <patch-file>", "completion": "This bash code Reconfiguring a kubeadm cluster | The main source of control plane configuration is the ClusterConfiguration object stored in the cluster. To extend the static Pod manifests configuration, patches can be used.", "language": "bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | The main source of control plane configuration is the ClusterConfiguration object stored in the cluster. 
To extend the static Pod manifests configuration, patches can be used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19652"} {"prompt": "Explain this text code:\n\nKUBECONFIG=/etc/kubernetes/admin.conf KUBE_EDITOR=nano kubectl edit <parameters>", "completion": "This text code Reconfiguring a kubeadm cluster | During cluster creation and upgrade, kubeadm writes its ClusterConfiguration in a ConfigMap called kubeadm-config in the kube-system namespace.", "language": "text", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | During cluster creation and upgrade, kubeadm writes its ClusterConfiguration in a ConfigMap called kubeadm-config in the kube-system namespace.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19653"} {"prompt": "Explain this shell code:\n\nkubeadm init phase certs <component-name> --config <config-file>", "completion": "This shell code Reconfiguring a kubeadm cluster | To write new manifest files in /etc/kubernetes/manifests you can use:", "language": "shell", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | To write new manifest files in /etc/kubernetes/manifests you can use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19654"} {"prompt": "Complete this shell code:\n\n# For Kubernetes control plane components\nkubeadm init phase control-plane <component-name> --config <config-file>", "completion": "# For local etcd\nkubeadm init phase etcd local --config <config-file>", "language": "shell", "type": "code-example", "context": "Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. 
The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_completion_19655"} {"prompt": "Explain this shell code:\n\n# For Kubernetes control plane components\nkubeadm init phase control-plane <component-name> --config <config-file>\n# For local etcd\nkubeadm init phase etcd local --config <config-file>", "completion": "This shell code Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "language": "shell", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19656"} {"prompt": "Explain this shell code:\n\nkubectl delete po -n kube-system -l k8s-app=kube-proxy", "completion": "This shell code Reconfiguring a kubeadm cluster | New Pods that use the updated ConfigMap will be created.", "language": "shell", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | New Pods that use the updated ConfigMap will be created.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19657"} {"prompt": "Explain this shell code:\n\nkubectl edit deployment -n kube-system coredns\nkubectl edit service -n kube-system kube-dns", "completion": "This shell code Reconfiguring a kubeadm cluster | Once the CoreDNS changes are applied you can restart the CoreDNS deployment:", 
"language": "shell", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | Once the CoreDNS changes are applied you can restart the CoreDNS deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19658"} {"prompt": "Explain this shell code:\n\nkubectl rollout restart deployment -n kube-system coredns", "completion": "This shell code Reconfiguring a kubeadm cluster | During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created (reconfiguration).", "language": "shell", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created (reconfiguration).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19659"} {"prompt": "Explain this shell code:\n\nkubectl patch no <node-name> --patch-file <patch-file>", "completion": "This shell code Reconfiguring a kubeadm cluster | The main source of control plane configuration is the ClusterConfiguration object stored in the cluster. To extend the static Pod manifests configuration, patches can be used.", "language": "shell", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | The main source of control plane configuration is the ClusterConfiguration object stored in the cluster. 
To extend the static Pod manifests configuration, patches can be used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19660"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs <component-name> --config <config-file>", "completion": "This text code Reconfiguring a kubeadm cluster | To write new manifest files in /etc/kubernetes/manifests you can use:", "language": "text", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | To write new manifest files in /etc/kubernetes/manifests you can use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19661"} {"prompt": "Complete this text code:\n\n# For Kubernetes control plane components\nkubeadm init phase control-plane <component-name> --config <config-file>", "completion": "# For local etcd\nkubeadm init phase etcd local --config <config-file>", "language": "text", "type": "code-example", "context": "Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_completion_19662"} {"prompt": "Explain this text code:\n\n# For Kubernetes control plane components\nkubeadm init phase control-plane <component-name> --config <config-file>\n# For local etcd\nkubeadm init phase etcd local --config <config-file>", "completion": "This text code Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. 
The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "language": "text", "type": "code-example_explanation", "context": "Reconfiguring a kubeadm cluster | The <config-file> contents must match the updated ClusterConfiguration. The <component-name> value must be a name of a Kubernetes control plane component (apiserver, controller-manager or scheduler).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19663"} {"prompt": "Explain this bash code:\n\nkubectl delete po -n kube-system -l k8s-app=kube-proxy", "completion": "This bash code Reconfiguring a kubeadm cluster | New Pods that use the updated ConfigMap will be created.", "language": "bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | New Pods that use the updated ConfigMap will be created.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19664"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment -n kube-system coredns\nkubectl edit service -n kube-system kube-dns", "completion": "This bash code Reconfiguring a kubeadm cluster | Once the CoreDNS changes are applied you can restart the CoreDNS deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | Once the CoreDNS changes are applied you can restart the CoreDNS deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19665"} {"prompt": "Explain this bash code:\n\nkubectl rollout restart deployment -n kube-system coredns", "completion": "This bash code Reconfiguring a kubeadm cluster | During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created 
(reconfiguration).", "language": "bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created (reconfiguration).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19666"} {"prompt": "Explain this bash code:\n\nkubectl patch no <node-name> --patch-file <patch-file>", "completion": "This bash code Reconfiguring a kubeadm cluster | The main source of control plane configuration is the ClusterConfiguration object stored in the cluster. To extend the static Pod manifests configuration, patches can be used.", "language": "bash", "type": "kubectl-command_explanation", "context": "Reconfiguring a kubeadm cluster | The main source of control plane configuration is the ClusterConfiguration object stored in the cluster. To extend the static Pod manifests configuration, patches can be used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/", "id": "k8s_00629_code_explain_19667"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-limit-range\nspec:", "completion": " limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19668"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-limit-range\nspec:\n limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "completion": "This yaml code Configure Default Memory 
Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19669"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | Now if you create a Pod in the default-mem-example namespace, and any container within that Pod does not specify its own values for memory request and memory limit, then the control plane applies defa", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Now if you create a Pod in the default-mem-example namespace, and any container within that Pod does not specify its own values for memory request and memory limit, then the control plane applies defa", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19670"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo", "completion": "spec:\n containers:\n - name: default-mem-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19671"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo\nspec:\n containers:\n - name: 
default-mem-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19672"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19673"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | The output shows that the Pod's container has a memory request of 256 MiB and a memory limit of 512 MiB. These are the default values specified by the LimitRange.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the Pod's container has a memory request of 256 MiB and a memory limit of 512 MiB. 
These are the default values specified by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19674"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-mem-demo-ctr", "completion": " resources:\n limits:\n memory: 512Mi\n requests:\n memory: 256Mi", "language": "text", "type": "code-example", "context": "Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19675"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-mem-demo-ctr\n resources:\n limits:\n memory: 512Mi\n requests:\n memory: 256Mi", "completion": "This text code Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19676"} {"prompt": "Explain this bash code:\n\nkubectl delete pod default-mem-demo --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory limit, but not a request:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19677"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:", "completion": " containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19678"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:\n containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19679"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-2.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19680"} {"prompt": "Explain this bash 
code:\n\nkubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to match its memory limit. Notice that the container was not assigned the default memory request value of 256Mi.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to match its memory limit. Notice that the container was not assigned the default memory request value of 256Mi.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19681"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 1Gi\n requests:\n memory: 1Gi", "language": "text", "type": "code-example", "context": "Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19682"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 1Gi\n requests:\n memory: 1Gi", "completion": "This text code Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory request, but not a limit:", "language": "text", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19683"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-3\nspec:", "completion": " containers:\n - name: default-mem-demo-3-ctr\n image: nginx\n resources:\n requests:\n memory: \"128Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19684"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-3\nspec:\n containers:\n - name: default-mem-demo-3-ctr\n image: nginx\n resources:\n requests:\n memory: \"128Mi\"", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19685"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-3.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | View the Pod's specification:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View the Pod's specification:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19686"} {"prompt": "Explain this bash code:\n\nkubectl 
get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to the value specified in the container's manifest. The container is limited to use no more than 512MiB of memory, which matches the default", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to the value specified in the container's manifest. The container is limited to use no more than 512MiB of memory, which matches the default", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19687"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 512Mi\n requests:\n memory: 128Mi", "language": "text", "type": "code-example", "context": "Configure Default Memory Requests and Limits for a Namespace | If your namespace has a memory resource quota configured, it is helpful to have a default value in place for memory limit. Here are three of the restrictions that a resource quota imposes on a namespa", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19688"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 512Mi\n requests:\n memory: 128Mi", "completion": "This text code Configure Default Memory Requests and Limits for a Namespace | If your namespace has a memory resource quota configured, it is helpful to have a default value in place for memory limit. 
Here are three of the restrictions that a resource quota imposes on a namespa", "language": "text", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | If your namespace has a memory resource quota configured, it is helpful to have a default value in place for memory limit. Here are three of the restrictions that a resource quota imposes on a namespa", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19689"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-limit-range\nspec:", "completion": " limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19690"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-limit-range\nspec:\n limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19691"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example", "completion": "This 
shell code Configure Default Memory Requests and Limits for a Namespace | Now if you create a Pod in the default-mem-example namespace, and any container within that Pod does not specify its own values for memory request and memory limit, then the control plane applies defa", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Now if you create a Pod in the default-mem-example namespace, and any container within that Pod does not specify its own values for memory request and memory limit, then the control plane applies defa", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19692"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo", "completion": "spec:\n containers:\n - name: default-mem-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19693"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo\nspec:\n containers:\n - name: default-mem-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19694"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example", "completion": "This shell code 
Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19695"} {"prompt": "Explain this shell code:\n\nkubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example", "completion": "This shell code Configure Default Memory Requests and Limits for a Namespace | The output shows that the Pod's container has a memory request of 256 MiB and a memory limit of 512 MiB. These are the default values specified by the LimitRange.", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the Pod's container has a memory request of 256 MiB and a memory limit of 512 MiB. 
These are the default values specified by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19696"} {"prompt": "Complete this shell code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-mem-demo-ctr", "completion": " resources:\n limits:\n memory: 512Mi\n requests:\n memory: 256Mi", "language": "shell", "type": "code-example", "context": "Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19697"} {"prompt": "Explain this shell code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-mem-demo-ctr\n resources:\n limits:\n memory: 512Mi\n requests:\n memory: 256Mi", "completion": "This shell code Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19698"} {"prompt": "Explain this shell code:\n\nkubectl delete pod default-mem-demo --namespace=default-mem-example", "completion": "This shell code Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory limit, but not a request:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19699"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:", "completion": " containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19700"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:\n containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19701"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-2.yaml --namespace=default-mem-example", "completion": "This shell code Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19702"} {"prompt": "Explain this shell 
code:\n\nkubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example", "completion": "This shell code Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to match its memory limit. Notice that the container was not assigned the default memory request value of 256Mi.", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to match its memory limit. Notice that the container was not assigned the default memory request value of 256Mi.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19703"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 1Gi\n requests:\n memory: 1Gi", "language": "text", "type": "code-example", "context": "Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19704"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 1Gi\n requests:\n memory: 1Gi", "completion": "This text code Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory request, but not a limit:", "language": "text", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory request, but not a limit:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19705"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-3\nspec:", "completion": " containers:\n - name: default-mem-demo-3-ctr\n image: nginx\n resources:\n requests:\n memory: \"128Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19706"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-3\nspec:\n containers:\n - name: default-mem-demo-3-ctr\n image: nginx\n resources:\n requests:\n memory: \"128Mi\"", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19707"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-3.yaml --namespace=default-mem-example", "completion": "This shell code Configure Default Memory Requests and Limits for a Namespace | View the Pod's specification:", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View the Pod's specification:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19708"} {"prompt": "Explain this shell code:\n\nkubectl 
get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example", "completion": "This shell code Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to the value specified in the container's manifest. The container is limited to use no more than 512MiB of memory, which matches the default", "language": "shell", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to the value specified in the container's manifest. The container is limited to use no more than 512MiB of memory, which matches the default", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19709"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " memory: 512Mi\n requests:\n memory: 128Mi", "language": "text", "type": "code-example", "context": "Configure Default Memory Requests and Limits for a Namespace | If your namespace has a memory resource quota configured, it is helpful to have a default value in place for memory limit. Here are three of the restrictions that a resource quota imposes on a namespa", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19710"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n memory: 512Mi\n requests:\n memory: 128Mi", "completion": "This text code Configure Default Memory Requests and Limits for a Namespace | If your namespace has a memory resource quota configured, it is helpful to have a default value in place for memory limit. 
Here are three of the restrictions that a resource quota imposes on a namespa", "language": "text", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | If your namespace has a memory resource quota configured, it is helpful to have a default value in place for memory limit. Here are three of the restrictions that a resource quota imposes on a namespa", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19711"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-defaults.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-limit-range\nspec:", "completion": " limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19712"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-defaults.yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-limit-range\nspec:\n limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19713"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: 
mem-limit-range\nspec:", "completion": " limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19714"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: LimitRange\nmetadata:\n name: mem-limit-range\nspec:\n limits:\n - default:\n memory: 512Mi\n defaultRequest:\n memory: 256Mi\n type: Container", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the LimitRange in the default-mem-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19715"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | Now if you create a Pod in the default-mem-example namespace, and any container within that Pod does not specify its own values for memory request and memory limit, then the control plane applies defa", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Now if you create a Pod in the default-mem-example namespace, and any container within that Pod does not specify its own values for memory request and memory limit, then the control plane applies defa", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19716"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-defaults-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " name: default-mem-demo\nspec:\n containers:\n - name: default-mem-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19717"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-defaults-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo\nspec:\n containers:\n - name: default-mem-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19718"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo", "completion": "spec:\n containers:\n - name: default-mem-demo-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19719"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo\nspec:\n containers:\n - name: default-mem-demo-ctr\n image: nginx", "completion": "This yaml code Configure Default Memory Requests and Limits for a 
Namespace | Create the Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19720"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19721"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | The output shows that the Pod's container has a memory request of 256 MiB and a memory limit of 512 MiB. These are the default values specified by the LimitRange.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the Pod's container has a memory request of 256 MiB and a memory limit of 512 MiB. 
These are the default values specified by the LimitRange.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19722"} {"prompt": "Complete this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-mem-demo-ctr", "completion": " resources:\n limits:\n memory: 512Mi\n requests:\n memory: 256Mi", "language": "text", "type": "code-example", "context": "Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19723"} {"prompt": "Explain this text code:\n\ncontainers:\n- image: nginx\n imagePullPolicy: Always\n name: default-mem-demo-ctr\n resources:\n limits:\n memory: 512Mi\n requests:\n memory: 256Mi", "completion": "This text code Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19724"} {"prompt": "Explain this bash code:\n\nkubectl delete pod default-mem-demo --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. The container specifies a memory limit, but not a request:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Here's a manifest for a Pod that has one container. 
The container specifies a memory limit, but not a request:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19725"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-defaults-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:", "completion": " containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19726"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-defaults-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:\n containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19727"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:", "completion": " containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19728"} {"prompt": "Explain this yaml 
code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-2\nspec:\n containers:\n - name: default-mem-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"1Gi\"", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19729"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-2.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19730"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to match its memory limit. Notice that the container was not assigned the default memory request value of 256Mi.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to match its memory limit. 
Notice that the container was not assigned the default memory request value of 256Mi.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19731"} {"prompt": "Complete this yaml code:\n\nadmin/resource/memory-defaults-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-3\nspec:", "completion": " containers:\n - name: default-mem-demo-3-ctr\n image: nginx\n resources:\n requests:\n memory: \"128Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_completion_19732"} {"prompt": "Explain this yaml code:\n\nadmin/resource/memory-defaults-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: default-mem-demo-3\nspec:\n containers:\n - name: default-mem-demo-3-ctr\n image: nginx\n resources:\n requests:\n memory: \"128Mi\"", "completion": "This yaml code Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19733"} 
{"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-3.yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | View the Pod's specification:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | View the Pod's specification:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19736"} {"prompt": "Explain this bash code:\n\nkubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example", "completion": "This bash code Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to the value specified in the container's manifest. The container is limited to use no more than 512MiB of memory, which matches the default", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Default Memory Requests and Limits for a Namespace | The output shows that the container's memory request is set to the value specified in the container's manifest. 
The container is limited to use no more than 512MiB of memory, which matches the default", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/", "id": "k8s_00630_code_explain_19737"} {"prompt": "Explain this text code:\n\n# Delete the minikube cluster\n kubectl config delete-cluster minikube", "completion": "This text code kubectl config delete-cluster | help for delete-cluster", "language": "text", "type": "code-example_explanation", "context": "kubectl config delete-cluster | help for delete-cluster", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-cluster/", "id": "k8s_00631_code_explain_19738"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19740"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19741"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit.yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | Verify that the Pod Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | Verify that the Pod Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19742"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19743"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --output=yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19744"} {"prompt": "Complete 
this text code:\n\n...\nresources:\n requests:", "completion": " memory: 100Mi\n limits:\n memory: 200Mi\n...", "language": "text", "type": "code-example", "context": "Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19745"} {"prompt": "Explain this text code:\n\n...\nresources:\n requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "completion": "This text code Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19746"} {"prompt": "Explain this bash code:\n\nkubectl top pod memory-demo --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. 
This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19747"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\nmemory-demo <something> 162856960", "completion": "This text code Assign Memory Resources to Containers and Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19748"} {"prompt": "Explain this bash code:\n\nkubectl delete pod memory-demo --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the C", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. 
If a Container allocates more memory than its limit, the C", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19749"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-2-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19750"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-2-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": 
"k8s_00632_code_explain_19751"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19752"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | At this point, the Container might be running or killed. Repeat the preceding command until the Container is killed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | At this point, the Container might be running or killed. 
Repeat the preceding command until the Container is killed:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19753"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmemory-demo-2 0/1 OOMKilled 1 24s", "completion": "This text code Assign Memory Resources to Containers and Pods | Get a more detailed view of the Container status:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Get a more detailed view of the Container status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19754"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --output=yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container was killed because it is out of memory (OOM):", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container was killed because it is out of memory (OOM):", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19755"} {"prompt": "Complete this text code:\n\nlastState:\n terminated:\n containerID: 65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f", "completion": " exitCode: 137\n finishedAt: 2017-06-20T20:52:19Z\n reason: OOMKilled\n startedAt: null", "language": "text", "type": "code-example", "context": "Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. 
Repeat this command several times to see that the Container is repeatedly killed and restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19756"} {"prompt": "Explain this text code:\n\nlastState:\n terminated:\n containerID: 65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f\n exitCode: 137\n finishedAt: 2017-06-20T20:52:19Z\n reason: OOMKilled\n startedAt: null", "completion": "This text code Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. Repeat this command several times to see that the Container is repeatedly killed and restarted:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. Repeat this command several times to see that the Container is repeatedly killed and restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19757"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container is killed, restarted, killed again, restarted again, and so on:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container is killed, restarted, killed again, restarted again, and so on:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19758"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example\nNAME READY STATUS RESTARTS AGE\nmemory-demo-2 0/1 OOMKilled 1 37s", "completion": "This bash code Assign Memory Resources to 
Containers and Pods | View detailed information about the Pod history:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod history:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19759"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example\nNAME READY STATUS RESTARTS AGE\nmemory-demo-2 1/1 Running 2 40s", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod history:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod history:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19760"} {"prompt": "Explain this bash code:\n\nkubectl describe pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container starts and fails repeatedly:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container starts and fails repeatedly:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19761"} {"prompt": "Explain this text code:\n\n... Normal Created Created container with id 66a3a20aa7980e61be4922780bf9d24d1a1d8b7395c09861225b0eba1b1f8511\n... 
Warning BackOff Back-off restarting failed container", "completion": "This text code Assign Memory Resources to Containers and Pods | View detailed information about your cluster's Nodes:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about your cluster's Nodes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19762"} {"prompt": "Explain this text code:\n\nWarning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child", "completion": "This text code Assign Memory Resources to Containers and Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19763"} {"prompt": "Explain this bash code:\n\nkubectl delete pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | Memory requests and limits are associated with Containers, but it is useful to think of a Pod as having a memory request and limit. The memory request for the Pod is the sum of the memory requests for", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | Memory requests and limits are associated with Containers, but it is useful to think of a Pod as having a memory request and limit. 
The memory request for the Pod is the sum of the memory requests for", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19764"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-3-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19765"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-3-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19766"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | View the Pod status:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View the Pod status:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19767"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-3 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Pod status is PENDING. That is, the Pod is not scheduled to run on any Node, and it will remain in the PENDING state indefinitely:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Pod status is PENDING. That is, the Pod is not scheduled to run on any Node, and it will remain in the PENDING state indefinitely:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19768"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-3 --namespace=mem-example\nNAME READY STATUS RESTARTS AGE\nmemory-demo-3 0/1 Pending 0 25s", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod, including events:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod, including events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19769"} {"prompt": "Explain this bash code:\n\nkubectl describe pod memory-demo-3 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient memory on the Nodes:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient memory on the Nodes:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19770"} {"prompt": "Complete this text code:\n\nEvents:\n ... Reason Message", "completion": " ------ -------\n ... FailedScheduling No nodes are available that match all of the following predicates:: Insufficient memory (3).", "language": "text", "type": "code-example", "context": "Assign Memory Resources to Containers and Pods | The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. For example, the foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19771"} {"prompt": "Explain this text code:\n\nEvents:\n ... Reason Message\n ------ -------\n ... FailedScheduling No nodes are available that match all of the following predicates:: Insufficient memory (3).", "completion": "This text code Assign Memory Resources to Containers and Pods | The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. For example, the foll", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. 
For example, the foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19772"} {"prompt": "Explain this bash code:\n\nkubectl delete pod memory-demo-3 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | If you do not specify a memory limit for a Container, one of the following situations applies:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | If you do not specify a memory limit for a Container, one of the following situations applies:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19773"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19774"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19775"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit.yaml --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | Verify that the Pod Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Verify that the Pod Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19776"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19777"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo --output=yaml --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19778"} {"prompt": "Complete 
this yaml code:\n\n...\nresources:\n requests:", "completion": " memory: 100Mi\n limits:\n memory: 200Mi\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19779"} {"prompt": "Explain this yaml code:\n\n...\nresources:\n requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "completion": "This yaml code Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19780"} {"prompt": "Explain this shell code:\n\nkubectl top pod memory-demo --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. 
This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19781"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\nmemory-demo <something> 162856960", "completion": "This text code Assign Memory Resources to Containers and Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19782"} {"prompt": "Explain this shell code:\n\nkubectl delete pod memory-demo --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the C", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. 
If a Container allocates more memory than its limit, the C", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19783"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-2-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19784"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-2-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": 
"k8s_00632_code_explain_19785"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19786"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | At this point, the Container might be running or killed. Repeat the preceding command until the Container is killed:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | At this point, the Container might be running or killed. 
Repeat the preceding command until the Container is killed:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19787"} {"prompt": "Explain this shell code:\n\nNAME READY STATUS RESTARTS AGE\nmemory-demo-2 0/1 OOMKilled 1 24s", "completion": "This shell code Assign Memory Resources to Containers and Pods | Get a more detailed view of the Container status:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Get a more detailed view of the Container status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19788"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo-2 --output=yaml --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | The output shows that the Container was killed because it is out of memory (OOM):", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container was killed because it is out of memory (OOM):", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19789"} {"prompt": "Complete this yaml code:\n\nlastState:\n terminated:\n containerID: 65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f", "completion": " exitCode: 137\n finishedAt: 2017-06-20T20:52:19Z\n reason: OOMKilled\n startedAt: null", "language": "yaml", "type": "kubernetes-manifest", "context": "Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. 
Repeat this command several times to see that the Container is repeatedly killed and restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19790"} {"prompt": "Explain this yaml code:\n\nlastState:\n terminated:\n containerID: 65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f\n exitCode: 137\n finishedAt: 2017-06-20T20:52:19Z\n reason: OOMKilled\n startedAt: null", "completion": "This yaml code Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. Repeat this command several times to see that the Container is repeatedly killed and restarted:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. Repeat this command several times to see that the Container is repeatedly killed and restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19791"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | The output shows that the Container is killed, restarted, killed again, restarted again, and so on:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container is killed, restarted, killed again, restarted again, and so on:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19792"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example\nNAME READY STATUS RESTARTS AGE\nmemory-demo-2 0/1 OOMKilled 1 37s", "completion": "This bash code Assign Memory Resources 
to Containers and Pods | View detailed information about the Pod history:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod history:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19793"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example\nNAME READY STATUS RESTARTS AGE\nmemory-demo-2 1/1 Running 2 40s", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod history:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod history:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19794"} {"prompt": "Explain this bash code:\n\nkubectl describe pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container starts and fails repeatedly:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container starts and fails repeatedly:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19795"} {"prompt": "Explain this text code:\n\n... Normal Created Created container with id 66a3a20aa7980e61be4922780bf9d24d1a1d8b7395c09861225b0eba1b1f8511\n... 
Warning BackOff Back-off restarting failed container", "completion": "This text code Assign Memory Resources to Containers and Pods | View detailed information about your cluster's Nodes:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about your cluster's Nodes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19796"} {"prompt": "Explain this text code:\n\nWarning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child", "completion": "This text code Assign Memory Resources to Containers and Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19797"} {"prompt": "Explain this shell code:\n\nkubectl delete pod memory-demo-2 --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | Memory requests and limits are associated with Containers, but it is useful to think of a Pod as having a memory request and limit. The memory request for the Pod is the sum of the memory requests for", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Memory requests and limits are associated with Containers, but it is useful to think of a Pod as having a memory request and limit. 
The memory request for the Pod is the sum of the memory requests for", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19798"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-3-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19799"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-3-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19800"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | View the Pod status:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | View the Pod status:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19801"} {"prompt": "Explain this shell code:\n\nkubectl get pod memory-demo-3 --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | The output shows that the Pod status is PENDING. That is, the Pod is not scheduled to run on any Node, and it will remain in the PENDING state indefinitely:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Pod status is PENDING. That is, the Pod is not scheduled to run on any Node, and it will remain in the PENDING state indefinitely:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19802"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-3 --namespace=mem-example\nNAME READY STATUS RESTARTS AGE\nmemory-demo-3 0/1 Pending 0 25s", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod, including events:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod, including events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19803"} {"prompt": "Explain this shell code:\n\nkubectl describe pod memory-demo-3 --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient memory on the Nodes:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient memory on the Nodes:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19804"} {"prompt": "Complete this text code:\n\nEvents:\n ... Reason Message", "completion": " ------ -------\n ... FailedScheduling No nodes are available that match all of the following predicates:: Insufficient memory (3).", "language": "text", "type": "code-example", "context": "Assign Memory Resources to Containers and Pods | The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. For example, the foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19805"} {"prompt": "Explain this text code:\n\nEvents:\n ... Reason Message\n ------ -------\n ... FailedScheduling No nodes are available that match all of the following predicates:: Insufficient memory (3).", "completion": "This text code Assign Memory Resources to Containers and Pods | The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. For example, the foll", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. 
For example, the foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19806"} {"prompt": "Explain this shell code:\n\nkubectl delete pod memory-demo-3 --namespace=mem-example", "completion": "This shell code Assign Memory Resources to Containers and Pods | If you do not specify a memory limit for a Container, one of the following situations applies:", "language": "shell", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | If you do not specify a memory limit for a Container, one of the following situations applies:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19807"} {"prompt": "Complete this yaml code:\n\npods/resource/memory-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:", "completion": " - name: memory-demo-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19808"} {"prompt": "Explain this yaml code:\n\npods/resource/memory-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19809"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19810"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"100Mi\"\n limits:\n memory: \"200Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | The args section in the configuration file provides arguments for the Container when it starts. 
The \"--vm-bytes\", \"150M\" arguments tell the Container to attempt to allocate 150 MiB of memory.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19811"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit.yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | Verify that the Pod Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | Verify that the Pod Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19812"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19813"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo --output=yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19814"} {"prompt": "Complete 
this text code:\n\n...\nresources:\n requests:", "completion": " memory: 100Mi\n limits:\n memory: 200Mi\n...", "language": "text", "type": "code-example", "context": "Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19815"} {"prompt": "Explain this text code:\n\n...\nresources:\n requests:\n memory: 100Mi\n limits:\n memory: 200Mi\n...", "completion": "This text code Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Run kubectl top to fetch the metrics for the pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19816"} {"prompt": "Explain this bash code:\n\nkubectl top pod memory-demo --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19817"} {"prompt": "Explain this bash code:\n\nkubectl delete pod memory-demo --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | A Container can exceed its memory request if the Node has memory available. 
But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the C", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the C", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19818"} {"prompt": "Complete this yaml code:\n\npods/resource/memory-request-limit-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:", "completion": " - name: memory-demo-2-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19819"} {"prompt": "Explain this yaml code:\n\npods/resource/memory-request-limit-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-2-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt 
to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19820"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-2-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19821"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-2\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-2-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"50Mi\"\n limits:\n memory: \"100Mi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"250M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers 
and Pods | In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19822"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19823"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | At this point, the Container might be running or killed. Repeat the preceding command until the Container is killed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | At this point, the Container might be running or killed. 
Repeat the preceding command until the Container is killed:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19824"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmemory-demo-2 0/1 OOMKilled 1 24s", "completion": "This text code Assign Memory Resources to Containers and Pods | Get a more detailed view of the Container status:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | Get a more detailed view of the Container status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19825"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --output=yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container was killed because it is out of memory (OOM):", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container was killed because it is out of memory (OOM):", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19826"} {"prompt": "Complete this text code:\n\nlastState:\n terminated:\n containerID: 65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f", "completion": " exitCode: 137\n finishedAt: 2017-06-20T20:52:19Z\n reason: OOMKilled\n startedAt: null", "language": "text", "type": "code-example", "context": "Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. 
Repeat this command several times to see that the Container is repeatedly killed and restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19827"} {"prompt": "Explain this text code:\n\nlastState:\n terminated:\n containerID: 65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f\n exitCode: 137\n finishedAt: 2017-06-20T20:52:19Z\n reason: OOMKilled\n startedAt: null", "completion": "This text code Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. Repeat this command several times to see that the Container is repeatedly killed and restarted:", "language": "text", "type": "code-example_explanation", "context": "Assign Memory Resources to Containers and Pods | The Container in this exercise can be restarted, so the kubelet restarts it. Repeat this command several times to see that the Container is repeatedly killed and restarted:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19828"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container is killed, restarted, killed again, restarted again, and so on:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container is killed, restarted, killed again, restarted again, and so on:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19829"} {"prompt": "Explain this bash code:\n\nkubectl delete pod memory-demo-2 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | Memory requests and limits are associated with 
Containers, but it is useful to think of a Pod as having a memory request and limit. The memory request for the Pod is the sum of the memory requests for", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | Memory requests and limits are associated with Containers, but it is useful to think of a Pod as having a memory request and limit. The memory request for the Pod is the sum of the memory requests for", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19830"} {"prompt": "Complete this yaml code:\n\npods/resource/memory-request-limit-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:", "completion": " - name: memory-demo-3-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19831"} {"prompt": "Explain this yaml code:\n\npods/resource/memory-request-limit-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-3-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19832"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-3-ctr", "completion": " image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "language": "yaml", "type": "pod-definition", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_completion_19833"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: memory-demo-3\n namespace: mem-example\nspec:\n containers:\n - name: memory-demo-3-ctr\n image: polinux/stress\n resources:\n requests:\n memory: \"1000Gi\"\n limits:\n memory: \"1000Gi\"\n command: [\"stress\"]\n args: [\"--vm\", \"1\", \"--vm-bytes\", \"150M\", \"--vm-hang\", \"1\"]", "completion": "This yaml code Assign Memory Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Memory Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19834"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | View the Pod status:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", 
"id": "k8s_00632_code_explain_19835"} {"prompt": "Explain this bash code:\n\nkubectl get pod memory-demo-3 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Pod status is PENDING. That is, the Pod is not scheduled to run on any Node, and it will remain in the PENDING state indefinitely:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Pod status is PENDING. That is, the Pod is not scheduled to run on any Node, and it will remain in the PENDING state indefinitely:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19836"} {"prompt": "Explain this bash code:\n\nkubectl describe pod memory-demo-3 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient memory on the Nodes:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient memory on the Nodes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19837"} {"prompt": "Explain this bash code:\n\nkubectl delete pod memory-demo-3 --namespace=mem-example", "completion": "This bash code Assign Memory Resources to Containers and Pods | If you do not specify a memory limit for a Container, one of the following situations applies:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Memory Resources to Containers and Pods | If you do not specify a memory limit for a Container, one of the following situations applies:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/", "id": "k8s_00632_code_explain_19838"} {"prompt": "Explain this go code:\n\nfunc ScoreNode(_ *v1.pod, n *v1.Node) (int, error) {\n return getBlinkingLightCount(n)\n}", "completion": "This go code Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "language": "go", "type": "code-example_explanation", "context": "Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19839"} {"prompt": "Complete this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)", "completion": " }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "language": "go", "type": "code-example", "context": "Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_completion_19840"} {"prompt": "Explain this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)\n }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "completion": "This go code Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "language": "go", "type": "code-example_explanation", "context": "Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": 
"https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19841"} {"prompt": "Complete this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool", "completion": "}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "language": "text", "type": "code-example", "context": "Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_completion_19842"} {"prompt": "Explain this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool\n}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "completion": "This text code Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "language": "text", "type": "code-example_explanation", "context": "Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19843"} {"prompt": "Explain this go code:\n\nfunc ScoreNode(_ *v1.pod, n *v1.Node) (int, error) {\n return getBlinkingLightCount(n)\n}", "completion": "This go code Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. 
To fix this, BlinkingLightScorer should also register for this extension point.", "language": "go", "type": "code-example_explanation", "context": "Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19844"} {"prompt": "Complete this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)", "completion": " }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "language": "go", "type": "code-example", "context": "Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_completion_19845"} {"prompt": "Explain this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)\n }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "completion": "This go code Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "language": "go", "type": "code-example_explanation", "context": "Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19846"} {"prompt": "Complete this go code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool", "completion": "}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, 
*v1.pod) error\n}\n\n// ...", "language": "go", "type": "code-example", "context": "Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_completion_19847"} {"prompt": "Explain this go code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool\n}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "completion": "This go code Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "language": "go", "type": "code-example_explanation", "context": "Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19848"} {"prompt": "Explain this go code:\n\nfunc ScoreNode(_ *v1.pod, n *v1.Node) (int, error) {\n return getBlinkingLightCount(n)\n}", "completion": "This go code Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. To fix this, BlinkingLightScorer should also register for this extension point.", "language": "go", "type": "code-example_explanation", "context": "Scheduling Framework | However, the maximum count of blinking lights may be small compared to NodeScoreMax. 
To fix this, BlinkingLightScorer should also register for this extension point.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19849"} {"prompt": "Complete this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)", "completion": " }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "language": "go", "type": "code-example", "context": "Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_completion_19850"} {"prompt": "Explain this go code:\n\nfunc NormalizeScores(scores map[string]int) {\n highest := 0\n for _, score := range scores {\n highest = max(highest, score)\n }\n for node, score := range scores {\n scores[node] = score*NodeScoreMax/highest\n }\n}", "completion": "This go code Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "language": "go", "type": "code-example_explanation", "context": "Scheduling Framework | If any NormalizeScore plugin returns an error, the scheduling cycle is aborted.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19851"} {"prompt": "Complete this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool", "completion": "}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "language": "text", "type": "code-example", "context": "Scheduling Framework | You can enable or disable plugins in the scheduler configuration. 
If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_completion_19852"} {"prompt": "Explain this text code:\n\ntype Plugin interface {\n Name() string\n}\n\ntype QueueSortPlugin interface {\n Plugin\n Less(*v1.pod, *v1.pod) bool\n}\n\ntype PreFilterPlugin interface {\n Plugin\n PreFilter(context.Context, *framework.CycleState, *v1.pod) error\n}\n\n// ...", "completion": "This text code Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "language": "text", "type": "code-example_explanation", "context": "Scheduling Framework | You can enable or disable plugins in the scheduler configuration. If you are using Kubernetes v1.18 or later, most scheduling plugins are in use and enabled by default.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/", "id": "k8s_00633_code_explain_19853"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the service is running on>\nFOO_SERVICE_PORT=<the port the service is running on>", "completion": "This text code Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "language": "text", "type": "code-example_explanation", "context": "Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "source": "https://kubernetes.io/docs/concepts/containers/container-environment/", "id": "k8s_00634_code_explain_19854"} {"prompt": "Explain this shell code:\n\nFOO_SERVICE_HOST=<the host the service is running on>\nFOO_SERVICE_PORT=<the port the service is running on>", "completion": "This shell code Container Environment | Services have 
dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "language": "shell", "type": "code-example_explanation", "context": "Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "source": "https://kubernetes.io/docs/concepts/containers/container-environment/", "id": "k8s_00634_code_explain_19855"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the service is running on>\nFOO_SERVICE_PORT=<the port the service is running on>", "completion": "This text code Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "language": "text", "type": "code-example_explanation", "context": "Container Environment | Services have dedicated IP addresses and are available to the Container via DNS, if DNS addon is enabled.", "source": "https://kubernetes.io/docs/concepts/containers/container-environment/", "id": "k8s_00634_code_explain_19856"} {"prompt": "Complete this text code:\n\n\"ExceptionList\": [\n \"10.244.0.0/16\", # Cluster subnet", "completion": " \"10.96.0.0/12\", # Service subnet\n \"10.127.130.0/24\" # Management (host) subnet\n ]", "language": "text", "type": "code-example", "context": "Windows debugging tips | My Windows node cannot access NodePort type Services", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_completion_19857"} {"prompt": "Explain this text code:\n\n\"ExceptionList\": [\n \"10.244.0.0/16\", # Cluster subnet\n \"10.96.0.0/12\", # Service subnet\n \"10.127.130.0/24\" # Management (host) subnet\n ]", "completion": "This text code Windows debugging tips | My Windows node cannot access NodePort type Services", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | My Windows node cannot access NodePort type Services", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19858"} {"prompt": "Explain this text code:\n\nC:\\k\\kube-proxy.exe --hostname-override=$(hostname)", "completion": "This text code Windows debugging tips | My Windows node cannot access my services using the service IP", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | My Windows node cannot access my services using the service IP", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19859"} {"prompt": "Explain this text code:\n\nGet-HnsNetwork | ? Name -ieq \"cbr0\"\nGet-NetAdapter | ? Name -Like \"vEthernet (Ethernet*\"", "completion": "This text code Windows debugging tips | Often it is worthwhile to modify the InterfaceName parameter of the start.ps1 script, in cases where the host's network adapter isn't \"Ethernet\". Otherwise, consult the output of the start-kubelet.ps1", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | Often it is worthwhile to modify the InterfaceName parameter of the start.ps1 script, in cases where the host's network adapter isn't \"Ethernet\". 
Otherwise, consult the output of the start-kubelet.ps1", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19860"} {"prompt": "Explain this powershell code:\n\n[Environment]::SetEnvironmentVariable(\"HTTP_PROXY\", \"http://proxy.example.com:80/\", [EnvironmentVariableTarget]::Machine)\n[Environment]::SetEnvironmentVariable(\"HTTPS_PROXY\", \"http://proxy.example.com:443/\", [EnvironmentVariableTarget]::Machine)", "completion": "This powershell code Windows debugging tips | With Flannel, my nodes are having issues after rejoining a cluster", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | With Flannel, my nodes are having issues after rejoining a cluster", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19861"} {"prompt": "Explain this text code:\n\nRemove-Item C:\\k\\SourceVip.json\nRemove-Item C:\\k\\SourceVipRequest.json", "completion": "This text code Windows debugging tips | Flanneld is stuck in \"Waiting for the Network to be created\"", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | Flanneld is stuck in \"Waiting for the Network to be created\"", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19862"} {"prompt": "Explain this powershell code:\n\n[Environment]::SetEnvironmentVariable(\"NODE_NAME\", \"<Windows_Worker_Hostname>\")\nC:\\flannel\\flanneld.exe --kubeconfig-file=c:\\k\\config --iface=<Windows_Worker_Node_IP> --ip-masq=1 --kube-subnet-mgr=1", "completion": "This powershell code Windows debugging tips | My Windows Pods cannot launch because of missing /run/flannel/subnet.env", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | My Windows Pods cannot launch because of missing /run/flannel/subnet.env", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", 
"id": "k8s_00635_code_explain_19863"} {"prompt": "Complete this text code:\n\nFLANNEL_NETWORK=10.244.0.0/16\nFLANNEL_SUBNET=10.244.4.1/24", "completion": "FLANNEL_MTU=1500\nFLANNEL_IPMASQ=true", "language": "text", "type": "code-example", "context": "Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_completion_19864"} {"prompt": "Explain this text code:\n\nFLANNEL_NETWORK=10.244.0.0/16\nFLANNEL_SUBNET=10.244.4.1/24\nFLANNEL_MTU=1500\nFLANNEL_IPMASQ=true", "completion": "This text code Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19865"} {"prompt": "Complete this conf code:\n\n\"ExceptionList\": [\n \"10.244.0.0/16\", # Cluster subnet", "completion": " \"10.96.0.0/12\", # Service subnet\n \"10.127.130.0/24\" # Management (host) subnet\n ]", "language": "conf", "type": "code-example", "context": "Windows debugging tips | My Windows node cannot access NodePort type Services", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_completion_19866"} {"prompt": "Explain this conf code:\n\n\"ExceptionList\": [\n \"10.244.0.0/16\", # Cluster subnet\n \"10.96.0.0/12\", # Service subnet\n \"10.127.130.0/24\" # Management (host) subnet\n ]", "completion": "This conf code Windows debugging tips | My Windows node cannot access NodePort type Services", "language": "conf", "type": "code-example_explanation", 
"context": "Windows debugging tips | My Windows node cannot access NodePort type Services", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19867"} {"prompt": "Explain this powershell code:\n\nC:\\k\\kube-proxy.exe --hostname-override=$(hostname)", "completion": "This powershell code Windows debugging tips | My Windows node cannot access my services using the service IP", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | My Windows node cannot access my services using the service IP", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19868"} {"prompt": "Explain this powershell code:\n\nGet-HnsNetwork | ? Name -ieq \"cbr0\"\nGet-NetAdapter | ? Name -Like \"vEthernet (Ethernet*\"", "completion": "This powershell code Windows debugging tips | Often it is worthwhile to modify the InterfaceName parameter of the start.ps1 script, in cases where the host's network adapter isn't \"Ethernet\". Otherwise, consult the output of the start-kubelet.ps1", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | Often it is worthwhile to modify the InterfaceName parameter of the start.ps1 script, in cases where the host's network adapter isn't \"Ethernet\". 
Otherwise, consult the output of the start-kubelet.ps1", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19869"} {"prompt": "Explain this powershell code:\n\n[Environment]::SetEnvironmentVariable(\"HTTP_PROXY\", \"http://proxy.example.com:80/\", [EnvironmentVariableTarget]::Machine)\n[Environment]::SetEnvironmentVariable(\"HTTPS_PROXY\", \"http://proxy.example.com:443/\", [EnvironmentVariableTarget]::Machine)", "completion": "This powershell code Windows debugging tips | With Flannel, my nodes are having issues after rejoining a cluster", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | With Flannel, my nodes are having issues after rejoining a cluster", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19870"} {"prompt": "Explain this powershell code:\n\nRemove-Item C:\\k\\SourceVip.json\nRemove-Item C:\\k\\SourceVipRequest.json", "completion": "This powershell code Windows debugging tips | Flanneld is stuck in \"Waiting for the Network to be created\"", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | Flanneld is stuck in \"Waiting for the Network to be created\"", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19871"} {"prompt": "Explain this powershell code:\n\n[Environment]::SetEnvironmentVariable(\"NODE_NAME\", \"<Windows_Worker_Hostname>\")\nC:\\flannel\\flanneld.exe --kubeconfig-file=c:\\k\\config --iface=<Windows_Worker_Node_IP> --ip-masq=1 --kube-subnet-mgr=1", "completion": "This powershell code Windows debugging tips | My Windows Pods cannot launch because of missing /run/flannel/subnet.env", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | My Windows Pods cannot launch because of missing /run/flannel/subnet.env", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19872"} {"prompt": "Complete this env code:\n\nFLANNEL_NETWORK=10.244.0.0/16\nFLANNEL_SUBNET=10.244.4.1/24", "completion": "FLANNEL_MTU=1500\nFLANNEL_IPMASQ=true", "language": "env", "type": "code-example", "context": "Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_completion_19873"} {"prompt": "Explain this env code:\n\nFLANNEL_NETWORK=10.244.0.0/16\nFLANNEL_SUBNET=10.244.4.1/24\nFLANNEL_MTU=1500\nFLANNEL_IPMASQ=true", "completion": "This env code Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "language": "env", "type": "code-example_explanation", "context": "Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19874"} {"prompt": "Explain this text code:\n\nC:\\k\\kube-proxy.exe --hostname-override=$(hostname)", "completion": "This text code Windows debugging tips | My Windows node cannot access my services using the service IP", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | My Windows node cannot access my services using the service IP", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19875"} {"prompt": "Explain this text code:\n\nGet-HnsNetwork | ? Name -ieq \"cbr0\"\nGet-NetAdapter | ? 
Name -Like \"vEthernet (Ethernet*\"", "completion": "This text code Windows debugging tips | Often it is worthwhile to modify the InterfaceName parameter of the start.ps1 script, in cases where the host's network adapter isn't \"Ethernet\". Otherwise, consult the output of the start-kubelet.ps1", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | Often it is worthwhile to modify the InterfaceName parameter of the start.ps1 script, in cases where the host's network adapter isn't \"Ethernet\". Otherwise, consult the output of the start-kubelet.ps1", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19876"} {"prompt": "Explain this powershell code:\n\n[Environment]::SetEnvironmentVariable(\"HTTP_PROXY\", \"http://proxy.example.com:80/\", [EnvironmentVariableTarget]::Machine)\n[Environment]::SetEnvironmentVariable(\"HTTPS_PROXY\", \"http://proxy.example.com:443/\", [EnvironmentVariableTarget]::Machine)", "completion": "This powershell code Windows debugging tips | With Flannel, my nodes are having issues after rejoining a cluster", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | With Flannel, my nodes are having issues after rejoining a cluster", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19877"} {"prompt": "Explain this text code:\n\nRemove-Item C:\\k\\SourceVip.json\nRemove-Item C:\\k\\SourceVipRequest.json", "completion": "This text code Windows debugging tips | Flanneld is stuck in \"Waiting for the Network to be created\"", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | Flanneld is stuck in \"Waiting for the Network to be created\"", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19878"} {"prompt": "Explain this powershell 
code:\n\n[Environment]::SetEnvironmentVariable(\"NODE_NAME\", \"<Windows_Worker_Hostname>\")\nC:\\flannel\\flanneld.exe --kubeconfig-file=c:\\k\\config --iface=<Windows_Worker_Node_IP> --ip-masq=1 --kube-subnet-mgr=1", "completion": "This powershell code Windows debugging tips | My Windows Pods cannot launch because of missing /run/flannel/subnet.env", "language": "powershell", "type": "code-example_explanation", "context": "Windows debugging tips | My Windows Pods cannot launch because of missing /run/flannel/subnet.env", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19879"} {"prompt": "Complete this text code:\n\nFLANNEL_NETWORK=10.244.0.0/16\nFLANNEL_SUBNET=10.244.4.1/24", "completion": "FLANNEL_MTU=1500\nFLANNEL_IPMASQ=true", "language": "text", "type": "code-example", "context": "Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_completion_19880"} {"prompt": "Explain this text code:\n\nFLANNEL_NETWORK=10.244.0.0/16\nFLANNEL_SUBNET=10.244.4.1/24\nFLANNEL_MTU=1500\nFLANNEL_IPMASQ=true", "completion": "This text code Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "language": "text", "type": "code-example_explanation", "context": "Windows debugging tips | If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/windows/", "id": "k8s_00635_code_explain_19881"} {"prompt": "Explain this text code:\n\n--runtime-config=storage.k8s.io/v1beta1/csistoragecapacities", "completion": "This text code API Overview | Kubernetes stores its serialized state in terms of the API resources by writing 
them into etcd.", "language": "text", "type": "code-example_explanation", "context": "API Overview | Kubernetes stores its serialized state in terms of the API resources by writing them into etcd.", "source": "https://kubernetes.io/docs/reference/using-api/", "id": "k8s_00636_code_explain_19882"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:", "completion": " ...\n - type: RBAC\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19883"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:\n ...\n - type: RBAC\n ...", "completion": "This yaml code Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19884"} {"prompt": "Explain this text code:\n\nkube-apiserver --authorization-mode=...,RBAC --other-options --more-options", "completion": "This text code Using RBAC Authorization | The RBAC API declares four kinds of Kubernetes object: Role, ClusterRole, RoleBinding and ClusterRoleBinding. 
You can describe or amend the RBAC objects using tools such as kubectl, just like any othe", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | The RBAC API declares four kinds of Kubernetes object: Role, ClusterRole, RoleBinding and ClusterRoleBinding. You can describe or amend the RBAC objects using tools such as kubectl, just like any othe", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19885"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19886"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. 
Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19887"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: secret-reader\nrules:", "completion": "- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19888"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: secret-reader\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19889"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: 
read-pods\n namespace: default\nsubjects:", "completion": "# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19890"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: read-pods\n namespace: default\nsubjects:\n# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. 
This kind of reference lets you define a set of com", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19891"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.\n # This only grants permissions within the \"development\" namespace.", "completion": " namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19892"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.\n # This only grants permissions within the \"development\" namespace.\n namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19893"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:", "completion": "- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19894"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:\n- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. 
If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19895"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: pod-and-pod-logs-reader\nrules:\n- apiGroups: [\"\"]\n resources: [\"pods\", \"pods/log\"]\n verbs: [\"get\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that r", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19896"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: pod-and-pod-logs-reader\nrules:\n- apiGroups: [\"\"]\n resources: [\"pods\", \"pods/log\"]\n verbs: [\"get\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that r", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. 
Here is an example that r", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19897"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: configmap-updater\nrules:", "completion": "- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-configmap\"]\n verbs: [\"update\", \"get\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Rather than referring to individual resources, apiGroups, and verbs, you can use the wildcard * symbol to refer to all such objects. For nonResourceURLs, you can use the wildcard * as a suffix glob ma", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19898"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: configmap-updater\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-configmap\"]\n verbs: [\"update\", \"get\"]", "completion": "This yaml code Using RBAC Authorization | Rather than referring to individual resources, apiGroups, and verbs, you can use the wildcard * symbol to refer to all such objects. For nonResourceURLs, you can use the wildcard * as a suffix glob ma", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Rather than referring to individual resources, apiGroups, and verbs, you can use the wildcard * symbol to refer to all such objects. 
For nonResourceURLs, you can use the wildcard * as a suffix glob ma", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19899"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE\nrules:\n- apiGroups: [\"example.com\"]\n resources: [\"*\"]\n verbs: [\"*\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. The aggre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19900"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE\nrules:\n- apiGroups: [\"example.com\"]\n resources: [\"*\"]\n verbs: [\"*\"]", "completion": "This yaml code Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. The aggre", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. 
The aggre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19901"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring", "completion": "aggregationRule:\n clusterRoleSelectors:\n - matchLabels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\nrules: [] # The control plane automatically fills in the rules", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. Here is an example tha", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19902"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring\naggregationRule:\n clusterRoleSelectors:\n - matchLabels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\nrules: [] # The control plane automatically fills in the rules", "completion": "This yaml code Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. Here is an example tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. 
Here is an example tha", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19903"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring-endpoints\n labels:\n rbac.example.com/aggregate-to-monitoring: \"true\"", "completion": "# When you create the \"monitoring-endpoints\" ClusterRole,\n# the rules below will be added to the \"monitoring\" ClusterRole.\nrules:\n- apiGroups: [\"\"]\n resources: [\"services\", \"endpointslices\", \"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19904"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring-endpoints\n labels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\n# When you create the \"monitoring-endpoints\" ClusterRole,\n# the rules below will be added to the \"monitoring\" ClusterRole.\nrules:\n- apiGroups: [\"\"]\n resources: [\"services\", \"endpointslices\", \"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. 
This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19905"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: aggregate-cron-tabs-edit\n labels:\n # Add these permissions to the \"admin\" and \"edit\" default roles.\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: aggregate-cron-tabs-view\n labels:\n # Add these permissions to the \"view\" default role.\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19906"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: aggregate-cron-tabs-edit\n labels:\n # Add these permissions to the \"admin\" and \"edit\" default roles.\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]\n---\nkind: ClusterRole\napiVersion: 
rbac.authorization.k8s.io/v1\nmetadata:\n name: aggregate-cron-tabs-view\n labels:\n # Add these permissions to the \"view\" default role.\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19907"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the resource part of their URL) in the \"apps\" API groups:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19908"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This text code Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the resource part of their URL) in the \"apps\" API groups:", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the 
resource part of their URL) in the \"apps\" API groups:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19909"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"apps\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Deployment\n # objects is \"deployments\"\n resources: [\"deployments\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19910"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"apps\"]\n #\n # at the HTTP level, the name of the resource for accessing Deployment\n # objects is \"deployments\"\n resources: [\"deployments\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This text code Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19911"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]", "completion": " verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"batch\"]\n #\n # at the HTTP level, the name of the resource for accessing Job\n # objects is \"jobs\"\n resources: [\"jobs\"]\n verbs: 
[\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19912"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"batch\"]\n #\n # at the HTTP level, the name of the resource for accessing Job\n # objects is \"jobs\"\n resources: [\"jobs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This text code Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19913"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap", "completion": " # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-config\"]\n verbs: [\"get\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19914"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-config\"]\n verbs: [\"get\"]", "completion": "This text code Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19915"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Node\n # objects is \"nodes\"\n resources: [\"nodes\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19916"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Node\n # objects is \"nodes\"\n resources: [\"nodes\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This text code Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a 
ClusterRoleBinding to be effective):", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19917"} {"prompt": "Explain this text code:\n\nrules:\n- nonResourceURLs: [\"/healthz\", \"/healthz/*\"] # '*' in a nonResourceURL is a suffix glob match\n verbs: [\"get\", \"post\"]", "completion": "This text code Using RBAC Authorization | A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19918"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: User", "completion": " name: \"alice@example.com\"\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For a group named frontend-admins:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19919"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: User\n name: \"alice@example.com\"\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For a group named frontend-admins:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For a group named frontend-admins:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19920"} {"prompt": "Complete this yaml 
code:\n\nsubjects:\n- kind: Group", "completion": " name: \"frontend-admins\"\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19921"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: \"frontend-admins\"\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19922"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: ServiceAccount", "completion": " name: default\n namespace: kube-system", "language": "yaml", "type": "service-definition", "context": "Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19923"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: ServiceAccount\n name: default\n namespace: kube-system", "completion": "This yaml code Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "language": "yaml", "type": "service-definition_explanation", "context": "Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19924"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:serviceaccounts:qa\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Using RBAC Authorization | For all service accounts in any namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19925"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:serviceaccounts:qa\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all service accounts in any namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all service accounts in any namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19926"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:serviceaccounts\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all authenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19927"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:serviceaccounts\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all authenticated users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all authenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19928"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:authenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all unauthenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19929"} {"prompt": "Explain this yaml 
code:\n\nsubjects:\n- kind: Group\n name: system:authenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all unauthenticated users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all unauthenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19930"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19931"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19932"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group\n name: system:authenticated", "completion": " apiGroup: rbac.authorization.k8s.io\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. 
Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19933"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:authenticated\n apiGroup: rbac.authorization.k8s.io\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19934"} {"prompt": "Complete this text code:\n\nmetadata:\n labels:", "completion": " rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. 
This role does not allow write access", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19935"} {"prompt": "Explain this text code:\n\nmetadata:\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"", "completion": "This text code Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19936"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: role-grantor\nrules:\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"rolebindings\"]\n verbs: [\"create\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"clusterroles\"]\n verbs: [\"bind\"]\n # omit resourceNames to allow binding any ClusterRole\n resourceNames: [\"admin\",\"edit\",\"view\"]", "completion": "---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: role-grantor-binding\n namespace: user-1-namespace\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: role-grantor\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: user-1", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user 
to grant permissions they do not yet have. To bootstrap initial roles and role bindings:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19937"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: role-grantor\nrules:\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"rolebindings\"]\n verbs: [\"create\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"clusterroles\"]\n verbs: [\"bind\"]\n # omit resourceNames to allow binding any ClusterRole\n resourceNames: [\"admin\",\"edit\",\"view\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: role-grantor-binding\n namespace: user-1-namespace\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: role-grantor\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: user-1", "completion": "This yaml code Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. To bootstrap initial roles and role bindings:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. 
To bootstrap initial roles and role bindings:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19938"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code Using RBAC Authorization | Create a Role named \"pod-reader\" with resourceNames specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"pod-reader\" with resourceNames specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19939"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code Using RBAC Authorization | Create a Role named \"foo\" with apiGroups specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"foo\" with apiGroups specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19940"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=replicasets.apps", "completion": "This bash code Using RBAC Authorization | Create a Role named \"foo\" with subresource permissions:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"foo\" with subresource permissions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19941"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code Using RBAC Authorization | Create a Role named \"my-component-lease-holder\" with permissions to get/update a resource with a 
specific name:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"my-component-lease-holder\" with permissions to get/update a resource with a specific name:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19942"} {"prompt": "Explain this bash code:\n\nkubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component", "completion": "This bash code Using RBAC Authorization | Creates a ClusterRole. Examples:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Creates a ClusterRole. Examples:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19943"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" with resourceNames specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" with resourceNames specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19944"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroups specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroups specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19945"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole 
foo --verb=get,list,watch --resource=replicasets.apps", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroup specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroup specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19946"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"foo\" with subresource permissions:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with subresource permissions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19947"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"foo\" with nonResourceURL specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with nonResourceURL specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19948"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"monitoring\" with an aggregationRule specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"monitoring\" with an aggregationRule specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19949"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme", "completion": "This bash code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"admin\" ClusterRole to a user named \"bob\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"admin\" ClusterRole to a user named \"bob\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19950"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme", "completion": "This bash code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to the service account in the namespace \"acme\" named \"myapp\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to the service account in the namespace \"acme\" named \"myapp\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19951"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme", "completion": "This bash code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to a service account in the namespace \"myappnamespace\" named \"myapp\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to a service account in the namespace \"myappnamespace\" named \"myapp\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19952"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root", "completion": "This bash code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"cluster-admin\" ClusterRole to a user named \"root\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"cluster-admin\" ClusterRole to a user named \"root\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19953"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy", "completion": "This bash code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"system:node-proxier\" ClusterRole to a user named \"system:kube-proxy\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"system:node-proxier\" ClusterRole to a user named \"system:kube-proxy\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19954"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp", "completion": "This bash code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"view\" ClusterRole to a service account named \"myapp\" in the namespace \"acme\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"view\" ClusterRole to a service account named \"myapp\" in the namespace \"acme\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19955"} {"prompt": "Explain this bash code:\n\nkubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client", "completion": "This bash code Using RBAC Authorization | Test applying a manifest file of RBAC objects, displaying changes that would be made:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Test applying a manifest file of RBAC objects, displaying changes that would be made:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19956"} {"prompt": "Explain this bash code:\n\nkubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions", "completion": "This bash code Using RBAC Authorization | Apply a manifest file of RBAC objects, removing any extra permissions and any extra subjects:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Apply a manifest file of RBAC objects, removing any extra permissions and any extra subjects:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19957"} {"prompt": "Complete this bash code:\n\nkubectl create rolebinding my-sa-view \\\n --clusterrole=view \\", "completion": " --serviceaccount=my-namespace:my-sa \\\n --namespace=my-namespace", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | Grant a role to an application-specific service account (best practice)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19958"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding my-sa-view \\\n --clusterrole=view \\\n --serviceaccount=my-namespace:my-sa \\\n --namespace=my-namespace", "completion": "This bash code Using RBAC Authorization | Grant a role to an application-specific service account (best practice)", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a role to an application-specific service account (best practice)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19959"} {"prompt": "Complete this bash code:\n\nkubectl create rolebinding default-view \\\n --clusterrole=view \\", "completion": " --serviceaccount=my-namespace:default \\\n --namespace=my-namespace", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19960"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding default-view \\\n --clusterrole=view \\\n --serviceaccount=my-namespace:default \\\n --namespace=my-namespace", "completion": "This bash code Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19961"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding add-on-cluster-admin \\\n --clusterrole=cluster-admin \\\n --serviceaccount=kube-system:default", "completion": "This bash code Using RBAC Authorization | Many add-ons run as the \"default\" service account in the kube-system namespace. To allow those add-ons to run with super-user access, grant cluster-admin permissions to the \"default\" service account in the kube-system namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Many add-ons run as the \"default\" service account in the kube-system namespace. To allow those add-ons to run with super-user access, grant cluster-admin permissions to the \"default\" service account in the kube-system namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19962"} {"prompt": "Complete this bash code:\n\nkubectl create rolebinding serviceaccounts-view \\\n --clusterrole=view \\", "completion": " --group=system:serviceaccounts:my-namespace \\\n --namespace=my-namespace", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | Grant a role to all service accounts in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19963"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding serviceaccounts-view \\\n --clusterrole=view \\\n --group=system:serviceaccounts:my-namespace \\\n --namespace=my-namespace", "completion": "This bash code Using RBAC Authorization | Grant a role to all service accounts in a namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a role to all service accounts in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19964"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding serviceaccounts-view \\\n --clusterrole=view 
\\\n --group=system:serviceaccounts", "completion": "This bash code Using RBAC Authorization | Grant a limited role to all service accounts cluster-wide (discouraged)", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a limited role to all service accounts cluster-wide (discouraged)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19965"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding serviceaccounts-cluster-admin \\\n --clusterrole=cluster-admin \\\n --group=system:serviceaccounts", "completion": "This bash code Using RBAC Authorization | Grant super-user access to all service accounts cluster-wide (strongly discouraged)", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant super-user access to all service accounts cluster-wide (strongly discouraged)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19966"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. 
This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise\n be accessible, and can circumvent network policies or security controls", "completion": " intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. \n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-2021-25740, this access is not part of the aggregated roles in clusters that you create using Kubernetes v1.22 or later.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19967"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise\n be accessible, and can circumvent network policies or security controls\n intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. 
\n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "completion": "This yaml code Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-2021-25740, this access is not part of the aggregated roles in clusters that you create using Kubernetes v1.22 or later.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-2021-25740, this access is not part of the aggregated roles in clusters that you create using Kubernetes v1.22 or later.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19968"} {"prompt": "Explain this text code:\n\n--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json", "completion": "This text code Using RBAC Authorization | To explain that first command line option in detail: if earlier authorizers, such as Node, deny a request, then the RBAC authorizer attempts to authorize the API request. If RBAC also denies that API", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | To explain that first command line option in detail: if earlier authorizers, such as Node, deny a request, then the RBAC authorizer attempts to authorize the API request. 
If RBAC also denies that API", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19969"} {"prompt": "Complete this bash code:\n\nkubectl create clusterrolebinding permissive-binding \\\n --clusterrole=cluster-admin \\", "completion": " --user=admin \\\n --user=kubelet \\\n --group=system:serviceaccounts", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | You can replicate a permissive ABAC policy using RBAC role bindings. Warning: The following policy allows ALL service accounts to act as cluster administrators. This is not a recommended policy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19970"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding permissive-binding \\\n --clusterrole=cluster-admin \\\n --user=admin \\\n --user=kubelet \\\n --group=system:serviceaccounts", "completion": "This bash code Using RBAC Authorization | You can replicate a permissive ABAC policy using RBAC role bindings. Warning: The following policy allows ALL service accounts to act as cluster administrators. This is not a recommended policy.", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | You can replicate a permissive ABAC policy using RBAC role bindings. Warning: The following policy allows ALL service accounts to act as cluster administrators. This is not a recommended policy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19971"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:", "completion": " ...\n - type: RBAC\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | To enable RBAC, start the API server with the --authorization-config flag set to a file that includes the RBAC authorizer; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": 
"k8s_00637_code_completion_19972"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:\n ...\n - type: RBAC\n ...", "completion": "This yaml code Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19973"} {"prompt": "Explain this shell code:\n\nkube-apiserver --authorization-mode=...,RBAC --other-options --more-options", "completion": "This shell code Using RBAC Authorization | The RBAC API declares four kinds of Kubernetes object: Role, ClusterRole, RoleBinding and ClusterRoleBinding. You can describe or amend the RBAC objects using tools such as kubectl, just like any othe", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | The RBAC API declares four kinds of Kubernetes object: Role, ClusterRole, RoleBinding and ClusterRoleBinding. You can describe or amend the RBAC objects using tools such as kubectl, just like any othe", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19974"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. 
Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19975"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19976"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: secret-reader\nrules:", "completion": "- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19977"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: secret-reader\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, 
the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19978"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: read-pods\n namespace: default\nsubjects:", "completion": "# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. 
This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19979"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: read-pods\n namespace: default\nsubjects:\n# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. This kind of reference lets you define a set of com", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. 
This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19980"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.\n # This only grants permissions within the \"development\" namespace.", "completion": " namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19981"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.\n # This only grants permissions within the \"development\" namespace.\n namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19982"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:", "completion": "- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19983"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:\n- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. 
If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19984"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: pod-and-pod-logs-reader\nrules:\n- apiGroups: [\"\"]\n resources: [\"pods\", \"pods/log\"]\n verbs: [\"get\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that r", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19985"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: pod-and-pod-logs-reader\nrules:\n- apiGroups: [\"\"]\n resources: [\"pods\", \"pods/log\"]\n verbs: [\"get\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that r", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. 
Here is an example that r", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19986"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: configmap-updater\nrules:", "completion": "- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-configmap\"]\n verbs: [\"update\", \"get\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that restricts its subject to only get or update a ConfigMap named my-configmap:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19987"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: configmap-updater\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-configmap\"]\n verbs: [\"update\", \"get\"]", "completion": "This yaml code Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that restricts its subject to only get or update a ConfigMap named my-configmap:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that restricts its subject to only get or update a ConfigMap named my-configmap:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19988"} {"prompt": "Explain this bash code:\n\nkubectl get configmaps --field-selector=metadata.name=my-configmap", "completion": "This bash code Using RBAC Authorization | If you restrict list or watch by resourceName, clients must include a metadata.name field selector in their list or watch request that matches the specified resourceName in order to be authorized; for example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | If you restrict list or watch by resourceName, clients must include a metadata.name field selector in their list or watch request that matches the specified resourceName in order to be authorized; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19989"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE\nrules:\n- apiGroups: [\"example.com\"]\n resources: [\"*\"]\n verbs: [\"*\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. 
The aggre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19990"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE\nrules:\n- apiGroups: [\"example.com\"]\n resources: [\"*\"]\n verbs: [\"*\"]", "completion": "This yaml code Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. The aggre", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. The aggre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19991"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring", "completion": "aggregationRule:\n clusterRoleSelectors:\n - matchLabels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\nrules: [] # The control plane automatically fills in the rules", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. 
Here is an example tha", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19992"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring\naggregationRule:\n clusterRoleSelectors:\n - matchLabels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\nrules: [] # The control plane automatically fills in the rules", "completion": "This yaml code Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. Here is an example tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. Here is an example tha", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19993"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring-endpoints\n labels:\n rbac.example.com/aggregate-to-monitoring: \"true\"", "completion": "# When you create the \"monitoring-endpoints\" ClusterRole,\n# the rules below will be added to the \"monitoring\" ClusterRole.\nrules:\n- apiGroups: [\"\"]\n resources: [\"services\", \"endpointslices\", \"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. 
This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19994"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring-endpoints\n labels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\n# When you create the \"monitoring-endpoints\" ClusterRole,\n# the rules below will be added to the \"monitoring\" ClusterRole.\nrules:\n- apiGroups: [\"\"]\n resources: [\"services\", \"endpointslices\", \"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. 
This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19995"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: aggregate-cron-tabs-edit\n labels:\n # Add these permissions to the \"admin\" and \"edit\" default roles.\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: aggregate-cron-tabs-view\n labels:\n # Add these permissions to the \"view\" default role.\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19996"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: aggregate-cron-tabs-edit\n labels:\n # Add these permissions to the \"admin\" and \"edit\" default roles.\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]\n---\nkind: ClusterRole\napiVersion: 
rbac.authorization.k8s.io/v1\nmetadata:\n name: aggregate-cron-tabs-view\n labels:\n # Add these permissions to the \"view\" default role.\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19997"} {"prompt": "Complete this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the resource part of their URL) in the \"apps\" API groups:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_19998"} {"prompt": "Explain this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the resource part of their URL) in the \"apps\" API groups:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with 
\"deployments\" in the resource part of their URL) in the \"apps\" API groups:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_19999"} {"prompt": "Complete this yaml code:\n\nrules:\n- apiGroups: [\"apps\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Deployment\n # objects is \"deployments\"\n resources: [\"deployments\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20000"} {"prompt": "Explain this yaml code:\n\nrules:\n- apiGroups: [\"apps\"]\n #\n # at the HTTP level, the name of the resource for accessing Deployment\n # objects is \"deployments\"\n resources: [\"deployments\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This yaml code Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20001"} {"prompt": "Complete this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]", "completion": " verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"batch\"]\n #\n # at the HTTP level, the name of the resource for accessing Job\n # objects is \"jobs\"\n 
resources: [\"jobs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20002"} {"prompt": "Explain this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"batch\"]\n #\n # at the HTTP level, the name of the resource for accessing Job\n # objects is \"jobs\"\n resources: [\"jobs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This yaml code Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20003"} {"prompt": "Complete this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap", "completion": " # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-config\"]\n verbs: [\"get\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", 
"source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20004"} {"prompt": "Explain this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-config\"]\n verbs: [\"get\"]", "completion": "This yaml code Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20005"} {"prompt": "Complete this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Node\n # objects is \"nodes\"\n resources: [\"nodes\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20006"} {"prompt": "Explain this yaml code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Node\n # objects is \"nodes\"\n resources: [\"nodes\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole 
bound with a ClusterRoleBinding to be effective):", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20007"} {"prompt": "Explain this yaml code:\n\nrules:\n- nonResourceURLs: [\"/healthz\", \"/healthz/*\"] # '*' in a nonResourceURL is a suffix glob match\n verbs: [\"get\", \"post\"]", "completion": "This yaml code Using RBAC Authorization | A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20008"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: User", "completion": " name: \"alice@example.com\"\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For a group named frontend-admins:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20009"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: User\n name: \"alice@example.com\"\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For a group named frontend-admins:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For a group named frontend-admins:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20010"} {"prompt": "Complete 
this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: \"frontend-admins\"\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20011"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: \"frontend-admins\"\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20012"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: ServiceAccount", "completion": " name: default\n namespace: kube-system", "language": "yaml", "type": "service-definition", "context": "Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20013"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: ServiceAccount\n name: default\n namespace: kube-system", "completion": "This yaml code Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "language": "yaml", "type": "service-definition_explanation", "context": "Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20014"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:serviceaccounts:qa\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", 
"type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all service accounts in any namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20015"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:serviceaccounts:qa\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all service accounts in any namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all service accounts in any namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20016"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:serviceaccounts\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all authenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20017"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:serviceaccounts\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all authenticated users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all authenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20018"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:authenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all unauthenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20019"} {"prompt": "Explain 
this yaml code:\n\nsubjects:\n- kind: Group\n name: system:authenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all unauthenticated users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all unauthenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20020"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20021"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20022"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group\n name: system:authenticated", "completion": " apiGroup: rbac.authorization.k8s.io\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. 
Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20023"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:authenticated\n apiGroup: rbac.authorization.k8s.io\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20024"} {"prompt": "Complete this yaml code:\n\nmetadata:\n labels:", "completion": " rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. 
This role does not allow write access", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20025"} {"prompt": "Explain this yaml code:\n\nmetadata:\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"", "completion": "This yaml code Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20026"} {"prompt": "Explain this text code:\n\nsystem:controller:clusterrole-aggregation-controller", "completion": "This text code Using RBAC Authorization | The RBAC API prevents users from escalating privileges by editing roles or role bindings. Because this is enforced at the API level, it applies even when the RBAC authorizer is not in use.", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | The RBAC API prevents users from escalating privileges by editing roles or role bindings. 
Because this is enforced at the API level, it applies even when the RBAC authorizer is not in use.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20027"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: role-grantor\nrules:\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"rolebindings\"]\n verbs: [\"create\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"clusterroles\"]\n verbs: [\"bind\"]\n # omit resourceNames to allow binding any ClusterRole\n resourceNames: [\"admin\",\"edit\",\"view\"]", "completion": "---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: role-grantor-binding\n namespace: user-1-namespace\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: role-grantor\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: user-1", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. 
To bootstrap initial roles and role bindings:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20028"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: role-grantor\nrules:\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"rolebindings\"]\n verbs: [\"create\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"clusterroles\"]\n verbs: [\"bind\"]\n # omit resourceNames to allow binding any ClusterRole\n resourceNames: [\"admin\",\"edit\",\"view\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: role-grantor-binding\n namespace: user-1-namespace\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: role-grantor\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: user-1", "completion": "This yaml code Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. To bootstrap initial roles and role bindings:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. 
To bootstrap initial roles and role bindings:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20029"} {"prompt": "Explain this shell code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This shell code Using RBAC Authorization | Create a Role named \"pod-reader\" with resourceNames specified:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a Role named \"pod-reader\" with resourceNames specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20030"} {"prompt": "Explain this shell code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This shell code Using RBAC Authorization | Create a Role named \"foo\" with apiGroups specified:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a Role named \"foo\" with apiGroups specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20031"} {"prompt": "Explain this shell code:\n\nkubectl create role foo --verb=get,list,watch --resource=replicasets.apps", "completion": "This shell code Using RBAC Authorization | Create a Role named \"foo\" with subresource permissions:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a Role named \"foo\" with subresource permissions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20032"} {"prompt": "Explain this shell code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This shell code Using RBAC Authorization | Create a Role named \"my-component-lease-holder\" with permissions to get/update a resource with a 
specific name:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a Role named \"my-component-lease-holder\" with permissions to get/update a resource with a specific name:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20033"} {"prompt": "Explain this shell code:\n\nkubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component", "completion": "This shell code Using RBAC Authorization | Creates a ClusterRole. Examples:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Creates a ClusterRole. Examples:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20034"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This shell code Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" with resourceNames specified:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" with resourceNames specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20035"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This shell code Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroups specified:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroups specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20036"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrole 
foo --verb=get,list,watch --resource=replicasets.apps", "completion": "This shell code Using RBAC Authorization | Create a ClusterRole named \"foo\" with subresource permissions:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with subresource permissions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20037"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This shell code Using RBAC Authorization | Create a ClusterRole named \"foo\" with nonResourceURL specified:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with nonResourceURL specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20038"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This shell code Using RBAC Authorization | Create a ClusterRole named \"monitoring\" with an aggregationRule specified:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"monitoring\" with an aggregationRule specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20039"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This shell code Using RBAC Authorization | Grants a Role or ClusterRole within a specific namespace. Examples:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Grants a Role or ClusterRole within a specific namespace. 
Examples:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20040"} {"prompt": "Explain this shell code:\n\nkubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme", "completion": "This shell code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"admin\" ClusterRole to a user named \"bob\":", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"admin\" ClusterRole to a user named \"bob\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20041"} {"prompt": "Explain this shell code:\n\nkubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme", "completion": "This shell code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to the service account in the namespace \"acme\" named \"myapp\":", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to the service account in the namespace \"acme\" named \"myapp\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20042"} {"prompt": "Explain this shell code:\n\nkubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme", "completion": "This shell code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to a service account in the namespace \"myappnamespace\" 
named \"myapp\":", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to a service account in the namespace \"myappnamespace\" named \"myapp\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20043"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root", "completion": "This shell code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"cluster-admin\" ClusterRole to a user named \"root\":", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"cluster-admin\" ClusterRole to a user named \"root\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20044"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy", "completion": "This shell code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"system:node-proxier\" ClusterRole to a user named \"system:kube-proxy\":", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"system:node-proxier\" ClusterRole to a user named \"system:kube-proxy\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20045"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp", "completion": "This shell code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"view\" ClusterRole to a service account named \"myapp\" in the namespace \"acme\":", "language": "shell", "type": 
"code-example_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"view\" ClusterRole to a service account named \"myapp\" in the namespace \"acme\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20046"} {"prompt": "Explain this shell code:\n\nkubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client", "completion": "This shell code Using RBAC Authorization | Test applying a manifest file of RBAC objects, displaying changes that would be made:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Test applying a manifest file of RBAC objects, displaying changes that would be made:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20047"} {"prompt": "Explain this shell code:\n\nkubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions", "completion": "This shell code Using RBAC Authorization | Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings):", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20048"} {"prompt": "Complete this shell code:\n\nkubectl create rolebinding my-sa-view \\\n --clusterrole=view \\", "completion": " --serviceaccount=my-namespace:my-sa \\\n --namespace=my-namespace", "language": "shell", "type": "code-example", "context": "Using RBAC Authorization | Grant a role to an application-specific 
service account in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20049"} {"prompt": "Explain this shell code:\n\nkubectl create rolebinding my-sa-view \\\n --clusterrole=view \\\n --serviceaccount=my-namespace:my-sa \\\n --namespace=my-namespace", "completion": "This shell code Using RBAC Authorization | Grant a role to an application-specific service account (best practice)", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Grant a role to an application-specific service account (best practice)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20050"} {"prompt": "Complete this shell code:\n\nkubectl create rolebinding default-view \\\n --clusterrole=view \\", "completion": " --serviceaccount=my-namespace:default \\\n --namespace=my-namespace", "language": "shell", "type": "code-example", "context": "Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20051"} {"prompt": "Explain this shell code:\n\nkubectl create rolebinding default-view \\\n --clusterrole=view \\\n --serviceaccount=my-namespace:default \\\n --namespace=my-namespace", "completion": "This shell code Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Many add-ons run as the \"default\" service account in the kube-system namespace. 
To allow those add-ons to run with super-user access, grant cluster-admin permissions to the \"default\" service account i", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20052"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrolebinding add-on-cluster-admin \\\n --clusterrole=cluster-admin \\\n --serviceaccount=kube-system:default", "completion": "This shell code Using RBAC Authorization | Many add-ons run as the \"default\" service account in the kube-system namespace. To allow those add-ons to run with super-user access, grant cluster-admin permissions to the \"default\" service account in the kube-system namespace:", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Many add-ons run as the \"default\" service account in the kube-system namespace. To allow those add-ons to run with super-user access, grant cluster-admin permissions to the \"default\" service account in the kube-system namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20053"} {"prompt": "Complete this shell code:\n\nkubectl create rolebinding serviceaccounts-view \\\n --clusterrole=view \\", "completion": " --group=system:serviceaccounts:my-namespace \\\n --namespace=my-namespace", "language": "shell", "type": "code-example", "context": "Using RBAC Authorization | Grant a role to all service accounts in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20054"} {"prompt": "Explain this shell code:\n\nkubectl create rolebinding serviceaccounts-view \\\n --clusterrole=view \\\n --group=system:serviceaccounts:my-namespace \\\n --namespace=my-namespace", "completion": "This shell code Using RBAC Authorization | Grant a role to all service accounts in a namespace", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Grant a role to all service accounts in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20055"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrolebinding serviceaccounts-view \\\n --clusterrole=view 
\\\n --group=system:serviceaccounts", "completion": "This shell code Using RBAC Authorization | Grant a limited role to all service accounts cluster-wide (discouraged)", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Grant a limited role to all service accounts cluster-wide (discouraged)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20056"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrolebinding serviceaccounts-cluster-admin \\\n --clusterrole=cluster-admin \\\n --group=system:serviceaccounts", "completion": "This shell code Using RBAC Authorization | Grant super-user access to all service accounts cluster-wide (strongly discouraged)", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | Grant super-user access to all service accounts cluster-wide (strongly discouraged)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20057"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. 
This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise\n be accessible, and can circumvent network policies or security controls", "completion": " intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. \n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Clusters that originally ran older Kubernetes versions often used permissive ABAC policies, including granting full API access to all service accounts.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20058"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise\n be accessible, and can circumvent network policies or security controls\n intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. 
\n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "completion": "This yaml code Using RBAC Authorization | Clusters that originally ran older Kubernetes versions often used permissive ABAC policies, including granting full API access to all service accounts.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Clusters that originally ran older Kubernetes versions often used permissive ABAC policies, including granting full API access to all service accounts.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20059"} {"prompt": "Explain this shell code:\n\n--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json", "completion": "This shell code Using RBAC Authorization | To explain that first command line option in detail: if earlier authorizers, such as Node, deny a request, then the RBAC authorizer attempts to authorize the API request. If RBAC also denies that API", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | To explain that first command line option in detail: if earlier authorizers, such as Node, deny a request, then the RBAC authorizer attempts to authorize the API request. 
If RBAC also denies that API", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20060"} {"prompt": "Complete this shell code:\n\nkubectl create clusterrolebinding permissive-binding \\\n --clusterrole=cluster-admin \\", "completion": " --user=admin \\\n --user=kubelet \\\n --group=system:serviceaccounts", "language": "shell", "type": "code-example", "context": "Using RBAC Authorization | After you have transitioned to use RBAC, you should adjust the access controls for your cluster to ensure that these meet your information security needs.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20061"} {"prompt": "Explain this shell code:\n\nkubectl create clusterrolebinding permissive-binding \\\n --clusterrole=cluster-admin \\\n --user=admin \\\n --user=kubelet \\\n --group=system:serviceaccounts", "completion": "This shell code Using RBAC Authorization | After you have transitioned to use RBAC, you should adjust the access controls for your cluster to ensure that these meet your information security needs.", "language": "shell", "type": "code-example_explanation", "context": "Using RBAC Authorization | After you have transitioned to use RBAC, you should adjust the access controls for your cluster to ensure that these meet your information security needs.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20062"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:", "completion": " ...\n - type: RBAC\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": 
"k8s_00637_code_completion_20063"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthorizationConfiguration\nauthorizers:\n ...\n - type: RBAC\n ...", "completion": "This yaml code Using RBAC Authorization | To enable RBAC, start the API server with an authorization configuration file that includes the RBAC authorizer; for example:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | To enable RBAC, start the API server with an authorization configuration file that includes the RBAC authorizer; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20064"} {"prompt": "Explain this text code:\n\nkube-apiserver --authorization-mode=...,RBAC --other-options --more-options", "completion": "This text code Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Or, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20065"} {"prompt": "Complete this yaml code:\n\naccess/simple-role.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. 
Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20066"} {"prompt": "Explain this yaml code:\n\naccess/simple-role.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | Here's an example Role in the \"default\" namespace that can be used to grant read access to pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Here's an example Role in the \"default\" namespace that can be used to grant read access to pods:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20067"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A ClusterRole can be used to grant the same permissions as a Role. 
Because ClusterRoles are cluster-scoped, you can also use them to grant access to:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20068"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n resources: [\"pods\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | Here's an example Role in the \"default\" namespace that can be used to grant read access to pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Here's an example Role in the \"default\" namespace that can be used to grant read access to pods:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20069"} {"prompt": "Complete this yaml code:\n\naccess/simple-clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: secret-reader", "completion": "rules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Here is an example of a ClusterRole that can be used to grant read access to secrets in any particular namespace, or across all namespaces, depending on how it is bound:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20070"} {"prompt": "Explain this yaml code:\n\naccess/simple-clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: 
secret-reader\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20071"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: secret-reader\nrules:", "completion": "- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20072"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n # \"namespace\" omitted since ClusterRoles are not namespaced\n name: secret-reader\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Secret\n # objects is \"secrets\"\n resources: [\"secrets\"]\n verbs: [\"get\", \"watch\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | The name of a Role or a ClusterRole object must be a valid path segment name.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The name of a Role or a 
ClusterRole object must be a valid path segment name.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20073"} {"prompt": "Complete this yaml code:\n\naccess/simple-rolebinding-with-role.yaml\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: read-pods\n namespace: default\nsubjects:", "completion": "# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. 
This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20074"} {"prompt": "Explain this yaml code:\n\naccess/simple-rolebinding-with-role.yaml\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: read-pods\n namespace: default\nsubjects:\n# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. This kind of reference lets you define a set of com", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. 
This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20075"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: read-pods\n namespace: default\nsubjects:", "completion": "# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. 
This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20076"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n name: read-pods\n namespace: default\nsubjects:\n# You can specify more than one \"subject\"\n- kind: User\n name: jane # \"name\" is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n # \"roleRef\" specifies the binding to a Role / ClusterRole\n kind: Role #this must be Role or ClusterRole\n name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. This kind of reference lets you define a set of com", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. 
This kind of reference lets you define a set of com", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20077"} {"prompt": "Complete this yaml code:\n\naccess/simple-rolebinding-with-clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.", "completion": " # This only grants permissions within the \"development\" namespace.\n namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20078"} {"prompt": "Explain this yaml code:\n\naccess/simple-rolebinding-with-clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.\n # This only grants permissions within the \"development\" namespace.\n namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20079"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.\n # This only grants permissions within the \"development\" namespace.", "completion": " namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20080"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n name: read-secrets\n #\n # The namespace of the RoleBinding determines where the permissions are granted.\n # This only grants permissions within the \"development\" namespace.\n namespace: development\nsubjects:\n- kind: User\n name: dave # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | This RoleBinding allows the user \"dave\" to read secrets, but only within the \"development\" namespace; it references a ClusterRole named \"secret-reader\", which must already exist.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | To grant permissions across a whole cluster, you can use a ClusterRoleBinding. 
The following ClusterRoleBinding allows any user in the group \"manager\" to read secrets in any namespace.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20081"} {"prompt": "Complete this yaml code:\n\naccess/simple-clusterrolebinding.yaml\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:", "completion": "- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20082"} {"prompt": "Explain this yaml code:\n\naccess/simple-clusterrolebinding.yaml\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:\n- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | This ClusterRoleBinding allows anyone in the \"manager\" group to read secrets in any namespace, granting the \"secret-reader\" ClusterRole cluster-wide.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20083"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:", "completion": "- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20084"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n name: read-secrets-global\nsubjects:\n- kind: Group\n name: manager # Name is case sensitive\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: secret-reader\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | This ClusterRoleBinding allows anyone in the \"manager\" group to read secrets in any namespace, granting the \"secret-reader\" ClusterRole cluster-wide.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20085"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: pod-and-pod-logs-reader\nrules:\n- apiGroups: [\"\"]\n resources: [\"pods\", \"pods/log\"]\n verbs: [\"get\", \"list\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that r", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20086"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: pod-and-pod-logs-reader\nrules:\n- apiGroups: [\"\"]\n resources: [\"pods\", \"pods/log\"]\n verbs: [\"get\", \"list\"]", "completion": "This yaml code Using RBAC Authorization | This Role grants read access (\"get\", \"list\") to Pods and to their \"log\" subresource (pods/log) in the \"default\" namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. 
Here is an example that r", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20087"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: configmap-updater\nrules:", "completion": "- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-configmap\"]\n verbs: [\"update\", \"get\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Rather than referring to individual resources, apiGroups, and verbs, you can use the wildcard * symbol to refer to all such objects. For nonResourceURLs, you can use the wildcard * as a suffix glob ma", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20088"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: configmap-updater\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-configmap\"]\n verbs: [\"update\", \"get\"]", "completion": "This yaml code Using RBAC Authorization | This Role uses resourceNames to restrict the \"update\" and \"get\" verbs to a single ConfigMap named \"my-configmap\" in the \"default\" namespace.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Rather than referring to individual resources, apiGroups, and verbs, you can use the wildcard * symbol to refer to all such objects. 
For nonResourceURLs, you can use the wildcard * as a suffix glob ma", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20089"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default", "completion": " name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE\nrules:\n- apiGroups: [\"example.com\"]\n resources: [\"*\"]\n verbs: [\"*\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. The aggre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20090"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n namespace: default\n name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE\nrules:\n- apiGroups: [\"example.com\"]\n resources: [\"*\"]\n verbs: [\"*\"]", "completion": "This yaml code Using RBAC Authorization | This Role uses the wildcard * to allow every verb on every resource in the \"example.com\" API group within the \"default\" namespace; as its comment warns, it is only an example and should not be used.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | You can aggregate several ClusterRoles into one combined ClusterRole. A controller, running as part of the cluster control plane, watches for ClusterRole objects with an aggregationRule set. 
The aggre", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20091"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring", "completion": "aggregationRule:\n clusterRoleSelectors:\n - matchLabels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\nrules: [] # The control plane automatically fills in the rules", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. Here is an example tha", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20092"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring\naggregationRule:\n clusterRoleSelectors:\n - matchLabels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\nrules: [] # The control plane automatically fills in the rules", "completion": "This yaml code Using RBAC Authorization | This aggregated ClusterRole declares an aggregationRule and leaves its rules empty; the control plane fills in the rules from every ClusterRole labeled rbac.example.com/aggregate-to-monitoring: \"true\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | If you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole. 
Here is an example tha", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20093"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring-endpoints\n labels:\n rbac.example.com/aggregate-to-monitoring: \"true\"", "completion": "# When you create the \"monitoring-endpoints\" ClusterRole,\n# the rules below will be added to the \"monitoring\" ClusterRole.\nrules:\n- apiGroups: [\"\"]\n resources: [\"services\", \"endpointslices\", \"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20094"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: monitoring-endpoints\n labels:\n rbac.example.com/aggregate-to-monitoring: \"true\"\n# When you create the \"monitoring-endpoints\" ClusterRole,\n# the rules below will be added to the \"monitoring\" ClusterRole.\nrules:\n- apiGroups: [\"\"]\n resources: [\"services\", \"endpointslices\", \"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | Because this ClusterRole carries the rbac.example.com/aggregate-to-monitoring: \"true\" label, its read permissions (\"get\", \"list\", \"watch\") on services, endpointslices, and pods are added to the aggregated \"monitoring\" ClusterRole.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The default user-facing roles use ClusterRole aggregation. 
This lets you, as a cluster administrator, include rules for custom resources, such as those served by CustomResourceDefinitions or aggregate", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20095"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: aggregate-cron-tabs-edit\n labels:\n # Add these permissions to the \"admin\" and \"edit\" default roles.\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: aggregate-cron-tabs-view\n labels:\n # Add these permissions to the \"view\" default role.\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20096"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: aggregate-cron-tabs-edit\n labels:\n # Add these permissions to the \"admin\" and \"edit\" default roles.\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]\n---\nkind: ClusterRole\napiVersion: 
rbac.authorization.k8s.io/v1\nmetadata:\n name: aggregate-cron-tabs-view\n labels:\n # Add these permissions to the \"view\" default role.\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n resources: [\"crontabs\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This yaml code Using RBAC Authorization | These two ClusterRoles use aggregation labels to add CronTab permissions from the \"stable.example.com\" API group to the default roles: read/write to \"admin\" and \"edit\", read-only to \"view\".", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | The following examples are excerpts from Role or ClusterRole objects, showing only the rules section.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20097"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the resource part of their URL) in the \"apps\" API groups:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20098"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This text code Using RBAC Authorization | Allow reading Pod resources in the core API group (at the HTTP level: objects with \"pods\" in the resource part of their URL), using the \"get\", \"list\", and \"watch\" verbs.", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the 
resource part of their URL) in the \"apps\" API groups:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20099"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"apps\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Deployment\n # objects is \"deployments\"\n resources: [\"deployments\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20100"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"apps\"]\n #\n # at the HTTP level, the name of the resource for accessing Deployment\n # objects is \"deployments\"\n resources: [\"deployments\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This text code Using RBAC Authorization | Allow reading/writing Deployments (at the HTTP level: objects with \"deployments\" in the resource part of their URL) in the \"apps\" API group.", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20101"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]", "completion": " verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"batch\"]\n #\n # at the HTTP level, the name of the resource for accessing Job\n # objects is \"jobs\"\n resources: [\"jobs\"]\n verbs: 
[\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20102"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Pod\n # objects is \"pods\"\n resources: [\"pods\"]\n verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"batch\"]\n #\n # at the HTTP level, the name of the resource for accessing Job\n # objects is \"jobs\"\n resources: [\"jobs\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This text code Using RBAC Authorization | Allow reading Pods in the core API group, as well as reading or writing Job resources in the \"batch\" API group.", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20103"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap", "completion": " # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-config\"]\n verbs: [\"get\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20104"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing ConfigMap\n # objects is \"configmaps\"\n resources: [\"configmaps\"]\n resourceNames: [\"my-config\"]\n verbs: [\"get\"]", "completion": "This text code Using RBAC Authorization | Allow reading a ConfigMap named \"my-config\" (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace).", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20105"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"\"]\n #", "completion": " # at the HTTP level, the name of the resource for accessing Node\n # objects is \"nodes\"\n resources: [\"nodes\"]\n verbs: [\"get\", \"list\", \"watch\"]", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20106"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"\"]\n #\n # at the HTTP level, the name of the resource for accessing Node\n # objects is \"nodes\"\n resources: [\"nodes\"]\n verbs: [\"get\", \"list\", \"watch\"]", "completion": "This text code Using RBAC Authorization | Allow reading the resource \"nodes\" in the core group (because a Node is cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding to be effective).", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding to be effective):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20107"} {"prompt": "Explain this text code:\n\nrules:\n- nonResourceURLs: [\"/healthz\", \"/healthz/*\"] # '*' in a nonResourceURL is a suffix glob match\n verbs: [\"get\", \"post\"]", "completion": "This text code Using RBAC Authorization | Allow GET and POST requests to the non-resource endpoint /healthz and all subpaths, where '*' is a suffix glob match (must be in a ClusterRole bound with a ClusterRoleBinding to be effective).", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20108"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: User", "completion": " name: \"alice@example.com\"\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For a group named frontend-admins:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20109"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: User\n name: \"alice@example.com\"\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For a user named \"alice@example.com\":", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For a group named frontend-admins:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20110"} {"prompt": "Complete this yaml 
code:\n\nsubjects:\n- kind: Group", "completion": " name: \"frontend-admins\"\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20111"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: \"frontend-admins\"\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For a group named \"frontend-admins\":", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20112"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: ServiceAccount", "completion": " name: default\n namespace: kube-system", "language": "yaml", "type": "service-definition", "context": "Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20113"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: ServiceAccount\n name: default\n namespace: kube-system", "completion": "This yaml code Using RBAC Authorization | For the default service account in the \"kube-system\" namespace:", "language": "yaml", "type": "service-definition_explanation", "context": "Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20114"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:serviceaccounts:qa\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Using RBAC Authorization | For all service accounts in any namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20115"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:serviceaccounts:qa\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all service accounts in the \"qa\" namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all service accounts in any namespace:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20116"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:serviceaccounts\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all authenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20117"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:serviceaccounts\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all service accounts in any namespace:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all authenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20118"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:authenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all unauthenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20119"} {"prompt": "Explain this yaml 
code:\n\nsubjects:\n- kind: Group\n name: system:authenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all authenticated users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all unauthenticated users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20120"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group", "completion": " name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | For all users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20121"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all unauthenticated users:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | For all users:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20122"} {"prompt": "Complete this yaml code:\n\nsubjects:\n- kind: Group\n name: system:authenticated", "completion": " apiGroup: rbac.authorization.k8s.io\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. 
Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20123"} {"prompt": "Explain this yaml code:\n\nsubjects:\n- kind: Group\n name: system:authenticated\n apiGroup: rbac.authorization.k8s.io\n- kind: Group\n name: system:unauthenticated\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Using RBAC Authorization | For all users, combining the system:authenticated and system:unauthenticated groups:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | API servers create a set of default ClusterRole and ClusterRoleBinding objects. Many of these are system: prefixed, which indicates that the resource is directly managed by the cluster control plane.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20124"} {"prompt": "Complete this text code:\n\nmetadata:\n labels:", "completion": " rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"", "language": "text", "type": "code-example", "context": "Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. 
This role does not allow write access", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20125"} {"prompt": "Explain this text code:\n\nmetadata:\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: \"true\"\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n rbac.authorization.k8s.io/aggregate-to-view: \"true\"", "completion": "This text code Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20126"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: role-grantor\nrules:\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"rolebindings\"]\n verbs: [\"create\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"clusterroles\"]\n verbs: [\"bind\"]\n # omit resourceNames to allow binding any ClusterRole\n resourceNames: [\"admin\",\"edit\",\"view\"]", "completion": "---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: role-grantor-binding\n namespace: user-1-namespace\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: role-grantor\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: user-1", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user 
to grant permissions they do not yet have. To bootstrap initial roles and role bindings:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20127"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: role-grantor\nrules:\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"rolebindings\"]\n verbs: [\"create\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n resources: [\"clusterroles\"]\n verbs: [\"bind\"]\n # omit resourceNames to allow binding any ClusterRole\n resourceNames: [\"admin\",\"edit\",\"view\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: role-grantor-binding\n namespace: user-1-namespace\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: role-grantor\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: User\n name: user-1", "completion": "This yaml code Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. To bootstrap initial roles and role bindings:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | When bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have. 
To bootstrap initial roles and role bindings:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20128"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods", "completion": "This bash code Using RBAC Authorization | Create a Role named \"pod-reader\" that allows users to perform get, watch and list on pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"pod-reader\" that allows users to perform get, watch and list on pods:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20129"} {"prompt": "Explain this bash code:\n\nkubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code Using RBAC Authorization | Create a Role named \"pod-reader\" with resourceNames specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"pod-reader\" with resourceNames specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20130"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=replicasets.apps", "completion": "This bash code Using RBAC Authorization | Create a Role named \"foo\" with apiGroups specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"foo\" with apiGroups specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20131"} {"prompt": "Explain this bash code:\n\nkubectl create role foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code Using RBAC Authorization | Create a Role named \"foo\" with subresource permissions:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"foo\" with subresource permissions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20132"} {"prompt": "Explain this bash code:\n\nkubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component", "completion": "This bash code Using RBAC Authorization | Create a Role named \"my-component-lease-holder\" with permissions to get/update a resource with a specific name:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a Role named \"my-component-lease-holder\" with permissions to get/update a resource with a specific name:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20133"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" that allows users to perform get, watch and list on pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" that allows users to perform get, watch and list on pods:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20134"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" with resourceNames specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"pod-reader\" with resourceNames specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20135"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole 
foo --verb=get,list,watch --resource=replicasets.apps", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroups specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with apiGroups specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20136"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"foo\" with subresource permissions:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with subresource permissions:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20137"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"foo\" with nonResourceURL specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"foo\" with nonResourceURL specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20138"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "completion": "This bash code Using RBAC Authorization | Create a ClusterRole named \"monitoring\" with an aggregationRule specified:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Create a ClusterRole named \"monitoring\" with an aggregationRule specified:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20139"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme", "completion": "This bash code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"admin\" ClusterRole to a user named \"bob\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"admin\" ClusterRole to a user named \"bob\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20140"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme", "completion": "This bash code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to the service account in the namespace \"acme\" named \"myapp\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to the service account in the namespace \"acme\" named \"myapp\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20141"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme", "completion": "This bash code Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to a service account in the namespace \"myappnamespace\" named \"myapp\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to a service account in the namespace \"myappnamespace\" named \"myapp\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20142"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root", "completion": "This bash code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"cluster-admin\" ClusterRole to a user named \"root\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"cluster-admin\" ClusterRole to a user named \"root\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20143"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy", "completion": "This bash code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"system:node-proxier\" ClusterRole to a user named \"system:kube-proxy\":", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"system:node-proxier\" ClusterRole to a user named \"system:kube-proxy\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20144"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp", "completion": "This bash code Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"view\" ClusterRole to a service account named \"myapp\" in the namespace \"acme\":", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Using RBAC Authorization | Across the entire cluster, grant the permissions in the \"view\" ClusterRole to a service account named \"myapp\" in the namespace \"acme\":", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20145"} {"prompt": "Explain this bash code:\n\nkubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client", "completion": "This bash code Using RBAC Authorization | Test applying a manifest file of RBAC objects, displaying changes that would be made:", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Test applying a manifest file of RBAC objects, displaying changes that would be made:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20146"} {"prompt": "Explain this bash code:\n\nkubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions", "completion": "This bash code Using RBAC Authorization | Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings):", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20147"} {"prompt": "Complete this bash code:\n\nkubectl create rolebinding my-sa-view \\\n --clusterrole=view \\", "completion": " --serviceaccount=my-namespace:my-sa \\\n --namespace=my-namespace", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | Grant a role to the 
\"default\" service account in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20148"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding my-sa-view \\\n --clusterrole=view \\\n --serviceaccount=my-namespace:my-sa \\\n --namespace=my-namespace", "completion": "This bash code Using RBAC Authorization | Grant a role to an application-specific service account (best practice)", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a role to an application-specific service account (best practice)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20149"} {"prompt": "Complete this bash code:\n\nkubectl create rolebinding default-view \\\n --clusterrole=view \\", "completion": " --serviceaccount=my-namespace:default \\\n --namespace=my-namespace", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20150"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding default-view \\\n --clusterrole=view \\\n --serviceaccount=my-namespace:default \\\n --namespace=my-namespace", "completion": "This bash code Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a role to the \"default\" service account in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20151"}
{"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding add-on-cluster-admin \\\n --clusterrole=cluster-admin \\\n --serviceaccount=kube-system:default", "completion": "This bash code Using RBAC Authorization | Many add-ons run as the \"default\" service account in the kube-system namespace. To allow those add-ons to run with super-user access, grant cluster-admin permissions to the \"default\" service account i", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Many add-ons run as the \"default\" service account in the kube-system namespace. To allow those add-ons to run with super-user access, grant cluster-admin permissions to the \"default\" service account i", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20152"} {"prompt": "Complete this bash code:\n\nkubectl create rolebinding serviceaccounts-view \\\n --clusterrole=view \\", "completion": " --group=system:serviceaccounts:my-namespace \\\n --namespace=my-namespace", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | Grant a role to all service accounts in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20153"} {"prompt": "Explain this bash code:\n\nkubectl create rolebinding serviceaccounts-view \\\n --clusterrole=view \\\n --group=system:serviceaccounts:my-namespace \\\n --namespace=my-namespace", "completion": "This bash code Using RBAC Authorization | Grant a role to all service accounts in a namespace", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a role to all service accounts in a namespace", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20154"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding serviceaccounts-view \\\n --clusterrole=view 
\\\n --group=system:serviceaccounts", "completion": "This bash code Using RBAC Authorization | Grant a limited role to all service accounts cluster-wide (discouraged)", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant a limited role to all service accounts cluster-wide (discouraged)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20155"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding serviceaccounts-cluster-admin \\\n --clusterrole=cluster-admin \\\n --group=system:serviceaccounts", "completion": "This bash code Using RBAC Authorization | Grant super-user access to all service accounts cluster-wide (strongly discouraged)", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | Grant super-user access to all service accounts cluster-wide (strongly discouraged)", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20156"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. 
This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise", "completion": " be accessible, and can circumvent network policies or security controls\n intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. \n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-20", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20157"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise\n be accessible, and can circumvent network policies or security controls\n intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. 
\n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "completion": "This yaml code Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-20", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-20", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20158"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise\n be accessible, and can circumvent network policies or security controls", "completion": " intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. 
\n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-20", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20159"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n annotations:\n kubernetes.io/description: |-\n Add endpoints write permissions to the edit and admin roles. This was\n removed by default in 1.22 because of CVE-2021-25740. See\n https://issue.k8s.io/103675. This can allow writers to direct LoadBalancer\n or Ingress implementations to expose backend IPs that would not otherwise\n be accessible, and can circumvent network policies or security controls\n intended to prevent/isolate access to those backends.\n EndpointSlices were never included in the edit or admin roles, so there\n is nothing to restore for the EndpointSlice API. 
\n labels:\n rbac.authorization.k8s.io/aggregate-to-edit: \"true\"\n name: custom:aggregate-to-edit:endpoints # you can change this if you wish\nrules:\n - apiGroups: [\"\"]\n resources: [\"endpoints\"]\n verbs: [\"create\", \"delete\", \"deletecollection\", \"patch\", \"update\"]", "completion": "This yaml code Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-20", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Using RBAC Authorization | Kubernetes clusters created before Kubernetes v1.22 include write access to EndpointSlices (and the now-deprecated Endpoints API) in the aggregated \"edit\" and \"admin\" roles. As a mitigation for CVE-20", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20160"} {"prompt": "Explain this text code:\n\n--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json", "completion": "This text code Using RBAC Authorization | To explain that first command line option in detail: if earlier authorizers, such as Node, deny a request, then the RBAC authorizer attempts to authorize the API request. If RBAC also denies that API", "language": "text", "type": "code-example_explanation", "context": "Using RBAC Authorization | To explain that first command line option in detail: if earlier authorizers, such as Node, deny a request, then the RBAC authorizer attempts to authorize the API request. 
If RBAC also denies that API", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20161"} {"prompt": "Complete this bash code:\n\nkubectl create clusterrolebinding permissive-binding \\\n --clusterrole=cluster-admin \\", "completion": " --user=admin \\\n --user=kubelet \\\n --group=system:serviceaccounts", "language": "bash", "type": "kubectl-command", "context": "Using RBAC Authorization | After you have transitioned to use RBAC, you should adjust the access controls for your cluster to ensure that these meet your information security needs.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_completion_20162"} {"prompt": "Explain this bash code:\n\nkubectl create clusterrolebinding permissive-binding \\\n --clusterrole=cluster-admin \\\n --user=admin \\\n --user=kubelet \\\n --group=system:serviceaccounts", "completion": "This bash code Using RBAC Authorization | After you have transitioned to use RBAC, you should adjust the access controls for your cluster to ensure that these meet your information security needs.", "language": "bash", "type": "kubectl-command_explanation", "context": "Using RBAC Authorization | After you have transitioned to use RBAC, you should adjust the access controls for your cluster to ensure that these meet your information security needs.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/rbac/", "id": "k8s_00637_code_explain_20163"} {"prompt": "Complete this text code:\n\nruntime-endpoint: unix:///var/run/containerd/containerd.sock\nimage-endpoint: unix:///var/run/containerd/containerd.sock", "completion": "timeout: 10\ndebug: true", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | To learn more about crictl, refer to the crictl documentation.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20164"} {"prompt": 
"Explain this text code:\n\nruntime-endpoint: unix:///var/run/containerd/containerd.sock\nimage-endpoint: unix:///var/run/containerd/containerd.sock\ntimeout: 10\ndebug: true", "completion": "This text code Debugging Kubernetes nodes with crictl | To learn more about crictl, refer to the crictl documentation.", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | To learn more about crictl, refer to the crictl documentation.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20165"} {"prompt": "Complete this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE ATTEMPT\n926f1b5a1d33a About a minute ago Ready sh-84d7dcf559-4r2gq default 0", "completion": "4dccb216c4adb About a minute ago Ready nginx-65899c769f-wv2gp default 0\na86316e96fa89 17 hours ago Ready kube-proxy-gblk4 kube-system 0\n919630b8f81f1 17 hours ago Ready nvidia-device-plugin-zgbbv kube-system 0", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | List all pods:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20166"} {"prompt": "Explain this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE ATTEMPT\n926f1b5a1d33a About a minute ago Ready sh-84d7dcf559-4r2gq default 0\n4dccb216c4adb About a minute ago Ready nginx-65899c769f-wv2gp default 0\na86316e96fa89 17 hours ago Ready kube-proxy-gblk4 kube-system 0\n919630b8f81f1 17 hours ago Ready nvidia-device-plugin-zgbbv kube-system 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List all pods:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List all pods:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20167"} {"prompt": "Explain this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE 
ATTEMPT\n4dccb216c4adb 2 minutes ago Ready nginx-65899c769f-wv2gp default 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List pods by name:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List pods by name:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20168"} {"prompt": "Explain this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE ATTEMPT\n4dccb216c4adb 2 minutes ago Ready nginx-65899c769f-wv2gp default 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List pods by label:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List pods by label:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20169"} {"prompt": "Complete this text code:\n\nIMAGE TAG IMAGE ID SIZE\nbusybox latest 8c811b4aec35f 1.15MB", "completion": "k8s-gcrio.azureedge.net/hyperkube-amd64 v1.10.3 e179bbfe5d238 665MB\nk8s-gcrio.azureedge.net/pause-amd64 3.1 da86e6ba6ca19 742kB\nnginx latest cd5239a0906a6 109MB", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | List all images:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20170"} {"prompt": "Explain this text code:\n\nIMAGE TAG IMAGE ID SIZE\nbusybox latest 8c811b4aec35f 1.15MB\nk8s-gcrio.azureedge.net/hyperkube-amd64 v1.10.3 e179bbfe5d238 665MB\nk8s-gcrio.azureedge.net/pause-amd64 3.1 da86e6ba6ca19 742kB\nnginx latest cd5239a0906a6 109MB", "completion": "This text code Debugging Kubernetes nodes with crictl | List all images:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List all images:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", 
"id": "k8s_00638_code_explain_20171"} {"prompt": "Explain this text code:\n\nIMAGE TAG IMAGE ID SIZE\nnginx latest cd5239a0906a6 109MB", "completion": "This text code Debugging Kubernetes nodes with crictl | List images by repository:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List images by repository:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20172"} {"prompt": "Complete this text code:\n\nsha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\nsha256:e179bbfe5d238de6069f3b03fccbecc3fb4f2019af741bfff1233c4d7b2970c5", "completion": "sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\nsha256:cd5239a0906a6ccf0562354852fae04bc5b52d72a2aff9a871ddb6bd57553569", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | Only list image IDs:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20173"} {"prompt": "Explain this text code:\n\nsha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\nsha256:e179bbfe5d238de6069f3b03fccbecc3fb4f2019af741bfff1233c4d7b2970c5\nsha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\nsha256:cd5239a0906a6ccf0562354852fae04bc5b52d72a2aff9a871ddb6bd57553569", "completion": "This text code Debugging Kubernetes nodes with crictl | Only list image IDs:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | Only list image IDs:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20174"} {"prompt": "Complete this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 7 minutes ago Running sh 1", "completion": "9c5951df22c78 
busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 8 minutes ago Exited sh 0\n87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 8 minutes ago Running nginx 0\n1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 18 hours ago Running kube-proxy 0", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | List all containers:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20175"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 7 minutes ago Running sh 1\n9c5951df22c78 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 8 minutes ago Exited sh 0\n87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 8 minutes ago Running nginx 0\n1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 18 hours ago Running kube-proxy 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List all containers:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List all containers:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20176"} {"prompt": "Complete this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 6 minutes ago Running sh 1", "completion": "87d3992f84f74 
k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 17 hours ago Running kube-proxy 0", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20177"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 6 minutes ago Running sh 1\n87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 7 minutes ago Running nginx 0\n1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 17 hours ago Running kube-proxy 0", "completion": "This text code Debugging Kubernetes nodes with crictl | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20178"} {"prompt": "Explain this text code:\n\nbin dev etc home proc root sys tmp usr var", "completion": "This text code Debugging Kubernetes nodes with crictl | Get all container logs:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | Get all container logs:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20179"} {"prompt": "Explain this text code:\n\n10.240.0.96 - - [06/Jun/2018:02:45:49 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.96 - - [06/Jun/2018:02:45:50 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.96 - - [06/Jun/2018:02:45:51 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" 
\"curl/7.47.0\" \"-\"", "completion": "This text code Debugging Kubernetes nodes with crictl | Get only the latest N lines of logs:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | Get only the latest N lines of logs:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20180"} {"prompt": "Explain this text code:\n\n10.240.0.96 - - [06/Jun/2018:02:45:51 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"", "completion": "This text code Debugging Kubernetes nodes with crictl |", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl |", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20181"} {"prompt": "Complete this text code:\n\nruntime-endpoint: unix:///var/run/containerd/containerd.sock\nimage-endpoint: unix:///var/run/containerd/containerd.sock", "completion": "timeout: 10\ndebug: true", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | To learn more about crictl, refer to the crictl documentation.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20182"} {"prompt": "Explain this text code:\n\nruntime-endpoint: unix:///var/run/containerd/containerd.sock\nimage-endpoint: unix:///var/run/containerd/containerd.sock\ntimeout: 10\ndebug: true", "completion": "This text code Debugging Kubernetes nodes with crictl | To learn more about crictl, refer to the crictl documentation.", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | To learn more about crictl, refer to the crictl documentation.", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20183"} {"prompt": "Complete this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE 
ATTEMPT\n926f1b5a1d33a About a minute ago Ready sh-84d7dcf559-4r2gq default 0", "completion": "4dccb216c4adb About a minute ago Ready nginx-65899c769f-wv2gp default 0\na86316e96fa89 17 hours ago Ready kube-proxy-gblk4 kube-system 0\n919630b8f81f1 17 hours ago Ready nvidia-device-plugin-zgbbv kube-system 0", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | List pods by name:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20184"} {"prompt": "Explain this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE ATTEMPT\n926f1b5a1d33a About a minute ago Ready sh-84d7dcf559-4r2gq default 0\n4dccb216c4adb About a minute ago Ready nginx-65899c769f-wv2gp default 0\na86316e96fa89 17 hours ago Ready kube-proxy-gblk4 kube-system 0\n919630b8f81f1 17 hours ago Ready nvidia-device-plugin-zgbbv kube-system 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List pods by name:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List pods by name:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20185"} {"prompt": "Explain this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE ATTEMPT\n4dccb216c4adb 2 minutes ago Ready nginx-65899c769f-wv2gp default 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List pods by label:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List pods by label:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20186"} {"prompt": "Explain this text code:\n\nPOD ID CREATED STATE NAME NAMESPACE ATTEMPT\n4dccb216c4adb 2 minutes ago Ready nginx-65899c769f-wv2gp default 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List all images:", "language": "text", 
"type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List all images:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20187"} {"prompt": "Complete this text code:\n\nIMAGE TAG IMAGE ID SIZE\nbusybox latest 8c811b4aec35f 1.15MB", "completion": "k8s-gcrio.azureedge.net/hyperkube-amd64 v1.10.3 e179bbfe5d238 665MB\nk8s-gcrio.azureedge.net/pause-amd64 3.1 da86e6ba6ca19 742kB\nnginx latest cd5239a0906a6 109MB", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | List images by repository:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20188"} {"prompt": "Explain this text code:\n\nIMAGE TAG IMAGE ID SIZE\nbusybox latest 8c811b4aec35f 1.15MB\nk8s-gcrio.azureedge.net/hyperkube-amd64 v1.10.3 e179bbfe5d238 665MB\nk8s-gcrio.azureedge.net/pause-amd64 3.1 da86e6ba6ca19 742kB\nnginx latest cd5239a0906a6 109MB", "completion": "This text code Debugging Kubernetes nodes with crictl | List images by repository:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List images by repository:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20189"} {"prompt": "Explain this text code:\n\nIMAGE TAG IMAGE ID SIZE\nnginx latest cd5239a0906a6 109MB", "completion": "This text code Debugging Kubernetes nodes with crictl | Only list image IDs:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | Only list image IDs:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20190"} {"prompt": "Complete this text code:\n\nsha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\nsha256:e179bbfe5d238de6069f3b03fccbecc3fb4f2019af741bfff1233c4d7b2970c5", "completion": 
"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\nsha256:cd5239a0906a6ccf0562354852fae04bc5b52d72a2aff9a871ddb6bd57553569", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | List all containers:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20191"} {"prompt": "Explain this text code:\n\nsha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\nsha256:e179bbfe5d238de6069f3b03fccbecc3fb4f2019af741bfff1233c4d7b2970c5\nsha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e\nsha256:cd5239a0906a6ccf0562354852fae04bc5b52d72a2aff9a871ddb6bd57553569", "completion": "This text code Debugging Kubernetes nodes with crictl | List all containers:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List all containers:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20192"} {"prompt": "Complete this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 7 minutes ago Running sh 1", "completion": "9c5951df22c78 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 8 minutes ago Exited sh 0\n87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 8 minutes ago Running nginx 0\n1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 18 hours ago Running kube-proxy 0", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | List running containers:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20193"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME 
ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 7 minutes ago Running sh 1\n9c5951df22c78 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 8 minutes ago Exited sh 0\n87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 8 minutes ago Running nginx 0\n1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 18 hours ago Running kube-proxy 0", "completion": "This text code Debugging Kubernetes nodes with crictl | List running containers:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | List running containers:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20194"} {"prompt": "Complete this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 6 minutes ago Running sh 1", "completion": "87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 7 minutes ago Running nginx 0\n1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 17 hours ago Running kube-proxy 0", "language": "text", "type": "code-example", "context": "Debugging Kubernetes nodes with crictl | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_completion_20195"} {"prompt": "Explain this text code:\n\nCONTAINER ID IMAGE CREATED STATE NAME ATTEMPT\n1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 6 minutes ago Running sh 1\n87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 7 minutes ago Running nginx 0\n1941fb4da154f 
k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 17 hours ago Running kube-proxy 0", "completion": "This text code Debugging Kubernetes nodes with crictl | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20196"} {"prompt": "Explain this text code:\n\nbin dev etc home proc root sys tmp usr var", "completion": "This text code Debugging Kubernetes nodes with crictl | Get all container logs:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | Get all container logs:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20197"} {"prompt": "Explain this text code:\n\n10.240.0.96 - - [06/Jun/2018:02:45:49 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.96 - - [06/Jun/2018:02:45:50 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"\n10.240.0.96 - - [06/Jun/2018:02:45:51 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"", "completion": "This text code Debugging Kubernetes nodes with crictl | Get only the latest N lines of logs:", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl | Get only the latest N lines of logs:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20198"} {"prompt": "Explain this text code:\n\n10.240.0.96 - - [06/Jun/2018:02:45:51 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\" \"-\"", "completion": "This text code Debugging Kubernetes nodes with crictl |", "language": "text", "type": "code-example_explanation", "context": "Debugging Kubernetes nodes with crictl |", "source": 
"https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/", "id": "k8s_00638_code_explain_20199"} {"prompt": "Explain this bash code:\n\nkubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace", "completion": "This bash code Owners and Dependents | When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the resource. Finalizers prevent accidental deletion of resources your c", "language": "bash", "type": "kubectl-command_explanation", "context": "Owners and Dependents | When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the resource. Finalizers prevent accidental deletion of resources your c", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/", "id": "k8s_00640_code_explain_20200"} {"prompt": "Explain this text code:\n\nservice Registration {\n\trpc Register(RegisterRequest) returns (Empty) {}\n}", "completion": "This text code Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. During the registration, the device plugin needs to send:", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. 
During the registration, the device plugin needs to send:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20201"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8", "completion": " resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "language": "yaml", "type": "pod-definition", "context": "Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20202"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8\n resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "completion": "This yaml code Device Plugins | The general workflow of a device plugin includes the following steps:", "language": "yaml", "type": "pod-definition_explanation", "context": "Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20203"} {"prompt": "Complete this text code:\n\nservice DevicePlugin {\n // 
GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}", "completion": "\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "language": "text", "type": "code-example", "context": "Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20204"} {"prompt": "Explain this text code:\n\nservice DevicePlugin {\n // GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}\n\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "completion": "This text code Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20205"} {"prompt": "Complete this text code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {", "completion": " rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "language": "text", "type": "code-example", "context": "Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20206"} {"prompt": "Explain this text code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {\n rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc 
GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "completion": "This text code Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20207"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. 
If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20208"} {"prompt": "Complete this text code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n", "completion": "// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo topology = 3;\n}\n\n// DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies 
a CDI device information\nmessage CDIDevice {\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "language": "text", "type": "code-example", "context": "Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20209"} {"prompt": "Explain this text code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\n// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo 
topology = 3;\n}\n\n// DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies a CDI device information\nmessage CDIDevice {\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "completion": "This text code Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20210"} {"prompt": "Complete this text code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;", "completion": " repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "language": "text", "type": "code-example", "context": "Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. 
The NUMA cells are identified using an opaque integer ID, whose value is consistent with what devic", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20211"} {"prompt": "Explain this text code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;\n repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "completion": "This text code Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using an opaque integer ID, whose value is consistent with what devic", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using an opaque integer ID, whose value is consistent with what devic", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20212"} {"prompt": "Complete this text code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {", "completion": " string pod_name = 1;\n string pod_namespace = 2;\n}", "language": "text", "type": "code-example", "context": "Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20213"} {"prompt": "Explain this text code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {\n string pod_name = 1;\n string pod_namespace = 2;\n}", "completion": "This text code Device Plugins | To disable this feature, you must 
start your kubelet services with the following flag:", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20214"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20215"} {"prompt": "Complete this text code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}", "completion": "\nmessage NUMANode {\n int64 ID = 1;\n}", "language": "text", "type": "code-example", "context": "Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. 
The", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20216"} {"prompt": "Explain this text code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\nmessage NUMANode {\n int64 ID = 1;\n}", "completion": "This text code Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20217"} {"prompt": "Explain this text code:\n\npluginapi.Device{ID: \"25102017\", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}}", "completion": "This text code Device Plugins | Here are some examples of device plugin implementations:", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | Here are some examples of device plugin implementations:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20218"} {"prompt": "Explain this gRPC code:\n\nservice Registration {\n\trpc Register(RegisterRequest) returns (Empty) {}\n}", "completion": "This gRPC code Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. 
During the registration, the device plugin needs to send:", "language": "gRPC", "type": "code-example_explanation", "context": "Device Plugins | A device plugin can register itself with the kubelet through this gRPC service. During the registration, the device plugin needs to send:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20219"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8", "completion": " resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "language": "yaml", "type": "pod-definition", "context": "Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20220"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8\n resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "completion": "This yaml code Device Plugins | The general workflow of a device plugin includes the following steps:", "language": "yaml", "type": "pod-definition_explanation", "context": "Device Plugins | The general workflow of a device plugin includes the 
following steps:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20221"} {"prompt": "Complete this gRPC code:\n\nservice DevicePlugin {\n // GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}", "completion": "\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "language": "gRPC", "type": "code-example", "context": "Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20222"} {"prompt": "Explain this gRPC code:\n\nservice DevicePlugin {\n // GetDevicePluginOptions returns options to be communicated with Device Manager.\n rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}\n\n // ListAndWatch returns a stream of List of Devices\n // Whenever a Device state change or a Device disappears, ListAndWatch\n // returns the new list\n rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}\n\n // Allocate is called during container creation so that the Device\n // Plugin can run device specific operations and instruct Kubelet\n // of the steps to make the Device available in the container\n rpc Allocate(AllocateRequest) returns (AllocateResponse) {}\n\n // GetPreferredAllocation returns a preferred set of devices to allocate\n // from a list of available ones. The resulting preferred allocation is not\n // guaranteed to be the allocation ultimately performed by the\n // devicemanager. It is only designed to help the devicemanager make a more\n // informed allocation decision when possible.\n rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}\n\n // PreStartContainer is called, if indicated by Device Plugin during registration phase,\n // before each container start. 
Device plugin can run device specific operations\n // such as resetting the device before making devices available to the container.\n rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}\n}", "completion": "This gRPC code Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "language": "gRPC", "type": "code-example_explanation", "context": "Device Plugins | The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20223"} {"prompt": "Complete this gRPC code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {", "completion": " rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "language": "gRPC", "type": "code-example", "context": "Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20224"} {"prompt": "Explain this gRPC code:\n\n// PodResourcesLister is a service provided by the kubelet that provides information about the\n// node resources consumed by pods and containers on the node\nservice PodResourcesLister {\n rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}\n rpc 
GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}\n rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}\n}", "completion": "This gRPC code Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "language": "gRPC", "type": "code-example_explanation", "context": "Device Plugins | The List endpoint provides information on resources of running pods, with details such as the id of exclusively allocated CPUs, device id as it was reported by device plugins and id of the NUMA node w", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20225"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. 
If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20226"} {"prompt": "Complete this gRPC code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n", "completion": "// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo topology = 3;\n}\n\n// DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies 
a CDI device information\nmessage CDIDevice {\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "language": "gRPC", "type": "code-example", "context": "Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20227"} {"prompt": "Explain this gRPC code:\n\n// ListPodResourcesResponse is the response returned by List function\nmessage ListPodResourcesResponse {\n repeated PodResources pod_resources = 1;\n}\n\n// PodResources contains information about the node resources assigned to a pod\nmessage PodResources {\n string name = 1;\n string namespace = 2;\n repeated ContainerResources containers = 3;\n}\n\n// ContainerResources contains information about the resources assigned to a container\nmessage ContainerResources {\n string name = 1;\n repeated ContainerDevices devices = 2;\n repeated int64 cpu_ids = 3;\n repeated ContainerMemory memory = 4;\n repeated DynamicResource dynamic_resources = 5;\n}\n\n// ContainerMemory contains information about memory and hugepages assigned to a container\nmessage ContainerMemory {\n string memory_type = 1;\n uint64 size = 2;\n TopologyInfo topology = 3;\n}\n\n// Topology describes hardware topology of the resource\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\n// NUMA representation of NUMA node\nmessage NUMANode {\n int64 ID = 1;\n}\n\n// ContainerDevices contains information about the devices assigned to a container\nmessage ContainerDevices {\n string resource_name = 1;\n repeated string device_ids = 2;\n TopologyInfo 
topology = 3;\n}\n\n// DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation\nmessage DynamicResource {\n string class_name = 1;\n string claim_name = 2;\n string claim_namespace = 3;\n repeated ClaimResource claim_resources = 4;\n}\n\n// ClaimResource contains per-plugin resource information\nmessage ClaimResource {\n repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = \"CDIDevices\"];\n}\n\n// CDIDevice specifies a CDI device information\nmessage CDIDevice {\n // Fully qualified CDI device name\n // for example: vendor.com/gpu=gpudevice1\n // see more details in the CDI specification:\n // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md\n string name = 1;\n}", "completion": "This gRPC code Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "language": "gRPC", "type": "code-example_explanation", "context": "Device Plugins | cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoi", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20228"} {"prompt": "Complete this gRPC code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;", "completion": " repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "language": "gRPC", "type": "code-example", "context": "Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. 
The NUMA cells are identified using an opaque integer ID, whose value is consistent with what devic", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20229"} {"prompt": "Explain this gRPC code:\n\n// AllocatableResourcesResponses contains information about all the devices known by the kubelet\nmessage AllocatableResourcesResponse {\n repeated ContainerDevices devices = 1;\n repeated int64 cpu_ids = 2;\n repeated ContainerMemory memory = 3;\n}", "completion": "This gRPC code Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using an opaque integer ID, whose value is consistent with what devic", "language": "gRPC", "type": "code-example_explanation", "context": "Device Plugins | ContainerDevices do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using an opaque integer ID, whose value is consistent with what devic", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20230"} {"prompt": "Complete this gRPC code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {", "completion": " string pod_name = 1;\n string pod_namespace = 2;\n}", "language": "gRPC", "type": "code-example", "context": "Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20231"} {"prompt": "Explain this gRPC code:\n\n// GetPodResourcesRequest contains information about the pod\nmessage GetPodResourcesRequest {\n string pod_name = 1;\n string pod_namespace = 2;\n}", "completion": "This gRPC code Device Plugins | To disable this feature, you must 
start your kubelet services with the following flag:", "language": "gRPC", "type": "code-example_explanation", "context": "Device Plugins | To disable this feature, you must start your kubelet services with the following flag:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20232"} {"prompt": "Explain this text code:\n\n--feature-gates=KubeletPodResourcesDynamicResources=false", "completion": "This text code Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo str", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20233"} {"prompt": "Complete this gRPC code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}", "completion": "\nmessage NUMANode {\n int64 ID = 1;\n}", "language": "gRPC", "type": "code-example", "context": "Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. 
The", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20234"} {"prompt": "Explain this gRPC code:\n\nmessage TopologyInfo {\n repeated NUMANode nodes = 1;\n}\n\nmessage NUMANode {\n int64 ID = 1;\n}", "completion": "This gRPC code Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "language": "gRPC", "type": "code-example_explanation", "context": "Device Plugins | Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20235"} {"prompt": "Explain this text code:\n\npluginapi.Device{ID: \"25102017\", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}}", "completion": "This text code Device Plugins | Here are some examples of device plugin implementations:", "language": "text", "type": "code-example_explanation", "context": "Device Plugins | Here are some examples of device plugin implementations:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20236"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8", "completion": " resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices 
available, the\n# remainder would be available for other Pods to use.", "language": "yaml", "type": "pod-definition", "context": "Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_completion_20237"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: demo-pod\nspec:\n containers:\n - name: demo-container-1\n image: registry.k8s.io/pause:3.8\n resources:\n limits:\n hardware-vendor.example/foo: 2\n#\n# This Pod needs 2 of the hardware-vendor.example/foo devices\n# and can only schedule onto a Node that's able to satisfy\n# that need.\n#\n# If the Node has more than 2 of those devices available, the\n# remainder would be available for other Pods to use.", "completion": "This yaml code Device Plugins | The general workflow of a device plugin includes the following steps:", "language": "yaml", "type": "pod-definition_explanation", "context": "Device Plugins | The general workflow of a device plugin includes the following steps:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/", "id": "k8s_00641_code_explain_20238"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. 
The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20239"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20240"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit.yaml --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20241"} {"prompt": "Explain this bash code:\n\nkubectl get pod cpu-demo --output=yaml --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20242"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20243"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This text code Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20244"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\ncpu-demo 974m <something>", "completion": "This text code Assign CPU Resources to Containers and Pods | Recall that by setting -cpus \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. The container's CPU use is being throttled, because", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | Recall that by setting -cpus \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. 
The container's CPU use is being throttled, because", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20245"} {"prompt": "Explain this bash code:\n\nkubectl delete pod cpu-demo --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | CPU requests and limits are associated with Containers, but it is useful to think of a Pod as having a CPU request and limit. The CPU request for a Pod is the sum of the CPU requests for all the Conta", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | CPU requests and limits are associated with Containers, but it is useful to think of a Pod as having a CPU request and limit. The CPU request for a Pod is the sum of the CPU requests for all the Conta", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20246"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20247"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2\n image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", 
"context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20248"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit-2.yaml --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | View the Pod status:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20249"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncpu-demo-2 0/1 Pending 0 7m", "completion": "This text code Assign CPU Resources to Containers and Pods | View detailed information about the Pod, including events:", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | View detailed information about the Pod, including events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20250"} {"prompt": "Explain this bash code:\n\nkubectl describe pod cpu-demo-2 --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient CPU resources on the Nodes:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient CPU resources on the Nodes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20251"} {"prompt": "Complete this text code:\n\nEvents:\n Reason Message", "completion": " ------ -------\n 
FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3).", "language": "text", "type": "code-example", "context": "Assign CPU Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20252"} {"prompt": "Explain this text code:\n\nEvents:\n Reason Message\n ------ -------\n FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3).", "completion": "This text code Assign CPU Resources to Containers and Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20253"} {"prompt": "Explain this bash code:\n\nkubectl delete pod cpu-demo-2 --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | If you do not specify a CPU limit for a Container, then one of these situations applies:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | If you do not specify a CPU limit for a Container, then one of these situations applies:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20254"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it 
starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20255"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20256"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit.yaml --namespace=cpu-example", "completion": "This shell code Assign CPU Resources to Containers and Pods | Verify that the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20257"} {"prompt": "Explain this shell code:\n\nkubectl get pod cpu-demo --output=yaml --namespace=cpu-example", "completion": "This shell code Assign CPU Resources to Containers and Pods | The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "language": "shell", 
"type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20258"} {"prompt": "Complete this yaml code:\n\nresources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: 500m", "language": "yaml", "type": "kubernetes-manifest", "context": "Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20259"} {"prompt": "Explain this yaml code:\n\nresources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This yaml code Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20260"} {"prompt": "Explain this text code:\n\nNAME CPU(cores) MEMORY(bytes)\ncpu-demo 974m <something>", "completion": "This text code Assign CPU Resources to Containers and Pods | Recall that by setting -cpu \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. The container's CPU use is being throttled, because", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | Recall that by setting -cpu \"2\", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. 
The container's CPU use is being throttled, because", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20261"} {"prompt": "Explain this shell code:\n\nkubectl delete pod cpu-demo --namespace=cpu-example", "completion": "This shell code Assign CPU Resources to Containers and Pods | CPU requests and limits are associated with Containers, but it is useful to think of a Pod as having a CPU request and limit. The CPU request for a Pod is the sum of the CPU requests for all the Conta", "language": "shell", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | CPU requests and limits are associated with Containers, but it is useful to think of a Pod as having a CPU request and limit. The CPU request for a Pod is the sum of the CPU requests for all the Conta", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20262"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20263"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2\n image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", 
"context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20264"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit-2.yaml --namespace=cpu-example", "completion": "This shell code Assign CPU Resources to Containers and Pods | View the Pod status:", "language": "shell", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20265"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncpu-demo-2 0/1 Pending 0 7m", "completion": "This text code Assign CPU Resources to Containers and Pods | View detailed information about the Pod, including events:", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | View detailed information about the Pod, including events:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20266"} {"prompt": "Explain this shell code:\n\nkubectl describe pod cpu-demo-2 --namespace=cpu-example", "completion": "This shell code Assign CPU Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient CPU resources on the Nodes:", "language": "shell", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient CPU resources on the Nodes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20267"} {"prompt": "Complete this text code:\n\nEvents:\n Reason Message", "completion": " ------ -------\n 
FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3).", "language": "text", "type": "code-example", "context": "Assign CPU Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20268"} {"prompt": "Explain this text code:\n\nEvents:\n Reason Message\n ------ -------\n FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3).", "completion": "This text code Assign CPU Resources to Containers and Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20269"} {"prompt": "Explain this shell code:\n\nkubectl delete pod cpu-demo-2 --namespace=cpu-example", "completion": "This shell code Assign CPU Resources to Containers and Pods | If you do not specify a CPU limit for a Container, then one of these situations applies:", "language": "shell", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | If you do not specify a CPU limit for a Container, then one of these situations applies:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20270"} {"prompt": "Complete this yaml code:\n\npods/resource/cpu-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides 
arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20271"} {"prompt": "Explain this yaml code:\n\npods/resource/cpu-request-limit.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20272"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. 
The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20273"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr\n image: vish/stress\n resources:\n limits:\n cpu: \"1\"\n requests:\n cpu: \"0.5\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign CPU Resources to Containers and Pods | The args section of the configuration file provides arguments for the container when it starts. The -cpus \"2\" argument tells the Container to attempt to use 2 CPUs.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20274"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit.yaml --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20275"} {"prompt": "Explain this bash code:\n\nkubectl get pod cpu-demo --output=yaml --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20276"} {"prompt": "Complete this text code:\n\nresources:\n limits:", "completion": " cpu: \"1\"\n requests:\n cpu: 500m", "language": "text", "type": "code-example", "context": "Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20277"} {"prompt": "Explain this text code:\n\nresources:\n limits:\n cpu: \"1\"\n requests:\n cpu: 500m", "completion": "This text code Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "language": "text", "type": "code-example_explanation", "context": "Assign CPU Resources to Containers and Pods | Use kubectl top to fetch the metrics for the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20278"} {"prompt": "Explain this bash code:\n\nkubectl delete pod cpu-demo --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | CPU requests and limits are associated with Containers, but it is useful to think of a Pod as having a CPU request and limit. The CPU request for a Pod is the sum of the CPU requests for all the Conta", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | CPU requests and limits are associated with Containers, but it is useful to think of a Pod as having a CPU request and limit. 
The CPU request for a Pod is the sum of the CPU requests for all the Conta", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20279"} {"prompt": "Complete this yaml code:\n\npods/resource/cpu-request-limit-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_completion_20280"} {"prompt": "Explain this yaml code:\n\npods/resource/cpu-request-limit-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2\n image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20281"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2", "completion": " image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "language": "yaml", "type": "pod-definition", "context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": 
"k8s_00642_code_completion_20282"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: cpu-demo-2\n namespace: cpu-example\nspec:\n containers:\n - name: cpu-demo-ctr-2\n image: vish/stress\n resources:\n limits:\n cpu: \"100\"\n requests:\n cpu: \"100\"\n args:\n - -cpus\n - \"2\"", "completion": "This yaml code Assign CPU Resources to Containers and Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign CPU Resources to Containers and Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20283"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit-2.yaml --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | View the Pod status:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20284"} {"prompt": "Explain this bash code:\n\nkubectl describe pod cpu-demo-2 --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient CPU resources on the Nodes:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | The output shows that the Container cannot be scheduled because of insufficient CPU resources on the Nodes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20285"} {"prompt": "Explain this bash code:\n\nkubectl delete pod cpu-demo-2 --namespace=cpu-example", "completion": "This bash code Assign CPU Resources to Containers and Pods | If 
you do not specify a CPU limit for a Container, then one of these situations applies:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign CPU Resources to Containers and Pods | If you do not specify a CPU limit for a Container, then one of these situations applies:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/", "id": "k8s_00642_code_explain_20286"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nweave-net-1t1qg 2/2 Running 0 9d 192.168.2.10 worknode3", "completion": "weave-net-231d7 2/2 Running 1 7d 10.2.0.17 worknodegpu\nweave-net-7nmwt 2/2 Running 3 9d 192.168.2.131 masternode\nweave-net-pmw8w 2/2 Running 0 9d 192.168.2.216 worknode2", "language": "text", "type": "code-example", "context": "Weave Net for NetworkPolicy | Each Node has a weave Pod, and all Pods are Running and 2/2 READY. (2/2 means that each Pod has weave and weave-npc.)", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/", "id": "k8s_00644_code_completion_20287"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nweave-net-1t1qg 2/2 Running 0 9d 192.168.2.10 worknode3\nweave-net-231d7 2/2 Running 1 7d 10.2.0.17 worknodegpu\nweave-net-7nmwt 2/2 Running 3 9d 192.168.2.131 masternode\nweave-net-pmw8w 2/2 Running 0 9d 192.168.2.216 worknode2", "completion": "This text code Weave Net for NetworkPolicy | Each Node has a weave Pod, and all Pods are Running and 2/2 READY. (2/2 means that each Pod has weave and weave-npc.)", "language": "text", "type": "code-example_explanation", "context": "Weave Net for NetworkPolicy | Each Node has a weave Pod, and all Pods are Running and 2/2 READY. 
(2/2 means that each Pod has weave and weave-npc.)", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/", "id": "k8s_00644_code_explain_20288"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nweave-net-1t1qg 2/2 Running 0 9d 192.168.2.10 worknode3", "completion": "weave-net-231d7 2/2 Running 1 7d 10.2.0.17 worknodegpu\nweave-net-7nmwt 2/2 Running 3 9d 192.168.2.131 masternode\nweave-net-pmw8w 2/2 Running 0 9d 192.168.2.216 worknode2", "language": "text", "type": "code-example", "context": "Weave Net for NetworkPolicy | Each Node has a weave Pod, and all Pods are Running and 2/2 READY. (2/2 means that each Pod has weave and weave-npc.)", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/", "id": "k8s_00644_code_completion_20289"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nweave-net-1t1qg 2/2 Running 0 9d 192.168.2.10 worknode3\nweave-net-231d7 2/2 Running 1 7d 10.2.0.17 worknodegpu\nweave-net-7nmwt 2/2 Running 3 9d 192.168.2.131 masternode\nweave-net-pmw8w 2/2 Running 0 9d 192.168.2.216 worknode2", "completion": "This text code Weave Net for NetworkPolicy | Each Node has a weave Pod, and all Pods are Running and 2/2 READY. (2/2 means that each Pod has weave and weave-npc.)", "language": "text", "type": "code-example_explanation", "context": "Weave Net for NetworkPolicy | Each Node has a weave Pod, and all Pods are Running and 2/2 READY. 
(2/2 means that each Pod has weave and weave-npc.)", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/", "id": "k8s_00644_code_explain_20290"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Documentation Style Guide | Code examples and configuration examples that include version information should be consistent with the accompanying text.", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | Code examples and configuration examples that include version information should be consistent with the accompanying text.", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20291"} {"prompt": "Complete this text code:\n\n---\ntitle: <your tutorial title here>", "completion": "min-kubernetes-server-version: v1.8\n---", "language": "text", "type": "code-example", "context": "Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. Be careful to not include incorrect statements in your examples as comments, such as:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_completion_20292"} {"prompt": "Explain this text code:\n\n---\ntitle: <your tutorial title here>\nmin-kubernetes-server-version: v1.8\n---", "completion": "This text code Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. Be careful to not include incorrect statements in your examples as comments, such as:", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. 
Be careful to not include incorrect statements in your examples as comments, such as:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20293"} {"prompt": "Explain this yaml code:\n\napiVersion: v1 # earlier versions use...\nkind: Pod\n...", "completion": "This yaml code Documentation Style Guide | You can use the Docsy support for diagrams and formulae.", "language": "yaml", "type": "pod-definition_explanation", "context": "Documentation Style Guide | You can use the Docsy support for diagrams and formulae.", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20294"} {"prompt": "Explain this json code:\n\n{{< note >}}\nNo need to include a prefix; the shortcode automatically provides one. (Note:, Caution:, etc.)\n{{< /note >}}", "completion": "This json code Documentation Style Guide | The output is:", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20295"} {"prompt": "Explain this json code:\n\n{{< note >}}\nYou can _still_ use Markdown inside these callouts.\n{{< /note >}}", "completion": "This json code Documentation Style Guide | The output is:", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20296"} {"prompt": "Complete this text code:\n\n1. Use the note shortcode in a list\n\n1. A second item with an embedded note\n\n {{< note >}}", "completion": " Warning, Caution, and Note shortcodes, embedded in lists, need to be indented four spaces. See [Common Shortcode Issues](#common-shortcode-issues).\n {{< /note >}}\n\n1. A third item in a list\n\n1. 
A fourth item in a list", "language": "text", "type": "code-example", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_completion_20297"} {"prompt": "Explain this text code:\n\n1. Use the note shortcode in a list\n\n1. A second item with an embedded note\n\n {{< note >}}\n Warning, Caution, and Note shortcodes, embedded in lists, need to be indented four spaces. See [Common Shortcode Issues](#common-shortcode-issues).\n {{< /note >}}\n\n1. A third item in a list\n\n1. A fourth item in a list", "completion": "This text code Documentation Style Guide | The output is:", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20298"} {"prompt": "Explain this text code:\n\nWarning, Caution, and Note shortcodes, embedded in lists, need to be indented four spaces. See [Common Shortcode Issues](#common-shortcode-issues).", "completion": "This text code Documentation Style Guide | A third item in a list", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | A third item in a list", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20299"} {"prompt": "Explain this json code:\n\n{{< caution >}}\nThe callout style only applies to the line directly above the tag.\n{{< /caution >}}", "completion": "This json code Documentation Style Guide | The output is:", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20300"} {"prompt": "Complete this text code:\n\n1. Preheat oven to 350˚F\n\n1. 
Prepare the batter, and pour into springform pan.", "completion": " {{< note >}}Grease the pan for best results.{{< /note >}}\n\n1. Bake for 20-25 minutes or until set.", "language": "text", "type": "code-example", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_completion_20301"} {"prompt": "Explain this text code:\n\n1. Preheat oven to 350˚F\n\n1. Prepare the batter, and pour into springform pan.\n {{< note >}}Grease the pan for best results.{{< /note >}}\n\n1. Bake for 20-25 minutes or until set.", "completion": "This text code Documentation Style Guide | The output is:", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20302"} {"prompt": "Explain this json code:\n\n{{< note >}}\n{{< include \"task-tutorial-prereqs.md\" >}}\n{{< /note >}}", "completion": "This json code Documentation Style Guide | Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headin", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headin", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20303"} {"prompt": "Explain this text code:\n\n/apis/apps/v1/namespaces/default/deployments/my-app", "completion": "This text code Documentation Style Guide | Kubernetes documentation also uses \"resource\" to talk about CPU and memory requests and limits. 
It's very often a good idea to refer to API resources as \"API resources\"; that helps to avoid confusion", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | Kubernetes documentation also uses \"resource\" to talk about CPU and memory requests and limits. It's very often a good idea to refer to API resources as \"API resources\"; that helps to avoid confusion", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20304"} {"prompt": "Explain this console code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This console code Documentation Style Guide | Code examples and configuration examples that include version information should be consistent with the accompanying text.", "language": "console", "type": "code-example_explanation", "context": "Documentation Style Guide | Code examples and configuration examples that include version information should be consistent with the accompanying text.", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20305"} {"prompt": "Complete this yaml code:\n\n---\ntitle: <your tutorial title here>", "completion": "min-kubernetes-server-version: v1.8\n---", "language": "yaml", "type": "kubernetes-manifest", "context": "Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. Be careful to not include incorrect statements in your examples as comments, such as:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_completion_20306"} {"prompt": "Explain this yaml code:\n\n---\ntitle: <your tutorial title here>\nmin-kubernetes-server-version: v1.8\n---", "completion": "This yaml code Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. 
Be careful to not include incorrect statements in your examples as comments, such as:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. Be careful to not include incorrect statements in your examples as comments, such as:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20307"} {"prompt": "Explain this yaml code:\n\napiVersion: v1 # earlier versions use...\nkind: Pod\n...", "completion": "This yaml code Documentation Style Guide | You can use the Docsy support for diagrams and formulae.", "language": "yaml", "type": "pod-definition_explanation", "context": "Documentation Style Guide | You can use the Docsy support for diagrams and formulae.", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20308"} {"prompt": "Explain this none code:\n\n{{< note >}}\nNo need to include a prefix; the shortcode automatically provides one. (Note:, Caution:, etc.)\n{{< /note >}}", "completion": "This none code Documentation Style Guide | The output is:", "language": "none", "type": "code-example_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20309"} {"prompt": "Explain this json code:\n\n{{< note >}}\nYou can _still_ use Markdown inside these callouts.\n{{< /note >}}", "completion": "This json code Documentation Style Guide | The output is:", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20310"} {"prompt": "Complete this text code:\n\n1. Use the note shortcode in a list\n\n1. 
A second item with an embedded note\n\n {{< note >}}", "completion": " Warning, Caution, and Note shortcodes, embedded in lists, need to be indented four spaces. See [Common Shortcode Issues](#common-shortcode-issues).\n {{< /note >}}\n\n1. A third item in a list\n\n1. A fourth item in a list", "language": "text", "type": "code-example", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_completion_20311"} {"prompt": "Explain this text code:\n\n1. Use the note shortcode in a list\n\n1. A second item with an embedded note\n\n {{< note >}}\n Warning, Caution, and Note shortcodes, embedded in lists, need to be indented four spaces. See [Common Shortcode Issues](#common-shortcode-issues).\n {{< /note >}}\n\n1. A third item in a list\n\n1. A fourth item in a list", "completion": "This text code Documentation Style Guide | The output is:", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20312"} {"prompt": "Explain this text code:\n\nWarning, Caution, and Note shortcodes, embedded in lists, need to be indented four spaces. 
See [Common Shortcode Issues](#common-shortcode-issues).", "completion": "This text code Documentation Style Guide | A third item in a list", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | A third item in a list", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20313"} {"prompt": "Explain this json code:\n\n{{< caution >}}\nThe callout style only applies to the line directly above the tag.\n{{< /caution >}}", "completion": "This json code Documentation Style Guide | The output is:", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20314"} {"prompt": "Complete this text code:\n\n1. Preheat oven to 350˚F\n\n1. Prepare the batter, and pour into springform pan.", "completion": " {{< note >}}Grease the pan for best results.{{< /note >}}\n\n1. Bake for 20-25 minutes or until set.", "language": "text", "type": "code-example", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_completion_20315"} {"prompt": "Explain this text code:\n\n1. Preheat oven to 350˚F\n\n1. Prepare the batter, and pour into springform pan.\n {{< note >}}Grease the pan for best results.{{< /note >}}\n\n1. 
Bake for 20-25 minutes or until set.", "completion": "This text code Documentation Style Guide | The output is:", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | The output is:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20316"} {"prompt": "Explain this json code:\n\n{{< note >}}\n{{< include \"task-tutorial-prereqs.md\" >}}\n{{< /note >}}", "completion": "This json code Documentation Style Guide | Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headin", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headin", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20317"} {"prompt": "Explain this json code:\n\n[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)", "completion": "This json code Documentation Style Guide | Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. 
When a screen reader comes across a list—whether it is", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20318"} {"prompt": "Explain this text code:\n\n<a href=\"/media/examples/link-element-example.css\" target=\"_blank\">Visit our tutorial!</a>", "completion": "This text code Documentation Style Guide | Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20319"} {"prompt": "Explain this json code:\n\n[example website](https://example.com){target=\"_blank\"}", "completion": "This json code Documentation Style Guide | Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is", "language": "json", "type": "api-response_explanation", "context": "Documentation Style Guide | Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. 
When a screen reader comes across a list—whether it is", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20320"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nnginx 1/1 Running 0 13s 10.200.0.4 worker0", "completion": "This text code Documentation Style Guide | Code examples and configuration examples that include version information should be consistent with the accompanying text.", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | Code examples and configuration examples that include version information should be consistent with the accompanying text.", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20321"} {"prompt": "Complete this text code:\n\n---\ntitle: <your tutorial title here>", "completion": "min-kubernetes-server-version: v1.8\n---", "language": "text", "type": "code-example", "context": "Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. Be careful to not include incorrect statements in your examples as comments, such as:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_completion_20322"} {"prompt": "Explain this text code:\n\n---\ntitle: <your tutorial title here>\nmin-kubernetes-server-version: v1.8\n---", "completion": "This text code Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. Be careful to not include incorrect statements in your examples as comments, such as:", "language": "text", "type": "code-example_explanation", "context": "Documentation Style Guide | In code and configuration examples, do not include comments about alternative versions. 
Be careful to not include incorrect statements in your examples as comments, such as:", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20323"} {"prompt": "Explain this yaml code:\n\napiVersion: v1 # earlier versions use...\nkind: Pod\n...", "completion": "This yaml code Documentation Style Guide | You can use the Docsy support for diagrams and formulae.", "language": "yaml", "type": "pod-definition_explanation", "context": "Documentation Style Guide | You can use the Docsy support for diagrams and formulae.", "source": "https://kubernetes.io/docs/contribute/style/style-guide/", "id": "k8s_00645_code_explain_20324"} {"prompt": "Explain this text code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\n alessandras-namespace alessandras-pod 0/1 CrashLoopBackOff 200 2d9h", "completion": "This text code Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "language": "text", "type": "code-example_explanation", "context": "Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. 
You can use the flag --force to terminate a Pod by force.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20325"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:", "completion": " - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20326"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:\n - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20327"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:", "completion": " - name: init-once # This init container will only try once. If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20328"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:\n - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20329"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes", "completion": " image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "language": "yaml", "type": "pod-definition", "context": "Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20330"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "completion": "This yaml code Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20331"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20332"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20333"} {"prompt": "Complete this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Lifecycle | If you use this 
feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20334"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20335"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": "pod-definition", "context": "Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20336"} {"prompt": "Explain this yaml code:\n\nkind: 
Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "completion": "This yaml code Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20337"} {"prompt": "Complete this text code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "text", "type": "code-example", "context": "Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20338"} {"prompt": "Explain this text code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This text code Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "language": "text", "type": "code-example_explanation", "context": "Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20339"} {"prompt": "Explain this text code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\n alessandras-namespace alessandras-pod 0/1 CrashLoopBackOff 200 2d9h", "completion": "This text code Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "language": "text", "type": "code-example_explanation", "context": "Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20340"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:", "completion": " - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20341"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:\n - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20342"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:", "completion": " - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20343"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:\n - name: init-once # This init container will only try once. If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. 
This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20344"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes", "completion": " image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "language": "yaml", "type": "pod-definition", "context": "Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20345"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "completion": "This yaml code Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20346"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20347"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20348"} 
{"prompt": "Complete this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20349"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20350"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": 
"pod-definition", "context": "Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20351"} {"prompt": "Explain this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "completion": "This yaml code Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20352"} {"prompt": "Complete this yaml code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "yaml", "type": "kubernetes-manifest", "context": "Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20353"} {"prompt": "Explain this yaml code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This yaml code Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20354"} {"prompt": "Complete this text code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "text", "type": "code-example", "context": "Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_completion_20367"} {"prompt": "Explain this text code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This text code Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "language": "text", "type": "code-example_explanation", "context": "Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/", "id": "k8s_00646_code_explain_20368"} {"prompt": "Complete this text code:\n\n# Build the current working directory\n kubectl kustomize\n \n # Build some shared configuration directory", "completion": " kubectl kustomize /home/config/production\n \n # Build from github\n kubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "language": "text", "type": "code-example", "context": "kubectl kustomize | use the uid and gid of the command executor to run the function in the container", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_kustomize/", "id": "k8s_00648_code_completion_20369"} {"prompt": "Explain this text code:\n\n# Build the current working directory\n kubectl kustomize\n \n # Build some shared configuration directory\n kubectl kustomize /home/config/production\n \n # Build from github\n kubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "completion": "This 
text code kubectl kustomize | use the uid and gid of the command executor to run the function in the container", "language": "text", "type": "code-example_explanation", "context": "kubectl kustomize | use the uid and gid of the command executor to run the function in the container", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_kustomize/", "id": "k8s_00648_code_explain_20370"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n shareProcessNamespace: true\n containers:\n - name: nginx\n image: nginx", "completion": " - name: shell\n image: 
busybox:1.28\n command: [\"sleep\", \"3600\"]\n securityContext:\n capabilities:\n add:\n - SYS_PTRACE\n stdin: true\n tty: true", "language": "yaml", "type": "pod-definition", "context": "Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20373"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n shareProcessNamespace: true\n containers:\n - name: nginx\n image: nginx\n - name: shell\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n securityContext:\n capabilities:\n add:\n - SYS_PTRACE\n stdin: true\n tty: true", "completion": "This yaml code Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20374"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/share-process-namespace.yaml", "completion": "This bash code Share Process Namespace between Containers in a Pod | Attach to the shell container and run ps:", "language": "bash", "type": "kubectl-command_explanation", "context": "Share Process Namespace between Containers in a Pod | Attach to the shell container and run ps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20375"} {"prompt": "Complete this text code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;", "completion": " 14 101 0:00 nginx: worker process\n 15 root 0:00 sh\n 21 root 0:00 ps ax", "language": "text", "type": "code-example", 
"context": "Share Process Namespace between Containers in a Pod | You can signal processes in other containers. For example, send SIGHUP to nginx to restart the worker process. This requires the SYS_PTRACE capability.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20376"} {"prompt": "Explain this text code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;\n 14 101 0:00 nginx: worker process\n 15 root 0:00 sh\n 21 root 0:00 ps ax", "completion": "This text code Share Process Namespace between Containers in a Pod | You can signal processes in other containers. For example, send SIGHUP to nginx to restart the worker process. This requires the SYS_PTRACE capability.", "language": "text", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | You can signal processes in other containers. For example, send SIGHUP to nginx to restart the worker process. 
This requires the SYS_PTRACE capability.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20377"} {"prompt": "Explain this text code:\n\n# run this inside the \"shell\" container\nkill -HUP 8 # change \"8\" to match the PID of the nginx leader process, if necessary\nps ax", "completion": "This text code Share Process Namespace between Containers in a Pod | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20378"} {"prompt": "Complete this text code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;", "completion": " 15 root 0:00 sh\n 22 101 0:00 nginx: worker process\n 23 root 0:00 ps ax", "language": "text", "type": "code-example", "context": "Share Process Namespace between Containers in a Pod | It's even possible to access the file system of another container using the /proc/$pid/root link.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20379"} {"prompt": "Explain this text code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;\n 15 root 0:00 sh\n 22 101 0:00 nginx: worker process\n 23 root 0:00 ps ax", "completion": "This text code Share Process Namespace between Containers in a Pod | It's even possible to access the file system of another container using the /proc/$pid/root link.", "language": "text", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | It's even possible to access the file system of another container using the /proc/$pid/root link.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20380"} {"prompt": "Explain this text code:\n\n# run this inside the \"shell\" container\n# change \"8\" to the PID of the Nginx process, if necessary\nhead /proc/8/root/etc/nginx/nginx.conf", "completion": "This text code Share Process Namespace between Containers in a Pod | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20381"} {"prompt": "Complete this text code:\n\nuser nginx;\nworker_processes 1;\n\nerror_log /var/log/nginx/error.log warn;", "completion": "pid /var/run/nginx.pid;\n\n\nevents {\n worker_connections 1024;", "language": "text", "type": "code-example", "context": "Share Process Namespace between Containers in a Pod | Pods share many resources so it makes sense they would also share a process namespace. Some containers may expect to be isolated from others, though, so it's important to understand the differences:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20382"} {"prompt": "Explain this text code:\n\nuser nginx;\nworker_processes 1;\n\nerror_log /var/log/nginx/error.log warn;\npid /var/run/nginx.pid;\n\n\nevents {\n worker_connections 1024;", "completion": "This text code Share Process Namespace between Containers in a Pod | Pods share many resources so it makes sense they would also share a process namespace. 
Some containers may expect to be isolated from others, though, so it's important to understand the differences:", "language": "text", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | Pods share many resources so it makes sense they would also share a process namespace. Some containers may expect to be isolated from others, though, so it's important to understand the differences:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20383"} {"prompt": "Explain this shell code:\n\nkubectl 
apply -f https://k8s.io/examples/pods/share-process-namespace.yaml", "completion": "This shell code Share Process Namespace between Containers in a Pod | Attach to the shell container and run ps:", "language": "shell", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | Attach to the shell container and run ps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20386"} {"prompt": "Complete this none code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;", "completion": " 14 101 0:00 nginx: worker process\n 15 root 0:00 sh\n 21 root 0:00 ps ax", "language": "none", "type": "code-example", "context": "Share Process Namespace between Containers in a Pod | You can signal processes in other containers. For example, send SIGHUP to nginx to restart the worker process. This requires the SYS_PTRACE capability.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20387"} {"prompt": "Explain this none code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;\n 14 101 0:00 nginx: worker process\n 15 root 0:00 sh\n 21 root 0:00 ps ax", "completion": "This none code Share Process Namespace between Containers in a Pod | You can signal processes in other containers. For example, send SIGHUP to nginx to restart the worker process. This requires the SYS_PTRACE capability.", "language": "none", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | You can signal processes in other containers. For example, send SIGHUP to nginx to restart the worker process. 
This requires the SYS_PTRACE capability.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20388"} {"prompt": "Explain this shell code:\n\n# run this inside the \"shell\" container\nkill -HUP 8 # change \"8\" to match the PID of the nginx leader process, if necessary\nps ax", "completion": "This shell code Share Process Namespace between Containers in a Pod | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20389"} {"prompt": "Complete this none code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;", "completion": " 15 root 0:00 sh\n 22 101 0:00 nginx: worker process\n 23 root 0:00 ps ax", "language": "none", "type": "code-example", "context": "Share Process Namespace between Containers in a Pod | It's even possible to access the file system of another container using the /proc/$pid/root link.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20390"} {"prompt": "Explain this none code:\n\nPID USER TIME COMMAND\n 1 root 0:00 /pause\n 8 root 0:00 nginx: master process nginx -g daemon off;\n 15 root 0:00 sh\n 22 101 0:00 nginx: worker process\n 23 root 0:00 ps ax", "completion": "This none code Share Process Namespace between Containers in a Pod | It's even possible to access the file system of another container using the /proc/$pid/root link.", "language": "none", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | It's even possible to access the file system of another container using the /proc/$pid/root link.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20391"} {"prompt": "Explain this shell code:\n\n# run this inside the \"shell\" container\n# change \"8\" to the PID of the Nginx process, if necessary\nhead /proc/8/root/etc/nginx/nginx.conf", "completion": "This shell code Share Process Namespace between Containers in a Pod | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20392"} {"prompt": "Complete this none code:\n\nuser nginx;\nworker_processes 1;\n\nerror_log /var/log/nginx/error.log warn;", "completion": "pid /var/run/nginx.pid;\n\n\nevents {\n worker_connections 1024;", "language": "none", "type": "code-example", "context": "Share Process Namespace between Containers in a Pod | Pods share many resources so it makes sense they would also share a process namespace. Some containers may expect to be isolated from others, though, so it's important to understand the differences:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20393"} {"prompt": "Explain this none code:\n\nuser nginx;\nworker_processes 1;\n\nerror_log /var/log/nginx/error.log warn;\npid /var/run/nginx.pid;\n\n\nevents {\n worker_connections 1024;", "completion": "This none code Share Process Namespace between Containers in a Pod | Pods share many resources so it makes sense they would also share a process namespace. 
Some containers may expect to be isolated from others, though, so it's important to understand the differences:", "language": "none", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | Pods share many resources so it makes sense they would also share a process namespace. Some containers may expect to be isolated from others, though, so it's important to understand the differences:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20394"} {"prompt": "Complete this yaml code:\n\npods/share-process-namespace.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n shareProcessNamespace: true\n containers:\n - name: nginx", "completion": " image: nginx\n - name: shell\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n securityContext:\n capabilities:\n add:\n - SYS_PTRACE\n stdin: true\n tty: true", "language": "yaml", "type": "pod-definition", "context": "Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20395"} {"prompt": "Explain this yaml code:\n\npods/share-process-namespace.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n shareProcessNamespace: true\n containers:\n - name: nginx\n image: nginx\n - name: shell\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n securityContext:\n capabilities:\n add:\n - SYS_PTRACE\n stdin: true\n tty: true", "completion": "This yaml code Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": 
"k8s_00651_code_explain_20396"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n shareProcessNamespace: true\n containers:\n - name: nginx\n image: nginx", "completion": " - name: shell\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n securityContext:\n capabilities:\n add:\n - SYS_PTRACE\n stdin: true\n tty: true", "language": "yaml", "type": "pod-definition", "context": "Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_completion_20397"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n shareProcessNamespace: true\n containers:\n - name: nginx\n image: nginx\n - name: shell\n image: busybox:1.28\n command: [\"sleep\", \"3600\"]\n securityContext:\n capabilities:\n add:\n - SYS_PTRACE\n stdin: true\n tty: true", "completion": "This yaml code Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "language": "yaml", "type": "pod-definition_explanation", "context": "Share Process Namespace between Containers in a Pod | Create the pod nginx on your cluster:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20398"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/share-process-namespace.yaml", "completion": "This bash code Share Process Namespace between Containers in a Pod | Attach to the shell container and run ps:", "language": "bash", "type": "kubectl-command_explanation", "context": "Share Process Namespace between Containers in a Pod | Attach to the shell container and run ps:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20399"} {"prompt": "Explain this text 
code:\n\n# run this inside the \"shell\" container\nkill -HUP 8 # change \"8\" to match the PID of the nginx leader process, if necessary\nps ax", "completion": "This text code Share Process Namespace between Containers in a Pod | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20400"} {"prompt": "Explain this text code:\n\n# run this inside the \"shell\" container\n# change \"8\" to the PID of the Nginx process, if necessary\nhead /proc/8/root/etc/nginx/nginx.conf", "completion": "This text code Share Process Namespace between Containers in a Pod | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Share Process Namespace between Containers in a Pod | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/", "id": "k8s_00651_code_explain_20401"} {"prompt": "Explain this bash code:\n\nkubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]", "completion": "This bash code kubectl create service nodeport | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create service nodeport | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_nodeport/", "id": "k8s_00652_code_explain_20402"} {"prompt": "Explain this text code:\n\n# Create a new NodePort service named my-ns\n kubectl create service nodeport my-ns --tcp=5678:8080", "completion": "This text code kubectl create service nodeport | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl create service nodeport | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_service_nodeport/", "id": "k8s_00652_code_explain_20403"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20406"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20407"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.windowsOptions.hostProcess", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": 
"code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20408"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.privileged", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20409"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.add", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20410"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.add", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20411"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.add", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20412"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.postStart.tcpSocket.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": 
"code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20413"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.preStop.tcpSocket.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20414"} {"prompt": "Explain this text code:\n\nspec.containers[*].lifecycle.postStart.httpGet.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20415"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].livenessProbe.tcpSocket.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20416"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].readinessProbe.tcpSocket.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20417"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.postStart.tcpSocket.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": 
"code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20418"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.preStop.tcpSocket.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20419"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.postStart.httpGet.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20420"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].lifecycle.preStop.httpGet.host", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20421"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.appArmorProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20422"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.appArmorProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": 
"code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20423"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.appArmorProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20424"} {"prompt": "Explain this text code:\n\nmetadata.annotations[\"container.apparmor.security.beta.kubernetes.io/*\"]", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20425"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20426"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20427"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.type", "completion": "This text code Pod Security Standards | Allowed 
Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20428"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.user", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20429"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.user", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20430"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.user", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20431"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seLinuxOptions.role", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20432"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seLinuxOptions.role", "completion": "This text code Pod Security Standards | 
Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20433"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seLinuxOptions.role", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20434"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.procMount", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20435"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seccompProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20436"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seccompProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20437"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seccompProfile.type", "completion": "This text code Pod Security 
Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20438"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20439"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20440"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.allowPrivilegeEscalation", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20441"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.runAsNonRoot", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20442"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsNonRoot", "completion": "This text 
code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20443"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsUser", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20444"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.seccompProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20445"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seccompProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20446"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seccompProfile.type", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20447"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.drop", "completion": "This text 
code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20448"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.drop", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20449"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.drop", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20450"} {"prompt": "Explain this text code:\n\nspec.containers[*].securityContext.capabilities.add", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20451"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.capabilities.add", "completion": "This text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20452"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.capabilities.add", "completion": "This 
text code Pod Security Standards | Allowed Values", "language": "text", "type": "code-example_explanation", "context": "Pod Security Standards | Allowed Values", "source": "https://kubernetes.io/docs/concepts/security/pod-security-standards/", "id": "k8s_00654_code_explain_20453"} {"prompt": "Explain this bash code:\n\nkubectl create secret tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]", "completion": "This bash code kubectl create secret tls | Create a TLS secret named NAME from the given public/private key pair; the pair must exist beforehand and the certificate must be PEM encoded and match the given private key. The optional --dry-run flag previews the object without persisting it.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl create secret tls | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_tls/", "id": "k8s_00655_code_explain_20454"} {"prompt": "Explain this text code:\n\n# Create a new TLS secret named tls-secret with the given key pair\n kubectl create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key", "completion": "This text code kubectl create secret tls | Create a new TLS secret named tls-secret from the given certificate (tls.crt) and private key (tls.key) files.", "language": "text", "type": "code-example_explanation", "context": "kubectl create secret tls | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_tls/", "id": "k8s_00655_code_explain_20455"} {"prompt": "Explain this bash code:\n\nkubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .spec.podCIDRs}}{{printf \"%s\\n\" .}}{{end}}'", "completion": "This bash code Validate IPv4/IPv6 dual-stack | There should be one IPv4 block and one IPv6 block allocated.", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | There should be one IPv4 block and one IPv6 block allocated.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20458"} {"prompt": "Explain this bash code:\n\nkubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf \"%s: %s\\n\" .type .address}}{{end}}'", "completion": "This bash code Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20459"} {"prompt": "Explain this text code:\n\nHostname: k8s-linuxpool1-34450317-0\nInternalIP: 10.0.0.5\nInternalIP: 2001:db8:10::5", "completion": "This text code Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. 
In this example the Pod name is pod01:", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20460"} {"prompt": "Explain this bash code:\n\nkubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf \"%s\\n\" .ip}}{{end}}'", "completion": "This bash code Validate IPv4/IPv6 dual-stack | You can also validate Pod IPs using the Downward API via the status.podIPs fieldPath. The following snippet demonstrates how you can expose the Pod IPs via an environment variable called MY_POD_IPS wi", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | You can also validate Pod IPs using the Downward API via the status.podIPs fieldPath. The following snippet demonstrates how you can expose the Pod IPs via an environment variable called MY_POD_IPS wi", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20461"} {"prompt": "Complete this text code:\n\nenv:\n - name: MY_POD_IPS", "completion": " valueFrom:\n fieldRef:\n fieldPath: status.podIPs", "language": "text", "type": "code-example", "context": "Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. 
The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20462"} {"prompt": "Explain this text code:\n\nenv:\n - name: MY_POD_IPS\n valueFrom:\n fieldRef:\n fieldPath: status.podIPs", "completion": "This text code Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20463"} {"prompt": "Complete this text code:\n\n# Kubernetes-managed hosts file.\n127.0.0.1 localhost\n::1 localhost ip6-localhost ip6-loopback\nfe00::0 ip6-localnet", "completion": "fe00::0 ip6-mcastprefix\nfe00::1 ip6-allnodes\nfe00::2 ip6-allrouters\n10.244.1.4 pod01\n2001:db8::4 pod01", "language": "text", "type": "code-example", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that does not explicitly define .spec.ipFamilyPolicy. 
Kubernetes will assign a cluster IP for the Service from the first configured service-cluster-ip-range and set the .s", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20464"} {"prompt": "Explain this text code:\n\n# Kubernetes-managed hosts file.\n127.0.0.1 localhost\n::1 localhost ip6-localhost ip6-loopback\nfe00::0 ip6-localnet\nfe00::0 ip6-mcastprefix\nfe00::1 ip6-allnodes\nfe00::2 ip6-allrouters\n10.244.1.4 pod01\n2001:db8::4 pod01", "completion": "This text code Validate IPv4/IPv6 dual-stack | Create the following Service that does not explicitly define .spec.ipFamilyPolicy. Kubernetes will assign a cluster IP for the Service from the first configured service-cluster-ip-range and set the .s", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that does not explicitly define .spec.ipFamilyPolicy. Kubernetes will assign a cluster IP for the Service from the first configured service-cluster-ip-range and set the .s", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20465"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20466"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Use kubectl to 
view the YAML for the Service.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20467"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: default\nspec:\n clusterIP: 10.0.217.164\n clusterIPs:\n - 10.0.217.164\n ipFamilies:\n - IPv4", "completion": " ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 9376\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines IPv6 as the first array element in .spec.ipFamilies. Kubernetes will assign a cluster IP for the Service from the IPv6 range configured service-clu", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20468"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: default\nspec:\n clusterIP: 10.0.217.164\n clusterIPs:\n - 10.0.217.164\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 9376\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines IPv6 as the first array element in .spec.ipFamilies. 
Kubernetes will assign a cluster IP for the Service from the IPv6 range configured service-clu", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines IPv6 as the first array element in .spec.ipFamilies. Kubernetes will assign a cluster IP for the Service from the IPv6 range configured service-clu", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20469"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:", "completion": " ipFamilies:\n - IPv6\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20470"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilies:\n - IPv6\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20471"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 2001:db8:fd00::5118\n clusterIPs:\n - 2001:db8:fd00::5118\n ipFamilies:", "completion": " - IPv6\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n 
protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines PreferDualStack in .spec.ipFamilyPolicy. Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select th", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20472"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 2001:db8:fd00::5118\n clusterIPs:\n - 2001:db8:fd00::5118\n ipFamilies:\n - IPv6\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines PreferDualStack in .spec.ipFamilyPolicy. Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select th", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select th", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20473"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20474"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20475"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-service ClusterIP 10.0.216.242 <none> 80/TCP 5s", "completion": "This text code Validate IPv4/IPv6 dual-stack | Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using kubectl describe. 
You may then validate access to the service via the IPs and ports.", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using kubectl describe. You may then validate access to the service via the IPs and ports.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20476"} {"prompt": "Explain this bash code:\n\nkubectl describe svc -l app.kubernetes.io/name=MyApp", "completion": "This bash code Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20477"} {"prompt": "Complete this text code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=MyApp\nAnnotations: <none>\nSelector: app.kubernetes.io/name=MyApp\nType: ClusterIP\nIP Family Policy: PreferDualStack", "completion": "IP Families: IPv4,IPv6\nIP: 10.0.216.242\nIPs: 10.0.216.242,2001:db8:fd00::af55\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nEndpoints: <none>\nSession Affinity: None\nEvents: <none>", "language": "text", "type": "code-example", "context": "Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "source": 
"https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20478"} {"prompt": "Explain this text code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=MyApp\nAnnotations: <none>\nSelector: app.kubernetes.io/name=MyApp\nType: ClusterIP\nIP Family Policy: PreferDualStack\nIP Families: IPv4,IPv6\nIP: 10.0.216.242\nIPs: 10.0.216.242,2001:db8:fd00::af55\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nEndpoints: <none>\nSession Affinity: None\nEvents: <none>", "completion": "This text code Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20479"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20480"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n type: LoadBalancer\n selector:\n 
app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Check the Service:", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20481"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-service LoadBalancer 2001:db8:fd00::7ebc 2603:1030:805::5 80:30790/TCP 35s", "completion": "This text code Validate IPv4/IPv6 dual-stack |", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack |", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20482"} {"prompt": "Explain this shell code:\n\nkubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .spec.podCIDRs}}{{printf \"%s\\n\" .}}{{end}}'", "completion": "This shell code Validate IPv4/IPv6 dual-stack | There should be one IPv4 block and one IPv6 block allocated.", "language": "shell", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | There should be one IPv4 block and one IPv6 block allocated.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20483"} {"prompt": "Explain this shell code:\n\nkubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf \"%s: %s\\n\" .type .address}}{{end}}'", "completion": "This shell code Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "language": "shell", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. 
Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20484"} {"prompt": "Explain this text code:\n\nHostname: k8s-linuxpool1-34450317-0\nInternalIP: 10.0.0.5\nInternalIP: 2001:db8:10::5", "completion": "This text code Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20485"} {"prompt": "Explain this shell code:\n\nkubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf \"%s\\n\" .ip}}{{end}}'", "completion": "This shell code Validate IPv4/IPv6 dual-stack | You can also validate Pod IPs using the Downward API via the status.podIPs fieldPath. The following snippet demonstrates how you can expose the Pod IPs via an environment variable called MY_POD_IPS wi", "language": "shell", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | You can also validate Pod IPs using the Downward API via the status.podIPs fieldPath. 
The following snippet demonstrates how you can expose the Pod IPs via an environment variable called MY_POD_IPS wi", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20486"} {"prompt": "Complete this yaml code:\n\nenv:\n - name: MY_POD_IPS", "completion": " valueFrom:\n fieldRef:\n fieldPath: status.podIPs", "language": "yaml", "type": "kubernetes-manifest", "context": "Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20487"} {"prompt": "Explain this yaml code:\n\nenv:\n - name: MY_POD_IPS\n valueFrom:\n fieldRef:\n fieldPath: status.podIPs", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. 
The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20488"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-service ClusterIP 10.0.216.242 <none> 80/TCP 5s", "completion": "This text code Validate IPv4/IPv6 dual-stack | Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using kubectl describe. 
You may then validate access to the service via the IPs and ports.", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using kubectl describe. You may then validate access to the service via the IPs and ports.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20501"} {"prompt": "Explain this shell code:\n\nkubectl describe svc -l app.kubernetes.io/name=MyApp", "completion": "This shell code Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "language": "shell", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20502"} {"prompt": "Complete this text code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=MyApp\nAnnotations: <none>\nSelector: app.kubernetes.io/name=MyApp\nType: ClusterIP\nIP Family Policy: PreferDualStack", "completion": "IP Families: IPv4,IPv6\nIP: 10.0.216.242\nIPs: 10.0.216.242,2001:db8:fd00::af55\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nEndpoints: <none>\nSession Affinity: None\nEvents: <none>", "language": "text", "type": "code-example", "context": "Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "source": 
"https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20503"} {"prompt": "Explain this text code:\n\nName: my-service\nNamespace: default\nLabels: app.kubernetes.io/name=MyApp\nAnnotations: <none>\nSelector: app.kubernetes.io/name=MyApp\nType: ClusterIP\nIP Family Policy: PreferDualStack\nIP Families: IPv4,IPv6\nIP: 10.0.216.242\nIPs: 10.0.216.242,2001:db8:fd00::af55\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nEndpoints: <none>\nSession Affinity: None\nEvents: <none>", "completion": "This text code Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20504"} {"prompt": "Explain this text code:\n\nservice/networking/dual-stack-prefer-ipv6-lb-svc.yaml", "completion": "This text code Validate IPv4/IPv6 dual-stack | Check the Service:", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20505"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": 
"Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20506"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Check the Service:", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20507"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-service LoadBalancer 2001:db8:fd00::7ebc 2603:1030:805::5 80:30790/TCP 35s", "completion": "This text code Validate IPv4/IPv6 dual-stack |", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack |", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20508"} {"prompt": "Explain this bash code:\n\nkubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .spec.podCIDRs}}{{printf \"%s\\n\" .}}{{end}}'", "completion": "This bash code Validate IPv4/IPv6 dual-stack | There should be one IPv4 block and one IPv6 block allocated.", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | There should be one IPv4 block and one IPv6 block allocated.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20509"} {"prompt": "Explain this bash code:\n\nkubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf \"%s: %s\\n\" .type 
.address}}{{end}}'", "completion": "This bash code Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20510"} {"prompt": "Explain this bash code:\n\nkubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf \"%s\\n\" .ip}}{{end}}'", "completion": "This bash code Validate IPv4/IPv6 dual-stack | You can also validate Pod IPs using the Downward API via the status.podIPs fieldPath. The following snippet demonstrates how you can expose the Pod IPs via an environment variable called MY_POD_IPS wi", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | You can also validate Pod IPs using the Downward API via the status.podIPs fieldPath. The following snippet demonstrates how you can expose the Pod IPs via an environment variable called MY_POD_IPS wi", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20511"} {"prompt": "Complete this text code:\n\nenv:\n - name: MY_POD_IPS", "completion": " valueFrom:\n fieldRef:\n fieldPath: status.podIPs", "language": "text", "type": "code-example", "context": "Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. 
The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20512"} {"prompt": "Explain this text code:\n\nenv:\n - name: MY_POD_IPS\n valueFrom:\n fieldRef:\n fieldPath: status.podIPs", "completion": "This text code Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "language": "text", "type": "code-example_explanation", "context": "Validate IPv4/IPv6 dual-stack | The following command prints the value of the MY_POD_IPS environment variable from within a container. The value is a comma separated list that corresponds to the Pod's IPv4 and IPv6 addresses.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20513"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:", "completion": " app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20514"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-default-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "language": "yaml", "type": 
"service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20515"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20516"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20517"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: default\nspec:\n clusterIP: 10.0.217.164\n clusterIPs:\n - 10.0.217.164\n ipFamilies:\n - IPv4", "completion": " ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 9376\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines IPv6 as the first array element in .spec.ipFamilies. 
Kubernetes will assign a cluster IP for the Service from the IPv6 range configured service-clu", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20518"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n namespace: default\nspec:\n clusterIP: 10.0.217.164\n clusterIPs:\n - 10.0.217.164\n ipFamilies:\n - IPv4\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 9376\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines IPv6 as the first array element in .spec.ipFamilies. Kubernetes will assign a cluster IP for the Service from the IPv6 range configured service-clu", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines IPv6 as the first array element in .spec.ipFamilies. 
Kubernetes will assign a cluster IP for the Service from the IPv6 range configured service-clu", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20519"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-ipfamilies-ipv6.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilies:\n - IPv6\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20520"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-ipfamilies-ipv6.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilies:\n - IPv6\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20521"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:", "completion": " ipFamilies:\n - IPv6\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": 
"k8s_00656_code_completion_20522"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilies:\n - IPv6\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Use kubectl to view the YAML for the Service.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20523"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 2001:db8:fd00::5118\n clusterIPs:\n - 2001:db8:fd00::5118\n ipFamilies:", "completion": " - IPv6\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select th", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20524"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app.kubernetes.io/name: MyApp\n name: my-service\nspec:\n clusterIP: 2001:db8:fd00::5118\n clusterIPs:\n - 2001:db8:fd00::5118\n ipFamilies:\n - IPv6\n ipFamilyPolicy: SingleStack\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app.kubernetes.io/name: MyApp\n sessionAffinity: None\n type: ClusterIP\nstatus:\n loadBalancer: {}", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines PreferDualStack in .spec.ipFamilyPolicy. Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select th", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Create the following Service that explicitly defines PreferDualStack in .spec.ipFamilyPolicy. 
Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select th", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20525"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20526"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-preferred-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20527"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp", "completion": "spec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | The 
kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20528"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20529"} {"prompt": "Explain this bash code:\n\nkubectl describe svc -l app.kubernetes.io/name=MyApp", "completion": "This bash code Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "language": "bash", "type": "kubectl-command_explanation", "context": "Validate IPv4/IPv6 dual-stack | If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .sp", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20530"} {"prompt": "Complete this yaml code:\n\nservice/networking/dual-stack-prefer-ipv6-lb-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:", "completion": " ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n type: LoadBalancer\n 
selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20531"} {"prompt": "Explain this yaml code:\n\nservice/networking/dual-stack-prefer-ipv6-lb-svc.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Check the Service:", "language": "yaml", "type": "service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20532"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack", "completion": " ipFamilies:\n - IPv6\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "language": "yaml", "type": "service-definition", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_completion_20533"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-service\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n ipFamilyPolicy: PreferDualStack\n ipFamilies:\n - IPv6\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: MyApp\n ports:\n - protocol: TCP\n port: 80", "completion": "This yaml code Validate IPv4/IPv6 dual-stack | Check the Service:", "language": "yaml", "type": 
"service-definition_explanation", "context": "Validate IPv4/IPv6 dual-stack | Check the Service:", "source": "https://kubernetes.io/docs/tasks/network/validate-dual-stack/", "id": "k8s_00656_code_explain_20534"} {"prompt": "Explain this bash code:\n\nkubectl get flowschemas -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"\nkubectl get prioritylevelconfigurations -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"", "completion": "This bash code Flow control | to get a mapping of UIDs to names for both FlowSchemas and PriorityLevelConfigurations.", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | to get a mapping of UIDs to names for both FlowSchemas and PriorityLevelConfigurations.", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20535"} {"prompt": "Explain this bash code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20536"} {"prompt": "Complete this text code:\n\nPriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests, DispatchedRequests, RejectedRequests, TimedoutRequests, CancelledRequests\ncatch-all, 0, true, false, 0, 0, 1, 0, 0, 0\nexempt, 0, true, false, 0, 0, 0, 0, 0, 0\nglobal-default, 0, true, false, 0, 0, 46, 0, 0, 0", "completion": "leader-election, 0, true, false, 0, 0, 4, 0, 0, 0\nnode-high, 0, true, false, 0, 0, 34, 0, 0, 0\nsystem, 0, true, false, 0, 0, 48, 0, 0, 0\nworkload-high, 0, true, false, 0, 0, 500, 0, 0, 0\nworkload-low, 0, true, false, 0, 0, 0, 0, 0, 0", "language": "text", "type": "code-example", "context": "Flow control | 
Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20537"} {"prompt": "Explain this text code:\n\nPriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests, DispatchedRequests, RejectedRequests, TimedoutRequests, CancelledRequests\ncatch-all, 0, true, false, 0, 0, 1, 0, 0, 0\nexempt, 0, true, false, 0, 0, 0, 0, 0, 0\nglobal-default, 0, true, false, 0, 0, 46, 0, 0, 0\nleader-election, 0, true, false, 0, 0, 4, 0, 0, 0\nnode-high, 0, true, false, 0, 0, 34, 0, 0, 0\nsystem, 0, true, false, 0, 0, 48, 0, 0, 0\nworkload-high, 0, true, false, 0, 0, 500, 0, 0, 0\nworkload-low, 0, true, false, 0, 0, 0, 0, 0, 0", "completion": "This text code Flow control | Explanation for selected column names:", "language": "text", "type": "code-example_explanation", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20538"} {"prompt": "Explain this bash code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_queues", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20539"} {"prompt": "Complete this text code:\n\nPriorityLevelName, Index, PendingRequests, ExecutingRequests, SeatsInUse, NextDispatchR, InitialSeatsSum, MaxSeatsSum, TotalWorkSum\nworkload-low, 14, 27, 0, 0, 77.64342019ss, 270, 270, 0.81000000ss\nworkload-low, 74, 26, 0, 0, 76.95387841ss, 260, 260, 0.78000000ss\n...", "completion": "leader-election, 0, 0, 0, 0, 5088.87053833ss, 0, 0, 0.00000000ss\nleader-election, 1, 0, 0, 0, 0.00000000ss, 0, 0, 0.00000000ss\n...\nworkload-high, 0, 0, 
0, 0, 0.00000000ss, 0, 0, 0.00000000ss\nworkload-high, 1, 0, 0, 0, 1119.44936475ss, 0, 0, 0.00000000ss", "language": "text", "type": "code-example", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20540"} {"prompt": "Explain this text code:\n\nPriorityLevelName, Index, PendingRequests, ExecutingRequests, SeatsInUse, NextDispatchR, InitialSeatsSum, MaxSeatsSum, TotalWorkSum\nworkload-low, 14, 27, 0, 0, 77.64342019ss, 270, 270, 0.81000000ss\nworkload-low, 74, 26, 0, 0, 76.95387841ss, 260, 260, 0.78000000ss\n...\nleader-election, 0, 0, 0, 0, 5088.87053833ss, 0, 0, 0.00000000ss\nleader-election, 1, 0, 0, 0, 0.00000000ss, 0, 0, 0.00000000ss\n...\nworkload-high, 0, 0, 0, 0, 0.00000000ss, 0, 0, 0.00000000ss\nworkload-high, 1, 0, 0, 0, 1119.44936475ss, 0, 0, 0.00000000ss", "completion": "This text code Flow control | Explanation for selected column names:", "language": "text", "type": "code-example_explanation", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20541"} {"prompt": "Explain this bash code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_requests", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20542"} {"prompt": "Complete this text code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z", "completion": 
"workload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.386556253Z, 10, 0, 0s, 0001-01-01T00:00:00Z\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.487092539Z, 10, 0, 0s, 0001-01-01T00:00:00Z", "language": "text", "type": "code-example", "context": "Flow control | You can get a more detailed listing with a command like this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20543"} {"prompt": "Explain this text code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z\nworkload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.386556253Z, 10, 0, 0s, 0001-01-01T00:00:00Z\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.487092539Z, 10, 0, 0s, 0001-01-01T00:00:00Z", "completion": "This text code Flow control | You can get a more detailed listing with a command like this:", "language": "text", "type": "code-example_explanation", "context": "Flow control | You can get a more detailed listing with a command like this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20544"} {"prompt": "Explain this bash code:\n\nkubectl get --raw '/debug/api_priority_and_fairness/dump_requests?includeRequestDetails=1'", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20545"} {"prompt": "Complete this 
text code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime, UserName, Verb, APIPath, Namespace, Name, APIVersion, Resource, SubResource\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z, system:serviceaccount:system:admin, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,", "completion": "workload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:13:08.986534842Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:13:09.086476021Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,", "language": "text", "type": "code-example", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20546"} {"prompt": "Explain this text code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime, UserName, Verb, APIPath, Namespace, Name, APIVersion, Resource, SubResource\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z, system:serviceaccount:system:admin, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,\nworkload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:13:08.986534842Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, 
configmaps,\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:13:09.086476021Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,", "completion": "This text code Flow control | Explanation for selected column names:", "language": "text", "type": "code-example_explanation", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20547"} {"prompt": "Explain this shell code:\n\nkubectl get flowschemas -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"\nkubectl get prioritylevelconfigurations -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"", "completion": "This shell code Flow control | to get a mapping of UIDs to names for both FlowSchemas and PriorityLevelConfigurations.", "language": "shell", "type": "code-example_explanation", "context": "Flow control | to get a mapping of UIDs to names for both FlowSchemas and PriorityLevelConfigurations.", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20548"} {"prompt": "Explain this text code:\n\n/debug/api_priority_and_fairness/dump_priority_levels", "completion": "This text code Flow control | The output will be in CSV and similar to this:", "language": "text", "type": "code-example_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20549"} {"prompt": "Explain this shell code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels", "completion": "This shell code Flow control | The output will be in CSV and similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Flow control | The 
output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20550"} {"prompt": "Complete this none code:\n\nPriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests, DispatchedRequests, RejectedRequests, TimedoutRequests, CancelledRequests\ncatch-all, 0, true, false, 0, 0, 1, 0, 0, 0\nexempt, 0, true, false, 0, 0, 0, 0, 0, 0\nglobal-default, 0, true, false, 0, 0, 46, 0, 0, 0", "completion": "leader-election, 0, true, false, 0, 0, 4, 0, 0, 0\nnode-high, 0, true, false, 0, 0, 34, 0, 0, 0\nsystem, 0, true, false, 0, 0, 48, 0, 0, 0\nworkload-high, 0, true, false, 0, 0, 500, 0, 0, 0\nworkload-low, 0, true, false, 0, 0, 0, 0, 0, 0", "language": "none", "type": "code-example", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20551"} {"prompt": "Explain this none code:\n\nPriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests, DispatchedRequests, RejectedRequests, TimedoutRequests, CancelledRequests\ncatch-all, 0, true, false, 0, 0, 1, 0, 0, 0\nexempt, 0, true, false, 0, 0, 0, 0, 0, 0\nglobal-default, 0, true, false, 0, 0, 46, 0, 0, 0\nleader-election, 0, true, false, 0, 0, 4, 0, 0, 0\nnode-high, 0, true, false, 0, 0, 34, 0, 0, 0\nsystem, 0, true, false, 0, 0, 48, 0, 0, 0\nworkload-high, 0, true, false, 0, 0, 500, 0, 0, 0\nworkload-low, 0, true, false, 0, 0, 0, 0, 0, 0", "completion": "This none code Flow control | Explanation for selected column names:", "language": "none", "type": "code-example_explanation", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20552"} {"prompt": "Explain this shell code:\n\nkubectl get --raw 
/debug/api_priority_and_fairness/dump_queues", "completion": "This shell code Flow control | The output will be in CSV and similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20553"} {"prompt": "Complete this none code:\n\nPriorityLevelName, Index, PendingRequests, ExecutingRequests, SeatsInUse, NextDispatchR, InitialSeatsSum, MaxSeatsSum, TotalWorkSum\nworkload-low, 14, 27, 0, 0, 77.64342019ss, 270, 270, 0.81000000ss\nworkload-low, 74, 26, 0, 0, 76.95387841ss, 260, 260, 0.78000000ss\n...", "completion": "leader-election, 0, 0, 0, 0, 5088.87053833ss, 0, 0, 0.00000000ss\nleader-election, 1, 0, 0, 0, 0.00000000ss, 0, 0, 0.00000000ss\n...\nworkload-high, 0, 0, 0, 0, 0.00000000ss, 0, 0, 0.00000000ss\nworkload-high, 1, 0, 0, 0, 1119.44936475ss, 0, 0, 0.00000000ss", "language": "none", "type": "code-example", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20554"} {"prompt": "Explain this none code:\n\nPriorityLevelName, Index, PendingRequests, ExecutingRequests, SeatsInUse, NextDispatchR, InitialSeatsSum, MaxSeatsSum, TotalWorkSum\nworkload-low, 14, 27, 0, 0, 77.64342019ss, 270, 270, 0.81000000ss\nworkload-low, 74, 26, 0, 0, 76.95387841ss, 260, 260, 0.78000000ss\n...\nleader-election, 0, 0, 0, 0, 5088.87053833ss, 0, 0, 0.00000000ss\nleader-election, 1, 0, 0, 0, 0.00000000ss, 0, 0, 0.00000000ss\n...\nworkload-high, 0, 0, 0, 0, 0.00000000ss, 0, 0, 0.00000000ss\nworkload-high, 1, 0, 0, 0, 1119.44936475ss, 0, 0, 0.00000000ss", "completion": "This none code Flow control | Explanation for selected column names:", "language": "none", "type": "code-example_explanation", "context": "Flow control | Explanation for selected column names:", 
"source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20555"} {"prompt": "Explain this shell code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_requests", "completion": "This shell code Flow control | The output will be in CSV and similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20556"} {"prompt": "Complete this none code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z", "completion": "workload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.386556253Z, 10, 0, 0s, 0001-01-01T00:00:00Z\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.487092539Z, 10, 0, 0s, 0001-01-01T00:00:00Z", "language": "none", "type": "code-example", "context": "Flow control | You can get a more detailed listing with a command like this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20557"} {"prompt": "Explain this none code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z\nworkload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.386556253Z, 10, 0, 0s, 0001-01-01T00:00:00Z\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:12:51.487092539Z, 10, 0, 0s, 0001-01-01T00:00:00Z", 
"completion": "This none code Flow control | You can get a more detailed listing with a command like this:", "language": "none", "type": "code-example_explanation", "context": "Flow control | You can get a more detailed listing with a command like this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20558"} {"prompt": "Explain this shell code:\n\nkubectl get --raw '/debug/api_priority_and_fairness/dump_requests?includeRequestDetails=1'", "completion": "This shell code Flow control | The output will be in CSV and similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20559"} {"prompt": "Complete this none code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime, UserName, Verb, APIPath, Namespace, Name, APIVersion, Resource, SubResource\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z, system:serviceaccount:system:admin, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,", "completion": "workload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:13:08.986534842Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:13:09.086476021Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,", "language": "none", "type": "code-example", "context": "Flow control | Explanation for selected column names:", 
"source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_completion_20560"} {"prompt": "Explain this none code:\n\nPriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, InitialSeats, FinalSeats, AdditionalLatency, StartTime, UserName, Verb, APIPath, Namespace, Name, APIVersion, Resource, SubResource\nexempt, exempt, -1, -1, , 2023-07-15T04:51:25.596404345Z, 1, 0, 0s, 2023-07-15T04:51:25.596404345Z, system:serviceaccount:system:admin, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,\nworkload-low, service-accounts, 14, 0, system:serviceaccount:default:loadtest, 2023-07-18T00:13:08.986534842Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,\nworkload-low, service-accounts, 14, 1, system:serviceaccount:default:loadtest, 2023-07-18T00:13:09.086476021Z, 10, 0, 0s, 0001-01-01T00:00:00Z, system:serviceaccount:default:loadtest, list, /api/v1/namespaces/kube-stress/configmaps, kube-stress, , v1, configmaps,", "completion": "This none code Flow control | Explanation for selected column names:", "language": "none", "type": "code-example_explanation", "context": "Flow control | Explanation for selected column names:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20561"} {"prompt": "Explain this bash code:\n\nkubectl get flowschemas -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"\nkubectl get prioritylevelconfigurations -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"", "completion": "This bash code Flow control | to get a mapping of UIDs to names for both FlowSchemas and PriorityLevelConfigurations.", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | to get a mapping of UIDs to names for both FlowSchemas and 
PriorityLevelConfigurations.", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20562"} {"prompt": "Explain this bash code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20563"} {"prompt": "Explain this bash code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_queues", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20564"} {"prompt": "Explain this bash code:\n\nkubectl get --raw /debug/api_priority_and_fairness/dump_requests", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20565"} {"prompt": "Explain this bash code:\n\nkubectl get --raw '/debug/api_priority_and_fairness/dump_requests?includeRequestDetails=1'", "completion": "This bash code Flow control | The output will be in CSV and similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Flow control | The output will be in CSV and similar to this:", "source": "https://kubernetes.io/docs/reference/debug-cluster/flow-control/", "id": "k8s_00657_code_explain_20566"} {"prompt": 
"Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20567"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1 - Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20568"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This bash code 1 - Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "1 - Pods | Pods are generally not created directly and are created using workload resources. 
See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20569"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n  name: hello\nspec:\n  template:\n    # This is the pod template", "completion": "    spec:\n      containers:\n      - name: hello\n        image: busybox:1.28\n        command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n      restartPolicy: OnFailure\n    # The pod template ends here", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20570"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n  name: hello\nspec:\n  template:\n    # This is the pod template\n    spec:\n      containers:\n      - name: hello\n        image: busybox:1.28\n        command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n      restartPolicy: OnFailure\n    # The pod template ends here", "completion": "This yaml code 1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. 
If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20571"} {"prompt": "Explain this text code:\n\nNAMESPACE               NAME              READY   STATUS             RESTARTS   AGE\nalessandras-namespace   alessandras-pod   0/1     CrashLoopBackOff   200        2d9h", "completion": "This text code 1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "language": "text", "type": "code-example_explanation", "context": "1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20572"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: on-failure-pod\nspec:\n  restartPolicy: OnFailure\n  containers:", "completion": "  - name: try-once-container # This container will run only once because the restartPolicy is Never.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n    restartPolicy: Never\n  - name: on-failure-container # This container will be restarted on failure.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20573"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: on-failure-pod\nspec:\n  restartPolicy: OnFailure\n  containers:\n  - name: try-once-container # This container will run only once because the restartPolicy is Never.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n    restartPolicy: Never\n  - name: on-failure-container # This container will be restarted on failure.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code 1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20574"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: fail-pod-if-init-fails\nspec:\n  restartPolicy: Always\n  initContainers:", "completion": "  - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n    restartPolicy: Never\n  containers:\n  - name: main-container # This container will always be restarted once initialization succeeds.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20575"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: fail-pod-if-init-fails\nspec:\n  restartPolicy: Always\n  initContainers:\n  - name: init-once # This init container will only try once. If it fails, the pod will fail.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n    restartPolicy: Never\n  containers:\n  - name: main-container # This container will always be restarted once initialization succeeds.\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code 1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. 
This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20576"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: restart-on-exit-codes\nspec:\n  restartPolicy: Never\n  containers:\n  - name: restart-on-exit-codes", "completion": "    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'sleep 60 && exit 0']\n    restartPolicy: Never # Container restart policy must be specified if rules are specified\n    restartPolicyRules: # Only restart the container if it exits with code 42\n    - action: Restart\n      exitCodes:\n        operator: In\n        values: [42]", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20577"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: restart-on-exit-codes\nspec:\n  restartPolicy: Never\n  containers:\n  - name: restart-on-exit-codes\n    image: docker.io/library/busybox:1.28\n    command: ['sh', '-c', 'sleep 60 && exit 0']\n    restartPolicy: Never # Container restart policy must be specified if rules are specified\n    restartPolicyRules: # Only restart the container if it exits with code 42\n    - action: Restart\n      exitCodes:\n        operator: In\n        values: [42]", "completion": "This yaml code 1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20578"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20579"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code 1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20580"} {"prompt": 
"Complete this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20581"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code 1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20582"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": "pod-definition", "context": "1.1 
- Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20583"} {"prompt": "Explain this yaml code:\n\nkind: Pod\n...\nspec:\n  readinessGates:\n  - conditionType: \"www.example.com/feature-1\"\nstatus:\n  conditions:\n  - type: Ready # a built-in PodCondition\n    status: \"False\"\n    lastProbeTime: null\n    lastTransitionTime: 2018-01-01T00:00:00Z\n  - type: \"www.example.com/feature-1\" # an extra PodCondition\n    status: \"False\"\n    lastProbeTime: null\n    lastTransitionTime: 2018-01-01T00:00:00Z\n  containerStatuses:\n  - containerID: docker://abcd...\n    ready: true\n...", "completion": "This yaml code 1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20584"} {"prompt": "Complete this text code:\n\nspec:\n  os:\n    name: linux\n  containers:", "completion": "  - name: my-container\n    image: container-image:latest\n    lifecycle:\n      stopSignal: SIGUSR1", "language": "text", "type": "code-example", "context": "1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20585"} {"prompt": "Explain this text code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This text code 1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "language": "text", "type": "code-example_explanation", "context": "1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20586"} {"prompt": "Explain this text code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This text code 1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20587"} {"prompt": "Explain this text code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'", "completion": "This text code 1.2 - Init Containers | Wait for some time before starting the app container with a command like", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | Wait for some time before starting the app 
container with a command like", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20588"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20589"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code 1.2 - Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20590"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 0/1 Init:0/2 0 6m", "completion": "This text code 1.2 - Init Containers | or for more details:", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | or for more details:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20591"} {"prompt": "Complete this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False", "completion": "[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 
172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "language": "text", "type": "code-example", "context": "1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20592"} {"prompt": "Explain this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "completion": "This text code 1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20593"} {"prompt": "Explain this bash code:\n\nkubectl logs myapp-pod -c init-myservice # Inspect the first init container\nkubectl logs myapp-pod -c init-mydb # Inspect the second init container", "completion": "This bash code 1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20594"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20595"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "completion": "This yaml code 1.2 - Init Containers | To create the mydb and myservice services:", "language": "yaml", "type": "service-definition_explanation", "context": "1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20596"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS 
AGE\nmyapp-pod 1/1 Running 0 9m", "completion": "This text code 1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20597"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20598"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20599"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20600"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20601"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20602"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20603"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: 
busybox", "language": "yaml", "type": "pod-definition", "context": "1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20604"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20605"} {"prompt": "Complete this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)", "completion": "# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "language": "text", "type": "code-example", "context": "1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20606"} {"prompt": "Explain this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)\n# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "completion": "This text code 1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "language": "text", "type": "code-example_explanation", "context": "1.8 - User Namespaces 
| Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20607"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20608"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code 1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. 
Run", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20609"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20610"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20611"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This bash code 2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20612"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 0/3 0 0 1s", "completion": "This text code 2.1 - Deployments | When you inspect the Deployments in 
your cluster, the following fields are displayed:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20613"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 2.1 - Deployments | Run the kubectl get deployments again a few seconds later. The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Run the kubectl get deployments again a few seconds later. The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20614"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 18s", "completion": "This text code 2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20615"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-75675f5897 3 3 3 18s", "completion": "This text code 2.1 - Deployments | ReplicaSet output shows the following fields:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | ReplicaSet output shows the following fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": 
"k8s_00658_code_explain_20616"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "language": "text", "type": "code-example", "context": "2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20617"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "This text code 2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20618"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 2.1 - Deployments | or use the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | or use the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20619"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 2.1 - Deployments | where deployment/nginx-deployment 
indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20620"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...", "completion": "This text code 2.1 - Deployments | or", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | or", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20621"} {"prompt": "Explain this text code:\n\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 2.1 - Deployments | Get more details on your updated Deployment:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get more details on your updated Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20622"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 36s", "completion": "This text code 2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": 
"k8s_00658_code_explain_20623"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 6s\nnginx-deployment-2035384211 0 0 0 36s", "completion": "This text code 2.1 - Deployments | Running get pods should now show only the new Pods:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Running get pods should now show only the new Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20624"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s", "completion": "nginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20625"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s\nnginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "completion": "This text code 2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20626"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 
updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. 
When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20627"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "completion": "This text code 2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. 
When you updated the Deployment, it created a ne", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20628"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20629"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 1 out of 3 new replicas have been updated...", "completion": "This text code 2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. 
For more information on stuck rollouts, read more here.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20630"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s", "completion": "nginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20631"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s\nnginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "completion": "This text code 2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20632"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s", "completion": "nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20633"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s\nnginx-deployment-1564180365-jbqqo 
1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "completion": "This text code 2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20634"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 
1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "language": "text", "type": "code-example", "context": "2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20635"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica 
set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "completion": "This text code 2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20636"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20637"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE", "completion": "1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "language": "text", "type": "code-example", "context": "2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. 
You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20638"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE\n1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This text code 2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20639"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20640"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:", "completion": " Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Follow the steps given below to rollback the Deployment 
from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20641"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "completion": "This text code 2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20642"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20643"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 30m", "completion": "This text code 2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20644"} {"prompt": "Complete this text code:\n\nName: 
nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set 
nginx-deployment-595696685f to 0", "language": "text", "type": "code-example", "context": "2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20645"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set 
nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "completion": "This text code 2.1 - Deployments | You can scale a Deployment by using the following command:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20646"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20647"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20648"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 10 10 10 10 50s", "completion": "This text code 2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the 
cluster.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20649"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20650"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 5 5 0 9s\nnginx-deployment-618515232 8 8 8 1m", "completion": "This text code 2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. 
If you", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20651"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 15 18 7 8 7m", "completion": "This text code 2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20652"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 7 7 0 7m\nnginx-deployment-618515232 11 11 11 7m", "completion": "This text code 2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deplo", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. 
When you're ready to apply those changes, you resume rollouts for the Deplo", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20653"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx 3 3 3 3 1m", "completion": "This text code 2.1 - Deployments | Get the rollout status:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the rollout status:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20654"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 1m", "completion": "This text code 2.1 - Deployments | Pause by running the following command:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Pause by running the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20655"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20656"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20657"} {"prompt": "Explain this text code:\n\ndeployments \"nginx\"\nREVISION CHANGE-CAUSE\n1 <none>", "completion": "This text code 2.1 - Deployments | Get the rollout 
status to verify that the existing ReplicaSet has not changed:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the rollout status to verify that the existing ReplicaSet has not changed:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20658"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 2m", "completion": "This text code 2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20659"} {"prompt": "Explain this bash code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20660"} {"prompt": "Explain this text code:\n\ndeployment.apps/nginx-deployment resource requirements updated", "completion": "This text code 2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20661"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s", "completion": "nginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20662"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s\nnginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "completion": "This text code 2.1 - Deployments | Get the status of the latest rollout:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20663"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 28s", "completion": "This text code 2.1 - Deployments | A Deployment enters various states during its lifecycle. 
It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20664"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 of 3 updated replicas are available...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20665"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20666"} {"prompt": "Complete this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "language": "text", "type": "code-example", "context": "2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20667"} {"prompt": "Explain this text 
code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "completion": "This text code 2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20668"} {"prompt": "Complete this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.", "completion": " reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20669"} {"prompt": "Explain this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 
2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.\n reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "completion": "This text code 2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20670"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "language": "text", "type": "code-example", "context": "2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. 
If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20671"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "completion": "This text code 2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20672"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason", "completion": " ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "language": "text", "type": "code-example", "context": "2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20673"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "completion": "This text code 2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. 
type: Progressing with", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20674"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\nerror: deployment \"nginx\" exceeded its progress deadline", "completion": "This text code 2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20675"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20676"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n 
labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code 2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20677"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20678"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code 2.1 - 
Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20679"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20680"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code 2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to 
progress before the system reports back that the Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20681"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20682"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest 
into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20683"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code 2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20684"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nfrontend 3 3 3 6s", "completion": "This text code 2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20685"} {"prompt": "Complete this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:", "completion": " Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "language": "text", "type": "code-example", 
"context": "2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20686"} {"prompt": "Explain this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:\n Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "completion": "This text code 2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20687"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m", "completion": "frontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "language": "text", "type": "code-example", "context": "2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. 
To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20688"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m\nfrontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "completion": "This text code 2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20689"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20690"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code 2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20691"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20692"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20693"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code 2.2 - ReplicaSet | The new 
Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20694"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m", "completion": "frontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "language": "text", "type": "code-example", "context": "2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20695"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m\nfrontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "completion": "This text code 2.2 - ReplicaSet | If you create the Pods first:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20696"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code 2.2 - ReplicaSet | And then create the ReplicaSet however:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20697"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", 
"completion": "This bash code 2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20698"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s", "completion": "pod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "language": "text", "type": "code-example", "context": "2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20699"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s\npod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "completion": "This text code 2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20700"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "2.2 - ReplicaSet | You 
can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20701"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20702"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20703"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20704"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20705"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n 
targetCPUUtilizationPercentage: 50", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20706"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This bash code 2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20707"} {"prompt": "Explain this bash code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This bash code 2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. 
While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20708"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1", "completion": " minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20709"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1\n minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: 
registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 2.3 - StatefulSets | In the above example:", "language": "yaml", "type": "service-definition_explanation", "context": "2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20710"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:", "completion": " name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... other spec fields ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20711"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... 
other spec fields ...", "completion": "This yaml code 2.3 - StatefulSets | You can revert to a previous configuration using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20712"} {"prompt": "Complete this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "text", "type": "code-example", "context": "2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20713"} {"prompt": "Explain this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This text code 2.3 - StatefulSets | This will:", "language": "text", "type": "code-example_explanation", "context": "2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20714"} {"prompt": "Complete this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "text", "type": "code-example", "context": "2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20715"} {"prompt": "Explain this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific revision\nkubectl get 
controllerrevision/webapp-3 -o yaml", "completion": "This text code 2.3 - StatefulSets | Regularly check revisions with:", "language": "text", "type": "code-example_explanation", "context": "2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20716"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:", "completion": " persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20717"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:\n persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "completion": "This yaml code 2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. 
This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20718"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20719"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not 
run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20720"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code 2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20721"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20722"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code 2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": "code-example_explanation", "context": "2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20723"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20724"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - 
name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20725"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This bash code 2.5 - Jobs | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20726"} {"prompt": "Complete this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "text", "type": "code-example", "context": "2.5 - Jobs | 
apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20727"} {"prompt": "Explain this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This text code 2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... 
creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20728"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20729"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... 
\n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This yaml code 2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20730"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This text code 2.5 - Jobs | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20731"} {"prompt": "Explain this text 
code:\n\n3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390
4780275901", "completion": "This text code 2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20732"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20733"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20734"} {"prompt": "Explain this bash code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This bash code 2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20735"} {"prompt": "Complete this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "text", "type": "code-example", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20736"} {"prompt": "Explain this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This text code 2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20737"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20738"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20739"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20740"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20741"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20742"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20743"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20744"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20745"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20746"} {"prompt": "Explain this yaml 
code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code 2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20747"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This bash code 2.5 - Jobs | Resume a suspended Job:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20748"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This bash code 2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20749"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the 
lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20750"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code 2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20751"} {"prompt": "Complete this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------", "completion": " Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "language": "text", "type": "code-example", "context": "2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. 
In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20752"} {"prompt": "Explain this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "completion": "This text code 2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20753"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20754"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20755"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. 
Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20756"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20757"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20758"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code 2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20759"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20760"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code 2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20761"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - 
name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20762"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20763"} {"prompt": "Complete this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)", "completion": "# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "language": "text", "type": "code-example", "context": "2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20764"} {"prompt": "Explain this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ 
┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)\n# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "completion": "This text code 2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "language": "text", "type": "code-example_explanation", "context": "2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20765"} {"prompt": "Explain this text code:\n\nCannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.", "completion": "This text code 2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "language": "text", "type": "code-example_explanation", "context": "2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20766"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20767"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20768"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This bash code 2.8 - ReplicationController | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20769"} {"prompt": "Complete this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:", "completion": " Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 
{replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "language": "text", "type": "code-example", "context": "2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20770"} {"prompt": "Explain this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "completion": "This text code 2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "language": "text", "type": "code-example_explanation", "context": "2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. 
A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20771"} {"prompt": "Explain this text code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This text code 2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "text", "type": "code-example_explanation", "context": "2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20772"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This text code 2.8 - ReplicationController | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20773"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": 
"k8s_00658_code_completion_20774"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20775"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20776"} {"prompt": "Explain this text code:\n\nservice/my-nginx-svc created\ndeployment.apps/my-nginx created", "completion": "This text code 4 - Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20777"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20778"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20779"} {"prompt": "Explain this bash code:\n\nkubectl delete -f 
https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20780"} {"prompt": "Explain this text code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This text code 4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20781"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This bash code 4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20782"} {"prompt": "Explain this text code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This text code 4 - Managing Workloads | Because 
kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20783"} {"prompt": "Explain this bash code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This bash code 4 - Managing Workloads | The output might be similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20784"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s", "completion": "This text code 4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). Then you grep o", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). 
Then you grep o", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20785"} {"prompt": "Complete this text code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml", "completion": "├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "language": "text", "type": "code-example", "context": "4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20786"} {"prompt": "Explain this text code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml\n├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "completion": "This text code 4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. 
If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20787"} {"prompt": "Explain this text code:\n\nerror: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)", "completion": "This text code 4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20788"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This bash code 4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20789"} {"prompt": "Explain this text code:\n\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This text code 4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | The --recursive argument works with 
any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20790"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This bash code 4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20791"} {"prompt": "Complete this text code:\n\nnamespace/development created\nnamespace/staging created", "completion": "configmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "language": "text", "type": "code-example", "context": "4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20792"} {"prompt": "Explain this text code:\n\nnamespace/development created\nnamespace/staging created\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This text code 4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": 
"k8s_00658_code_explain_20793"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This bash code 4 - Managing Workloads | Ensure that there is 1 replica:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20794"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas 1 deployments/my-nginx --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\": 1}}'", "completion": "This bash code 4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20795"} {"prompt": "Explain this bash code:\n\nkubectl patch --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This bash code 4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20796"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This bash code 4 - Managing Workloads | That's it! 
The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20797"} {"prompt": "Complete this bash code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "bash", "type": "kubectl-command", "context": "4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20798"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This bash code 4 - Managing Workloads | or", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20799"} {"prompt": "Complete this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "bash", "type": "kubectl-command", "context": "4 - Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20800"} {"prompt": "Explain this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This bash code 4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20801"} {"prompt": "Complete this text code:\n\nname: frontend\nreplicas: 3\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "language": "text", "type": "code-example", "context": "4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20802"} {"prompt": "Explain this text code:\n\nname: frontend\nreplicas: 3\n...\nlabels:\n app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "completion": "This text code 4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. 
canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20803"} {"prompt": "Complete this text code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "language": "text", "type": "code-example", "context": "4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20804"} {"prompt": "Explain this text code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:\n app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "completion": "This text code 4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. 
omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20805"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This bash code 4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20806"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20807"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This yaml code 4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20808"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmy-nginx-2035384211-j5fhi 1/1 Running 0 30m", "completion": "This text code 4 - Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "language": "text", "type": 
"code-example_explanation", "context": "4 - Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20809"} {"prompt": "Explain this text code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This text code 4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20810"} {"prompt": "Explain this text code:\n\nhorizontalpodautoscaler.autoscaling/my-nginx autoscaled", "completion": "This text code 4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20811"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20812"} {"prompt": "Complete this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# 
do some edit, and then save the file\n", "completion": "kubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "bash", "type": "kubectl-command", "context": "4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20813"} {"prompt": "Explain this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This bash code 4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20814"} {"prompt": "Explain this bash code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This bash code 4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20815"} {"prompt": "Explain this text code:\n\ndeployment.apps/my-nginx deleted\ndeployment.apps/my-nginx replaced", "completion": "This text code 4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20816"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20817"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1 - Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": 
"pod-definition_explanation", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20818"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This shell code 1 - Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "shell", "type": "code-example_explanation", "context": "1 - Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20819"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. 
If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20820"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "completion": "This yaml code 1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20821"} {"prompt": "Explain this text code:\n\nNAMESPACE NAME READY STATUS RESTARTS AGE\n alessandras-namespace alessandras-pod 0/1 CrashLoopBackOff 200 2d9h", "completion": "This text code 1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.", "language": "text", "type": "code-example_explanation", "context": "1.1 - Pod Lifecycle | A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. 
You can use the flag --force to terminate a Pod by force.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20822"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:", "completion": " - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20823"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:\n - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code 1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20824"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:", "completion": " - name: init-once # This init container will only try once. If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20825"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:\n - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code 1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20826"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes", "completion": " image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20827"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "completion": "This yaml code 1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20828"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20829"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code 1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20830"} {"prompt": "Complete this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1.1 - Pod Lifecycle | If you use this feature along 
with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20831"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code 1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20832"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20833"} {"prompt": "Explain this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - 
conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "completion": "This yaml code 1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20834"} {"prompt": "Complete this yaml code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "yaml", "type": "kubernetes-manifest", "context": "1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20835"} {"prompt": "Explain this yaml code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This yaml code 1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20836"} {"prompt": "Explain this shell code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This shell code 1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "shell", "type": "code-example_explanation", "context": "1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20837"} {"prompt": "Explain this shell code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'", "completion": "This shell code 1.2 - Init Containers | Wait for some time before starting the app container with a command like", "language": "shell", "type": "code-example_explanation", "context": "1.2 - Init Containers | Wait for some time before starting the app container with a command like", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20838"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20839"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code 1.2 - Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20840"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 0/1 Init:0/2 0 6m", "completion": "This text code 1.2 - Init Containers | or for more details:", "language": "text", "type": "code-example_explanation", 
"context": "1.2 - Init Containers | or for more details:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20841"} {"prompt": "Complete this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False", "completion": "[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "language": "text", "type": "code-example", "context": "1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20842"} {"prompt": "Explain this text code:\n\nName: myapp-pod\nNamespace: default\n[...]\nLabels: app.kubernetes.io/name=MyApp\nStatus: Pending\n[...]\nInit Containers:\n init-myservice:\n[...]\n State: Running\n[...]\n init-mydb:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nContainers:\n myapp-container:\n[...]\n State: Waiting\n Reason: PodInitializing\n Ready: False\n[...]\nEvents:\n FirstSeen LastSeen Count From SubObjectPath 
Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201\n 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image \"busybox\"\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice\n 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice", "completion": "This text code 1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | To see logs for the init containers in this Pod, run:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20843"} {"prompt": "Explain this shell code:\n\nkubectl logs myapp-pod -c init-myservice # Inspect the first init container\nkubectl logs myapp-pod -c init-mydb # Inspect the second init container", "completion": "This shell code 1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "language": "shell", "type": "code-example_explanation", "context": "1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20844"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "language": 
"yaml", "type": "service-definition", "context": "1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20845"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "completion": "This yaml code 1.2 - Init Containers | To create the mydb and myservice services:", "language": "yaml", "type": "service-definition_explanation", "context": "1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20846"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nmyapp-pod 1/1 Running 0 9m", "completion": "This text code 1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | This simple example should provide some inspiration for you to create your own init containers. 
What's next contains a link to a more detailed example.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20847"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20848"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20849"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20850"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20851"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20852"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20853"} {"prompt": "Explain this text code:\n\nbusybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code 1.6 - Pod Hostname | When both setHostnameAsFQDN: true and the subdomain field is set in the Pod spec, the kubelet writes the Pod's FQDN into 
the hostname for that Pod's namespace. In this case, both hostname and hostname", "language": "text", "type": "code-example_explanation", "context": "1.6 - Pod Hostname | When both setHostnameAsFQDN: true and the subdomain field is set in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20854"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20855"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20856"} {"prompt": "Complete this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)", "completion": "# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "language": "text", "type": "code-example", 
"context": "1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20857"} {"prompt": "Explain this text code:\n\n# The format is\n# name:firstID:count of IDs\n# where\n# - firstID is 65536 (the minimum value possible)\n# - count of IDs is 110 * 65536\n# (110 is the default limit for number of pods on the node)\n\nkubelet:65536:7208960", "completion": "This text code 1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "language": "text", "type": "code-example_explanation", "context": "1.8 - User Namespaces | Starting with Kubernetes v1.33, the ID count for each of Pods can be set in KubeletConfiguration.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20858"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20859"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code 1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. 
Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20860"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.runAsNonRoot", "completion": "This text code 1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20861"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsNonRoot", "completion": "This text code 1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. 
In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20862"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.runAsUser", "completion": "This text code 1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "language": "text", "type": "code-example_explanation", "context": "1.8 - User Namespaces | When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20863"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20864"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | In this example:", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20865"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This shell code 2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20866"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 0/3 0 0 1s", "completion": "This text code 2.1 - Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | When you inspect the Deployments in your cluster, the following fields are displayed:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20867"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 2.1 - Deployments | Run the kubectl get deployments again a few seconds later. The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Run the kubectl get deployments again a few seconds later. 
The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20868"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 18s", "completion": "This text code 2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20869"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-75675f5897 3 3 3 18s", "completion": "This text code 2.1 - Deployments | ReplicaSet output shows the following fields:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | ReplicaSet output shows the following fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20870"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "language": "text", "type": "code-example", "context": "2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20871"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE LABELS\nnginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s 
app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897\nnginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897", "completion": "This text code 2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | The created ReplicaSet ensures that there are three nginx Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20872"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code 2.1 - Deployments | or use the following command:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | or use the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20873"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code 2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20874"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...", "completion": "This text code 2.1 - Deployments | or", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | or", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20875"} {"prompt": "Explain this text code:\n\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 2.1 - Deployments | Get more details on your updated Deployment:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get more details on your updated Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20876"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 36s", "completion": "This text code 2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20877"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 6s\nnginx-deployment-2035384211 0 0 0 36s", "completion": "This text code 2.1 - Deployments | Running get pods should now show only the new Pods:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Running get pods should now show only the new Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20878"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s", "completion": "nginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", 
"language": "text", "type": "code-example", "context": "2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20879"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-khku8 1/1 Running 0 14s\nnginx-deployment-1564180365-nacti 1/1 Running 0 14s\nnginx-deployment-1564180365-z9gth 1/1 Running 0 14s", "completion": "This text code 2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Next time you want to update these Pods, you only need to update the Deployment's Pod template again.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20880"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal ScalingReplicaSet 24s 
deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20881"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=2\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\n Conditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\n OldReplicaSets: <none>\n NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)\n Events:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3\n Normal 
ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1\n Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2\n Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2\n Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1\n Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3\n Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0", "completion": "This text code 2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a ne", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20882"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20883"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 1 out of 3 new replicas have been updated...", "completion": "This text code 2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. 
For more information on stuck rollouts, read more here.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20884"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s", "completion": "nginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20885"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1564180365 3 3 3 25s\nnginx-deployment-2035384211 0 0 0 36s\nnginx-deployment-3066724191 1 1 0 6s", "completion": "This text code 2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20886"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s", "completion": "nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", 
"id": "k8s_00658_code_completion_20887"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-1564180365-70iae 1/1 Running 0 25s\nnginx-deployment-1564180365-jbqqo 1/1 Running 0 25s\nnginx-deployment-1564180365-hysrc 1/1 Running 0 25s\nnginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s", "completion": "This text code 2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20888"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller 
} Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "language": "text", "type": "code-example", "context": "2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20889"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700\nLabels: app=nginx\nSelector: app=nginx\nReplicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.161\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\nOldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)\nNewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)\nEvents:\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n --------- -------- ----- ---- ------------- -------- ------ -------\n 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 
1\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2\n 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1\n 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0\n 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1", "completion": "This text code 2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | To fix this, you need to rollback to a previous revision of Deployment that is stable.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20890"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20891"} {"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE", "completion": "1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "language": "text", "type": "code-example", "context": "2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment 
annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20892"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\"\nREVISION CHANGE-CAUSE\n1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml\n2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This text code 2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20893"} {"prompt": "Explain this bash code:\n\nkubectl annotate deployment/nginx-deployment kubernetes.io/change-cause=\"image updated to 1.16.1\"", "completion": "This bash code 2.1 - Deployments | To see the details of each revision, run:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | To see the details of each revision, run:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20894"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20895"} 
{"prompt": "Complete this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:", "completion": " Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20896"} {"prompt": "Explain this text code:\n\ndeployments \"nginx-deployment\" revision 2\n Labels: app=nginx\n pod-template-hash=1159050644\n Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n QoS Tier:\n cpu: BestEffort\n memory: BestEffort\n Environment Variables: <none>\n No volumes.", "completion": "This text code 2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20897"} {"prompt": "Explain this shell code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20898"} {"prompt": "Explain this text code:\n\nNAME READY UP-TO-DATE AVAILABLE AGE\nnginx-deployment 3/3 3 3 30m", "completion": "This text code 2.1 - Deployments | Get the description of the Deployment:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the description of the Deployment:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20899"} {"prompt": "Complete this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>", "completion": " Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller 
Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "language": "text", "type": "code-example", "context": "2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20900"} {"prompt": "Explain this text code:\n\nName: nginx-deployment\nNamespace: default\nCreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500\nLabels: app=nginx\nAnnotations: deployment.kubernetes.io/revision=4\n kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1\nSelector: app=nginx\nReplicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType: RollingUpdate\nMinReadySeconds: 0\nRollingUpdateStrategy: 25% max unavailable, 25% max surge\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx:1.16.1\n Port: 80/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable\nOldReplicaSets: <none>\nNewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled up 
replica set nginx-deployment-c4747d96c to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3\n Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0\n Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1\n Normal DeploymentRollback 15s deployment-controller Rolled back deployment \"nginx-deployment\" to revision 2\n Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0", "completion": "This text code 2.1 - Deployments | You can scale a Deployment by using the following command:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You can scale a Deployment by using the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20901"} {"prompt": "Explain this shell code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20902"} {"prompt": "Explain this shell code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The 
output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20903"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 10 10 10 10 50s", "completion": "This text code 2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You update to a new image which happens to be unresolvable from inside the cluster.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20904"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20905"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 5 5 0 9s\nnginx-deployment-618515232 8 8 8 1m", "completion": "This text code 2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. 
If you", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20906"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx-deployment 15 18 7 8 7m", "completion": "This text code 2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | The rollout status confirms how the replicas were added to each ReplicaSet.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20907"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-deployment-1989198191 7 7 0 7m\nnginx-deployment-618515232 11 11 11 7m", "completion": "This text code 2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deplo", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. 
When you're ready to apply those changes, you resume rollouts for the Deplo", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20908"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nnginx 3 3 3 3 1m", "completion": "This text code 2.1 - Deployments | Get the rollout status:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the rollout status:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20909"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 1m", "completion": "This text code 2.1 - Deployments | Pause by running the following command:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Pause by running the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20910"} {"prompt": "Explain this shell code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20911"} {"prompt": "Explain this shell code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20912"} {"prompt": "Explain this text code:\n\ndeployments \"nginx\"\nREVISION CHANGE-CAUSE\n1 <none>", "completion": "This text code 2.1 - Deployments | Get the rollout 
status to verify that the existing ReplicaSet has not changed:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the rollout status to verify that the existing ReplicaSet has not changed:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20913"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 3 3 3 2m", "completion": "This text code 2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You can make as many updates as you wish, for example, update the resources that will be used:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20914"} {"prompt": "Explain this shell code:\n\nkubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20915"} {"prompt": "Explain this text code:\n\ndeployment.apps/nginx-deployment resource requirements updated", "completion": "This text code 2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20916"} {"prompt": "Complete this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s", "completion": "nginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20917"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 2 2 2 2m\nnginx-3926361531 2 2 0 6s\nnginx-3926361531 2 2 1 18s\nnginx-2142116321 1 2 2 2m\nnginx-2142116321 1 2 2 2m\nnginx-3926361531 3 2 1 18s\nnginx-3926361531 3 2 1 18s\nnginx-2142116321 1 1 1 2m\nnginx-3926361531 3 3 1 18s\nnginx-3926361531 3 3 2 19s\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 1 1 2m\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 20s", "completion": "This text code 2.1 - Deployments | Get the status of the latest rollout:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Get the status of the latest rollout:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20918"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nnginx-2142116321 0 0 0 2m\nnginx-3926361531 3 3 3 28s", "completion": "This text code 2.1 - Deployments | A Deployment enters various states during its lifecycle. 
It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20919"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 of 3 updated replicas are available...\ndeployment \"nginx-deployment\" successfully rolled out", "completion": "This text code 2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | and the exit status from kubectl rollout is 0 (success):", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20920"} {"prompt": "Explain this shell code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This shell code 2.1 - Deployments | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20921"} {"prompt": "Complete this text code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "language": "text", "type": "code-example", "context": "2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20922"} {"prompt": "Explain this text 
code:\n\n<...>\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True ReplicaSetUpdated\n ReplicaFailure True FailedCreate\n<...>", "completion": "This text code 2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20923"} {"prompt": "Complete this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.", "completion": " reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "language": "text", "type": "code-example", "context": "2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20924"} {"prompt": "Explain this text code:\n\nstatus:\n availableReplicas: 2\n conditions:\n - lastTransitionTime: 
2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: Replica set \"nginx-deployment-4262182780\" is progressing.\n reason: ReplicaSetUpdated\n status: \"True\"\n type: Progressing\n - lastTransitionTime: 2016-10-04T12:25:42Z\n lastUpdateTime: 2016-10-04T12:25:42Z\n message: Deployment has minimum availability.\n reason: MinimumReplicasAvailable\n status: \"True\"\n type: Available\n - lastTransitionTime: 2016-10-04T12:25:39Z\n lastUpdateTime: 2016-10-04T12:25:39Z\n message: 'Error creating: pods \"nginx-deployment-4262182780-\" is forbidden: exceeded quota:\n object-counts, requested: pods=1, used: pods=3, limited: pods=2'\n reason: FailedCreate\n status: \"True\"\n type: ReplicaFailure\n observedGeneration: 3\n replicas: 2\n unavailableReplicas: 2", "completion": "This text code 2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20925"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------", "completion": " Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "language": "text", "type": "code-example", "context": "2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. 
If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20926"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing False ProgressDeadlineExceeded\n ReplicaFailure True FailedCreate", "completion": "This text code 2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20927"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status Reason", "completion": " ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "language": "text", "type": "code-example", "context": "2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20928"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status Reason\n ---- ------ ------\n Available True MinimumReplicasAvailable\n Progressing True NewReplicaSetAvailable", "completion": "This text code 2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. 
type: Progressing with", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | type: Available with status: \"True\" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20929"} {"prompt": "Explain this text code:\n\nWaiting for rollout to finish: 2 out of 3 new replicas have been updated...\nerror: deployment \"nginx\" exceeded its progress deadline", "completion": "This text code 2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "language": "text", "type": "code-example_explanation", "context": "2.1 - Deployments | and the exit status from kubectl rollout is 1 (indicating an error):", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20930"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20931"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n 
labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code 2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20932"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20933"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code 2.1 - 
Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20934"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20935"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code 2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to 
progress before the system reports back that the Deployment has failed p", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20936"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20937"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest 
into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20938"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This shell code 2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20939"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT READY AGE\nfrontend 3 3 3 6s", "completion": "This text code 2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | You can also check on the state of the ReplicaSet:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20940"} {"prompt": "Complete this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:", "completion": " Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "language": "text", "type": "code-example", 
"context": "2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20941"} {"prompt": "Explain this text code:\n\nName: frontend\nNamespace: default\nSelector: tier=frontend\nLabels: app=guestbook\n tier=frontend\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: tier=frontend\n Containers:\n php-redis:\n Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5\n Port: <none>\n Host Port: <none>\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57\n Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w", "completion": "This text code 2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | And lastly you can check for the Pods brought up:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20942"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m", "completion": "frontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "language": "text", "type": "code-example", "context": "2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. 
To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20943"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-gbgfx 1/1 Running 0 10m\nfrontend-rwz57 1/1 Running 0 10m\nfrontend-wkl7w 1/1 Running 0 10m", "completion": "This text code 2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20944"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20945"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code 2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20946"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20947"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20948"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This shell code 2.2 - ReplicaSet | The 
new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20949"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m", "completion": "frontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "language": "text", "type": "code-example", "context": "2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20950"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-b2zdv 1/1 Running 0 10m\nfrontend-vcmts 1/1 Running 0 10m\nfrontend-wtsmm 1/1 Running 0 10m\npod1 0/1 Terminating 0 1s\npod2 0/1 Terminating 0 1s", "completion": "This text code 2.2 - ReplicaSet | If you create the Pods first:", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | If you create the Pods first:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20951"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This shell code 2.2 - ReplicaSet | And then create the ReplicaSet however:", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20952"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", 
"completion": "This shell code 2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20953"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s", "completion": "pod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "language": "text", "type": "code-example", "context": "2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20954"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nfrontend-hmmj2 1/1 Running 0 9s\npod1 1/1 Running 0 36s\npod2 1/1 Running 0 36s", "completion": "This text code 2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "language": "text", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | In this manner, a ReplicaSet can own a non-homogeneous set of Pods", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20955"} {"prompt": "Complete this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "shell", "type": "code-example", "context": "2.2 - ReplicaSet | You can 
delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20956"} {"prompt": "Explain this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code 2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20957"} {"prompt": "Complete this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "shell", "type": "code-example", "context": "2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20958"} {"prompt": "Explain this shell code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This shell code 2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20959"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20960"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n 
targetCPUUtilizationPercentage: 50", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20961"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This shell code 2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20962"} {"prompt": "Explain this shell code:\n\nkubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50", "completion": "This shell code 2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "shell", "type": "code-example_explanation", "context": "2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. 
While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20963"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1", "completion": " minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20964"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1\n minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: 
registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 2.3 - StatefulSets | In the above example:", "language": "yaml", "type": "service-definition_explanation", "context": "2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20965"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:", "completion": " name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... other spec fields ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20966"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... 
other spec fields ...", "completion": "This yaml code 2.3 - StatefulSets | You can revert to a previous configuration using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20967"} {"prompt": "Complete this bash code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "bash", "type": "shell-command", "context": "2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20968"} {"prompt": "Explain this bash code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This bash code 2.3 - StatefulSets | This will:", "language": "bash", "type": "shell-command_explanation", "context": "2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20969"} {"prompt": "Complete this bash code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "bash", "type": "shell-command", "context": "2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20970"} {"prompt": "Explain this bash code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific revision\nkubectl get 
controllerrevision/webapp-3 -o yaml", "completion": "This bash code 2.3 - StatefulSets | Regularly check revisions with:", "language": "bash", "type": "shell-command_explanation", "context": "2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20971"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:", "completion": " persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20972"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:\n persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "completion": "This yaml code 2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. 
This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20973"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20974"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not 
run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20975"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml", "completion": "This bash code 2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with config files, see running stateless applications and object man", "language": "bash", "type": "kubectl-command_explanation", "context": "2.4 - DaemonSet | As with all other Kubernetes config, a DaemonSet needs apiVersion, kind, and metadata fields. 
For general information about working with config files, see running stateless applications and object man", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20976"} {"prompt": "Complete this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20977"} {"prompt": "Explain this yaml code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This yaml code 2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20978"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20979"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n 
containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20980"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This shell code 2.5 - Jobs | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20981"} {"prompt": "Complete this bash code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "bash", "type": "shell-command", "context": 
"2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20982"} {"prompt": "Explain this bash code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This bash code 2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "bash", "type": "shell-command_explanation", "context": "2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... 
creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20983"} {"prompt": "Complete this bash code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "bash", "type": "shell-command", "context": "2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20984"} {"prompt": "Explain this bash code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... 
\n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This bash code 2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "bash", "type": "shell-command_explanation", "context": "2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20985"} {"prompt": "Explain this shell code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This shell code 2.5 - Jobs | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20986"} {"prompt": "Explain this text 
code:\n\n3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390
4780275901", "completion": "This text code 2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20987"} {"prompt": "Explain this text code:\n\n/controllers/job-backoff-limit-per-index-example.yaml", "completion": "This text code 2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20988"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 
- Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20989"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20990"} {"prompt": "Explain this sh code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This sh code 2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "sh", "type": "code-example_explanation", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20991"} {"prompt": "Complete this yaml code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20992"} {"prompt": "Explain this yaml code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This yaml code 2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20993"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20994"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20995"} {"prompt": "Explain this text code:\n\nspec.podFailurePolicy.rules[*].onExitCodes.containerName", "completion": "This text code 2.5 - Jobs | When you use the podFailurePolicy, and the Job fails due to the pod matching the rule with the FailJob action, then the Job controller triggers the Job termination process by adding the FailureTarget", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | When you use the podFailurePolicy, and the Job fails due to the pod matching the rule with the FailJob action, then the Job controller triggers the Job termination process by adding the FailureTarget", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20996"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20997"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_20998"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_20999"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21000"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21001"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21002"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21003"} {"prompt": "Explain this yaml 
code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code 2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21004"} {"prompt": "Explain this shell code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This shell code 2.5 - Jobs | Resume a suspended Job:", "language": "shell", "type": "code-example_explanation", "context": "2.5 - Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21005"} {"prompt": "Explain this shell code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This shell code 2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "shell", "type": "code-example_explanation", "context": "2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21006"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the 
lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21007"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code 2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21008"} {"prompt": "Complete this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------", "completion": " Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "language": "text", "type": "code-example", "context": "2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. 
In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21009"} {"prompt": "Explain this text code:\n\nName: myjob\n...\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl\n Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl\n Normal Suspended 11m job-controller Job suspended\n Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44\n Normal Resumed 3s job-controller Job resumed", "completion": "This text code 2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were creat", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21010"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21011"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21012"} {"prompt": "Explain this text code:\n\nbatch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002", "completion": "This text code 2.5 - Jobs | You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically.", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21013"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", 
"context": "2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21014"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21015"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21016"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code 2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21017"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21018"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code 2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21019"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - 
name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21020"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21021"} {"prompt": "Complete this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ ┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)", "completion": "# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "language": "text", "type": "code-example", "context": "2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21022"} {"prompt": "Explain this text code:\n\n# ┌───────────── minute (0 - 59)\n# │ ┌───────────── hour (0 - 23)\n# │ │ 
┌───────────── day of the month (1 - 31)\n# │ │ │ ┌───────────── month (1 - 12)\n# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat\n# │ │ │ │ │\n# │ │ │ │ │\n# * * * * *", "completion": "This text code 2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "language": "text", "type": "code-example_explanation", "context": "2.7 - CronJob | For example, 0 3 * * 1 means this task is scheduled to run weekly on a Monday at 3 AM.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21023"} {"prompt": "Explain this text code:\n\nCannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.", "completion": "This text code 2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "language": "text", "type": "code-example_explanation", "context": "2.7 - CronJob | It is important to note that if the startingDeadlineSeconds field is set (not nil), the controller counts how many missed Jobs occurred from the value of startingDeadlineSeconds until now rather than", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21024"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": 
"https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21025"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21026"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This shell code 2.8 - ReplicationController | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21027"} {"prompt": "Complete this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:", "completion": " Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 
{replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "language": "text", "type": "code-example", "context": "2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21028"} {"prompt": "Explain this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: <none>\nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx\n Port: 80/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "completion": "This text code 2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "language": "text", "type": "code-example_explanation", "context": "2.8 - ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. 
A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21029"} {"prompt": "Explain this shell code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This shell code 2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "shell", "type": "code-example_explanation", "context": "2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21030"} {"prompt": "Explain this shell code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This shell code 2.8 - ReplicationController | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21031"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": 
"k8s_00658_code_completion_21032"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21033"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This shell code 4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21034"} {"prompt": "Explain this none code:\n\nservice/my-nginx-svc created\ndeployment.apps/my-nginx created", "completion": "This none code 4 - Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21035"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code 4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21036"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code 4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21037"} {"prompt": "Explain this shell code:\n\nkubectl delete -f 
https://k8s.io/examples/application/nginx-app.yaml", "completion": "This shell code 4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21038"} {"prompt": "Explain this none code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This none code 4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21039"} {"prompt": "Explain this shell code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This shell code 4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21040"} {"prompt": "Explain this none code:\n\ndeployment.apps \"my-nginx\" deleted\nservice \"my-nginx-svc\" deleted", "completion": "This none code 4 - Managing Workloads | Because 
kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | Because kubectl outputs resource names in the same syntax it accepts, you can chain operations using $() or xargs:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21041"} {"prompt": "Explain this shell code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This shell code 4 - Managing Workloads | The output might be similar to:", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21042"} {"prompt": "Explain this none code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-nginx-svc LoadBalancer 10.0.0.208 <pending> 80/TCP 0s", "completion": "This none code 4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). Then you grep o", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | With the above commands, first you create resources under examples/application/nginx/ and print the resources created with -o name output format (print each resource as resource/name). 
Then you grep o", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21043"} {"prompt": "Complete this none code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml", "completion": "├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "language": "none", "type": "code-example", "context": "4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21044"} {"prompt": "Explain this none code:\n\nproject/k8s/development\n├── configmap\n│   └── my-configmap.yaml\n├── deployment\n│   └── my-deployment.yaml\n└── pvc\n └── my-pvc.yaml", "completion": "This none code 4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. If you had tried to create the resources in this di", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | By default, performing a bulk operation on project/k8s/development will stop at the first level of the directory, not processing any subdirectories. 
If you had tried to create the resources in this di", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21045"} {"prompt": "Explain this none code:\n\nerror: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)", "completion": "This none code 4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | Instead, specify the --recursive or -R command line argument along with the --filename/-f argument:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21046"} {"prompt": "Explain this shell code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This shell code 4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21047"} {"prompt": "Explain this none code:\n\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This none code 4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | The --recursive argument works with 
any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21048"} {"prompt": "Explain this shell code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This shell code 4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21049"} {"prompt": "Complete this none code:\n\nnamespace/development created\nnamespace/staging created", "completion": "configmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "language": "none", "type": "code-example", "context": "4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21050"} {"prompt": "Explain this none code:\n\nnamespace/development created\nnamespace/staging created\nconfigmap/my-config created\ndeployment.apps/my-deployment created\npersistentvolumeclaim/my-pvc created", "completion": "This none code 4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": 
"k8s_00658_code_explain_21051"} {"prompt": "Explain this shell code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This shell code 4 - Managing Workloads | Ensure that there is 1 replica:", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21052"} {"prompt": "Explain this shell code:\n\nkubectl scale --replicas 1 deployments/my-nginx --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\": 1}}'", "completion": "This shell code 4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21053"} {"prompt": "Explain this shell code:\n\nkubectl patch --type='merge' -p '{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This shell code 4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21054"} {"prompt": "Explain this shell code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This shell code 4 - Managing Workloads | That's it! 
The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21055"} {"prompt": "Complete this shell code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "shell", "type": "code-example", "context": "4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21056"} {"prompt": "Explain this shell code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This shell code 4 - Managing Workloads | or", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21057"} {"prompt": "Complete this shell code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "shell", "type": "code-example", "context": "4 - Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21058"} {"prompt": "Explain this shell code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This shell code 4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21059"} {"prompt": "Complete this none code:\n\nname: frontend\nreplicas: 3\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "language": "none", "type": "code-example", "context": "4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21060"} {"prompt": "Explain this none code:\n\nname: frontend\nreplicas: 3\n...\nlabels:\n app: guestbook\n tier: frontend\n track: stable\n...\nimage: gb-frontend:v3", "completion": "This none code 4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. canary), so that two sets of pods would not overlap:", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | and then you can create a new release of the guestbook frontend that carries the track label with different value (i.e. 
canary), so that two sets of pods would not overlap:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21061"} {"prompt": "Complete this none code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:", "completion": " app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "language": "none", "type": "code-example", "context": "4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21062"} {"prompt": "Explain this none code:\n\nname: frontend-canary\nreplicas: 1\n...\nlabels:\n app: guestbook\n tier: frontend\n track: canary\n...\nimage: gb-frontend:v4", "completion": "This none code 4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the track label), so that the traffic will be redirected to both applications:", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. 
omitting the track label), so that the traffic will be redirected to both applications:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21063"} {"prompt": "Explain this shell code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This shell code 4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21064"} {"prompt": "Complete this shell code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": "  annotations:\n    description: my frontend running nginx\n...", "language": "shell", "type": "code-example", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21065"} {"prompt": "Explain this shell code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  annotations:\n    description: my frontend running nginx\n...", "completion": "This shell code 4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21066"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nmy-nginx-2035384211-j5fhi 1/1 Running 0 30m", "completion": "This none code 4 - Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "language": "none", "type": 
"code-example_explanation", "context": "4 - Managing Workloads | To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21067"} {"prompt": "Explain this shell code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This shell code 4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21068"} {"prompt": "Explain this none code:\n\nhorizontalpodautoscaler.autoscaling/my-nginx autoscaled", "completion": "This none code 4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21069"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This shell code 4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21070"} {"prompt": "Complete this shell code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi 
/tmp/nginx.yaml\n# do some edit, and then save the file\n", "completion": "kubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "shell", "type": "code-example", "context": "4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21071"} {"prompt": "Explain this shell code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This shell code 4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21072"} {"prompt": "Explain this shell code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This shell code 4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "language": "shell", "type": "code-example_explanation", "context": "4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21073"} {"prompt": "Explain this none code:\n\ndeployment.apps/my-nginx deleted\ndeployment.apps/my-nginx replaced", "completion": "This none code 4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "language": "none", "type": "code-example_explanation", "context": "4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21074"} {"prompt": "Complete this yaml code:\n\npods/simple-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21075"} {"prompt": "Explain this yaml code:\n\npods/simple-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1 - Pods | To create the Pod shown above, run the 
following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21076"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21077"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 1 - Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "1 - Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21078"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This bash code 1 - Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "1 - Pods | Pods are generally not created directly and are created using workload resources. 
See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21079"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "language": "yaml", "type": "kubernetes-manifest", "context": "1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21080"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "completion": "This yaml code 1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1 - Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. 
If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21081"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:", "completion": " - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21082"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: on-failure-pod\nspec:\n restartPolicy: OnFailure\n containers:\n - name: try-once-container # This container will run only once because the restartPolicy is Never.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Only running once\" && sleep 10 && exit 1']\n restartPolicy: Never \n - name: on-failure-container # This container will be restarted on failure.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Keep restarting\" && sleep 1800 && exit 1']", "completion": "This yaml code 1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. 
This allows the Pod to fail if the initialization failed, but also keep runn", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | A Pod with Always restart policy with an init container that only execute once. If the init container fails, the Pod fails. This allows the Pod to fail if the initialization failed, but also keep runn", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21083"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:", "completion": " - name: init-once # This init container will only try once. If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21084"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: fail-pod-if-init-fails\nspec:\n restartPolicy: Always\n initContainers:\n - name: init-once # This init container will only try once. 
If it fails, the pod will fail.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'echo \"Failing initialization\" && sleep 10 && exit 1']\n restartPolicy: Never\n containers:\n - name: main-container # This container will always be restarted once initialization succeeds.\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 1800 && exit 0']", "completion": "This yaml code 1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | A Pod with Never restart policy with a container that ignores and restarts on specific exit codes. This is useful to differentiate between restartable errors and non-restartable errors:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21085"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes", "completion": " image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21086"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: restart-on-exit-codes\nspec:\n restartPolicy: Never\n containers:\n - name: restart-on-exit-codes\n image: docker.io/library/busybox:1.28\n command: ['sh', '-c', 'sleep 60 && exit 0']\n restartPolicy: Never # Container restart policy must be specified if rules are specified\n restartPolicyRules: # Only restart the container if it exits with code 42\n - action: Restart\n exitCodes:\n operator: In\n values: [42]", "completion": "This yaml code 1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. Kubelet restarts, container", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | Restart rules can be used for many more advanced lifecycle management scenarios. Note, restart rules are affected by the same inconsistencies as the regular restart policy. 
Kubelet restarts, container", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21087"} {"prompt": "Complete this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s", "completion": "kind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21088"} {"prompt": "Explain this yaml code:\n\n# container restart delays will start at 10s, increasing\n# 2x each time they are restarted, to a maximum of 100s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"100s\"", "completion": "This yaml code 1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21089"} {"prompt": "Complete this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration", "completion": "crashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "language": "yaml", "type": "kubernetes-manifest", "context": "1.1 - Pod Lifecycle | If you use this feature along 
with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21090"} {"prompt": "Explain this yaml code:\n\n# delays between container restarts will always be 2s\nkind: KubeletConfiguration\ncrashLoopBackOff:\n maxContainerRestartPeriod: \"2s\"", "completion": "This yaml code 1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.1 - Pod Lifecycle | If you use this feature along with the alpha feature ReduceDefaultCrashLoopBackOffDecay (described above), your cluster defaults for initial backoff and maximum backoff will no longer be 10s and 300s,", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21091"} {"prompt": "Complete this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"", "completion": " lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "language": "yaml", "type": "pod-definition", "context": "1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21092"} {"prompt": "Explain this yaml code:\n\nkind: Pod\n...\nspec:\n readinessGates:\n - 
conditionType: \"www.example.com/feature-1\"\nstatus:\n conditions:\n - type: Ready # a built-in PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n - type: \"www.example.com/feature-1\" # an extra PodCondition\n status: \"False\"\n lastProbeTime: null\n lastTransitionTime: 2018-01-01T00:00:00Z\n containerStatuses:\n - containerID: docker://abcd...\n ready: true\n...", "completion": "This yaml code 1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.1 - Pod Lifecycle | The Pod conditions you add must have names that meet the Kubernetes label key format.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21093"} {"prompt": "Complete this text code:\n\nspec:\n os:\n name: linux\n containers:", "completion": " - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "language": "text", "type": "code-example", "context": "1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21094"} {"prompt": "Explain this text code:\n\nspec:\n os:\n name: linux\n containers:\n - name: my-container\n image: container-image:latest\n lifecycle:\n stopSignal: SIGUSR1", "completion": "This text code 1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. 
If no stop signal is defined in the container spec, the container would fall back to the def", "language": "text", "type": "code-example_explanation", "context": "1.1 - Pod Lifecycle | If a stop signal is defined in the lifecycle, this will override the signal defined in the container image. If no stop signal is defined in the container spec, the container would fall back to the def", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21095"} {"prompt": "Explain this text code:\n\nfor i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1", "completion": "This text code 1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | Register this Pod with a remote server from the downward API with a command like:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21096"} {"prompt": "Explain this text code:\n\ncurl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'", "completion": "This text code 1.2 - Init Containers | Wait for some time before starting the app container with a command like", "language": "text", "type": "code-example_explanation", "context": "1.2 - Init Containers | Wait for some time before starting the app container with a command like", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21097"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container", "completion": " image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "language": "yaml", "type": "pod-definition", "context": "1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21098"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: myapp-pod\n labels:\n app.kubernetes.io/name: MyApp\nspec:\n containers:\n - name: myapp-container\n image: busybox:1.28\n command: ['sh', '-c', 'echo The app is running! && sleep 3600']\n initContainers:\n - name: init-myservice\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done\"]\n - name: init-mydb\n image: busybox:1.28\n command: ['sh', '-c', \"until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done\"]", "completion": "This yaml code 1.2 - Init Containers | You can start this Pod by running:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.2 - Init Containers | You can start this Pod by running:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21099"} {"prompt": "Explain this bash code:\n\nkubectl logs myapp-pod -c init-myservice # Inspect the first init container\nkubectl logs myapp-pod -c init-mydb # Inspect the second init container", "completion": "This bash code 1.2 - Init 
Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "language": "bash", "type": "kubectl-command_explanation", "context": "1.2 - Init Containers | At this point, those init containers will be waiting to discover Services named mydb and myservice.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21100"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376", "completion": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "language": "yaml", "type": "service-definition", "context": "1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21101"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: myservice\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9376\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mydb\nspec:\n ports:\n - protocol: TCP\n port: 80\n targetPort: 9377", "completion": "This yaml code 1.2 - Init Containers | To create the mydb and myservice services:", "language": "yaml", "type": "service-definition_explanation", "context": "1.2 - Init Containers | To create the mydb and myservice services:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21102"} {"prompt": "Complete this yaml code:\n\napplication/deployment-sidecar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:", "completion": " containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo 
\"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21103"} {"prompt": "Explain this yaml code:\n\napplication/deployment-sidecar.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21104"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:", "completion": " - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "deployment-definition", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21105"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: myapp\n labels:\n app: myapp\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: myapp\n template:\n metadata:\n labels:\n app: myapp\n spec:\n containers:\n - name: myapp\n image: alpine:latest\n command: ['sh', '-c', 'while true; do echo \"logging\" >> /opt/logs.txt; sleep 1; done']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated", "language": "yaml", "type": "deployment-definition_explanation", "context": "1.3 - Sidecar Containers | If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. 
This can be helpful for running supporting services separated", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21106"} {"prompt": "Complete this yaml code:\n\napplication/job/job-sidecar.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:", "completion": " - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21107"} {"prompt": "Explain this yaml code:\n\napplication/job/job-sidecar.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21108"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data", "completion": " mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21109"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n template:\n spec:\n containers:\n - name: myjob\n image: alpine:latest\n command: ['sh', '-c', 'echo \"logging\" > /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n initContainers:\n - name: logshipper\n image: alpine:latest\n restartPolicy: Always\n command: ['sh', '-c', 'tail -F /opt/logs.txt']\n volumeMounts:\n - name: data\n mountPath: /opt\n restartPolicy: Never\n volumes:\n - name: data\n emptyDir: {}", "completion": "This yaml code 1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.3 - Sidecar Containers | Sidecar containers run alongside app containers in the same pod. 
However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21110"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21111"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-1\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "language": "yaml", "type": "pod-definition_explanation", "context": "1.6 - Pod Hostname | The Pod created by this manifest will have its hostname and fully qualified domain name (FQDN) set to busybox-1.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21112"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21113"} {"prompt": "Explain this yaml 
code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox-2-busybox-example-domain\nspec:\n hostnameOverride: busybox-2.busybox.example.domain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code 1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "language": "yaml", "type": "pod-definition_explanation", "context": "1.6 - Pod Hostname | If hostnameOverride is set alongside hostname and subdomain fields:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21114"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration", "completion": "userNamespaces:\n idsPerPod: 1048576", "language": "yaml", "type": "kubernetes-manifest", "context": "1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21115"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nuserNamespaces:\n idsPerPod: 1048576", "completion": "This yaml code 1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. Run", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "1.8 - User Namespaces | The value of idsPerPod (uint32) must be a multiple of 65536. The default value is 65536. This value only applies to containers created after the kubelet was started with this KubeletConfiguration. 
Run", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21116"} {"prompt": "Complete this yaml code:\n\ncontrollers/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21117"} {"prompt": "Explain this yaml code:\n\ncontrollers/nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21118"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:", "completion": " app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21119"} {"prompt": "Explain this yaml code:\n\napiVersion: 
apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 2.1 - Deployments | In this example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | In this example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21120"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml", "completion": "This bash code 2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | Run kubectl get deployments to check if the Deployment was created.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21121"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 2.1 - Deployments | or use the following command:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | or use the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21122"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | where deployment/nginx-deployment indicates the Deployment, nginx indicates 
the Container the update will take place and nginx:1.16.1 indicates the new image and its tag.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21123"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.161", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21124"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21125"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment --revision=2", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21126"} {"prompt": "Explain this bash code:\n\nkubectl rollout undo deployment/nginx-deployment --to-revision=2", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21127"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=10", "completion": "This bash code 2.1 - 
Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21128"} {"prompt": "Explain this bash code:\n\nkubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21129"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:sometag", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21130"} {"prompt": "Explain this bash code:\n\nkubectl set image deployment/nginx-deployment nginx=nginx:1.16.1", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21131"} {"prompt": "Explain this bash code:\n\nkubectl rollout history deployment/nginx-deployment", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21132"} {"prompt": "Explain this bash code:\n\nkubectl 
set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21133"} {"prompt": "Explain this bash code:\n\nkubectl patch deployment/nginx-deployment -p '{\"spec\":{\"progressDeadlineSeconds\":600}}'", "completion": "This bash code 2.1 - Deployments | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.1 - Deployments | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21134"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21135"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: 
RollingUpdate\n rollingUpdate:\n maxUnavailable: 1", "completion": "This yaml code 2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21136"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21137"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1", "completion": "This yaml code 2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: 
matchLabels: app: nginx template: metadata: labels:", "language": "yaml", "type": "deployment-definition_explanation", "context": "2.1 - Deployments | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21138"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:", "completion": " labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "language": "yaml", "type": "deployment-definition", "context": "2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21139"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n strategy:\n type: RollingUpdate\n rollingUpdate:\n maxSurge: 1\n maxUnavailable: 1", "completion": "This yaml code 2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "language": "yaml", "type": 
"deployment-definition_explanation", "context": "2.1 - Deployments | .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed p", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21140"} {"prompt": "Complete this yaml code:\n\ncontrollers/frontend.yaml\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21141"} {"prompt": "Explain this yaml code:\n\ncontrollers/frontend.yaml\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a 
Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21142"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3", "completion": " selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21143"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n name: frontend\n labels:\n app: guestbook\n tier: frontend\nspec:\n # modify replicas according to your case\n replicas: 3\n selector:\n matchLabels:\n tier: frontend\n template:\n metadata:\n labels:\n tier: frontend\n spec:\n containers:\n - name: php-redis\n image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21144"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code 2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | You can then get the current ReplicaSets deployed:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21145"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx", "completion": " namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "language": "yaml", "type": "pod-definition", "context": "2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21146"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n creationTimestamp: \"2024-02-28T22:30:44Z\"\n generateName: frontend-\n labels:\n tier: frontend\n name: frontend-gbgfx\n namespace: default\n ownerReferences:\n - apiVersion: apps/v1\n blockOwnerDeletion: true\n controller: true\n kind: ReplicaSet\n name: frontend\n uid: e129deca-f864-481b-bb16-b27abfd92292\n...", "completion": "This yaml code 2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. 
The reason for this", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - ReplicaSet | While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21147"} {"prompt": "Complete this yaml code:\n\npods/pod-rs.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21148"} {"prompt": "Explain this yaml code:\n\npods/pod-rs.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the 
frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21149"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n", "completion": "---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "language": "yaml", "type": "pod-definition", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21150"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod1\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello1\n image: gcr.io/google-samples/hello-app:2.0\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod2\n labels:\n tier: frontend\nspec:\n containers:\n - name: hello2\n image: gcr.io/google-samples/hello-app:1.0", "completion": "This yaml code 2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "language": "yaml", "type": "pod-definition_explanation", "context": "2.2 - ReplicaSet | As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21151"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": 
"This bash code 2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21152"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml", "completion": "This bash code 2.2 - ReplicaSet | And then create the ReplicaSet however:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | And then create the ReplicaSet however:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21153"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml", "completion": "This bash code 2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. As fetching the", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the number of its new Pods and the original matches its desired count. 
As fetching the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21154"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21155"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Foreground\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. When using the REST API or the client-go library, you must set propagationPolicy to", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=orphan option. 
When using the REST API or the client-go library, you must set propagationPolicy to", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21156"} {"prompt": "Complete this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\", "completion": " -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "language": "bash", "type": "kubectl-command", "context": "2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21157"} {"prompt": "Explain this bash code:\n\nkubectl proxy --port=8080\ncurl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \\\n -d '{\"kind\":\"DeleteOptions\",\"apiVersion\":\"v1\",\"propagationPolicy\":\"Orphan\"}' \\\n -H \"Content-Type: application/json\"", "completion": "This bash code 2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. 
However, it will not make", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21158"} {"prompt": "Complete this yaml code:\n\ncontrollers/hpa-rs.yaml\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21159"} {"prompt": "Explain this yaml code:\n\ncontrollers/hpa-rs.yaml\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21160"} {"prompt": "Complete this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:", "completion": " scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 
3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "language": "yaml", "type": "kubernetes-manifest", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21161"} {"prompt": "Explain this yaml code:\n\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: frontend-scaler\nspec:\n scaleTargetRef:\n kind: ReplicaSet\n name: frontend\n minReplicas: 3\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50", "completion": "This yaml code 2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.2 - ReplicaSet | Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21162"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml", "completion": "This bash code 2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's easier!)", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21163"} {"prompt": "Explain this bash code:\n\nkubectl autoscale rs frontend --max=10 
--min=3 --cpu-percent=50", "completion": "This bash code 2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "language": "bash", "type": "kubectl-command_explanation", "context": "2.2 - ReplicaSet | Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they're mainly used by", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21164"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1", "completion": " minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "service-definition", "context": "2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21165"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\n labels:\n app: nginx\nspec:\n ports:\n - 
port: 80\n name: web\n clusterIP: None\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: web\nspec:\n selector:\n matchLabels:\n app: nginx # has to match .spec.template.metadata.labels\n serviceName: \"nginx\"\n replicas: 3 # by default is 1\n minReadySeconds: 10 # by default is 0\n template:\n metadata:\n labels:\n app: nginx # has to match .spec.selector.matchLabels\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: nginx\n image: registry.k8s.io/nginx-slim:0.24\n ports:\n - containerPort: 80\n name: web\n volumeMounts:\n - name: www\n mountPath: /usr/share/nginx/html\n volumeClaimTemplates:\n - metadata:\n name: www\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"my-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code 2.3 - StatefulSets | In the above example:", "language": "yaml", "type": "service-definition_explanation", "context": "2.3 - StatefulSets | In the above example:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21166"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:", "completion": " name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... other spec fields ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21167"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: webapp\nspec:\n revisionHistoryLimit: 5 # Keep last 5 revisions\n # ... 
other spec fields ...", "completion": "This yaml code 2.3 - StatefulSets | You can revert to a previous configuration using:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - StatefulSets | You can revert to a previous configuration using:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21168"} {"prompt": "Complete this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp", "completion": "\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "language": "text", "type": "code-example", "context": "2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21169"} {"prompt": "Explain this text code:\n\n# View revision history\nkubectl rollout history statefulset/webapp\n\n# Rollback to a specific revision\nkubectl rollout undo statefulset/webapp --to-revision=3", "completion": "This text code 2.3 - StatefulSets | This will:", "language": "text", "type": "code-example_explanation", "context": "2.3 - StatefulSets | This will:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21170"} {"prompt": "Complete this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp", "completion": "\n# View detailed configuration of a specific revision\nkubectl get controllerrevision/webapp-3 -o yaml", "language": "text", "type": "code-example", "context": "2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21171"} {"prompt": "Explain this text code:\n\n# List all revisions for the StatefulSet\nkubectl get controllerrevisions -l app.kubernetes.io/name=webapp\n\n# View detailed configuration of a specific revision\nkubectl get 
controllerrevision/webapp-3 -o yaml", "completion": "This text code 2.3 - StatefulSets | Regularly check revisions with:", "language": "text", "type": "code-example_explanation", "context": "2.3 - StatefulSets | Regularly check revisions with:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21172"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:", "completion": " persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21173"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\n...\nspec:\n persistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Delete\n...", "completion": "This yaml code 2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before th", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.3 - StatefulSets | The StatefulSet controller adds owner references to its PVCs, which are then deleted by the garbage collector after the Pod is terminated. 
This enables the Pod to cleanly unmount all volumes before th", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21174"} {"prompt": "Complete this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21175"} {"prompt": "Explain this yaml code:\n\ncontrollers/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane 
nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21176"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule", "completion": " - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure 
that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "language": "yaml", "type": "kubernetes-manifest", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21177"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: fluentd-elasticsearch\n namespace: kube-system\n labels:\n k8s-app: fluentd-logging\nspec:\n selector:\n matchLabels:\n name: fluentd-elasticsearch\n template:\n metadata:\n labels:\n name: fluentd-elasticsearch\n spec:\n tolerations:\n # these tolerations are to have the daemonset runnable on control plane nodes\n # remove them if your control plane nodes should not run pods\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n operator: Exists\n effect: NoSchedule\n containers:\n - name: fluentd-elasticsearch\n image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\n resources:\n limits:\n memory: 200Mi\n requests:\n cpu: 100m\n memory: 200Mi\n volumeMounts:\n - name: varlog\n mountPath: /var/log\n # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\n # preempts running Pods\n # priorityClassName: important\n terminationGracePeriodSeconds: 30\n volumes:\n - name: varlog\n hostPath:\n path: /var/log", "completion": "This yaml code 2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.4 - DaemonSet | Create a DaemonSet based on the YAML file:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21178"} {"prompt": "Complete this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - 
matchFields:", "completion": " - key: metadata.name\n operator: In\n values:\n - target-host-name", "language": "text", "type": "code-example", "context": "2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21179"} {"prompt": "Explain this text code:\n\nnodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchFields:\n - key: metadata.name\n operator: In\n values:\n - target-host-name", "completion": "This text code 2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "language": "text", "type": "code-example_explanation", "context": "2.4 - DaemonSet | The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21180"} {"prompt": "Complete this yaml code:\n\ncontrollers/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21181"} {"prompt": "Explain this yaml code:\n\ncontrollers/job.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21182"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:", "completion": " spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21183"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi\nspec:\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never\n backoffLimit: 4", "completion": "This yaml code 2.5 - Jobs | You can run the example with this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | You can run the example with this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21184"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://kubernetes.io/examples/controllers/job.yaml", "completion": "This bash code 2.5 - Jobs | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21185"} {"prompt": "Complete this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n 
...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:", "completion": " pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "language": "text", "type": "code-example", "context": "2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21186"} {"prompt": "Explain this text code:\n\nName: pi\nNamespace: default\nSelector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\nLabels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n ...\nAnnotations: batch.kubernetes.io/job-tracking: \"\"\nParallelism: 1\nCompletions: 1\nStart Time: Mon, 02 Dec 2019 15:20:11 +0200\nCompleted At: Mon, 02 Dec 2019 15:21:16 +0200\nDuration: 65s\nPods Statuses: 0 Running / 1 Succeeded / 0 Failed\nPod Template:\n Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c\n batch.kubernetes.io/job-name=pi\n Containers:\n pi:\n Image: perl:5.34.0\n Port: <none>\n Host Port: <none>\n Command:\n perl\n -Mbignum=bpi\n -wle\n print bpi(2000)\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age 
From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4\n Normal Completed 18s job-controller Job completed", "completion": "This text code 2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernet", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21187"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 
0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21188"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n annotations: batch.kubernetes.io/job-tracking: \"\"\n ... \n creationTimestamp: \"2022-11-10T17:53:53Z\"\n generation: 1\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n name: pi\n namespace: default\n resourceVersion: \"4751\"\n uid: 204fb678-040b-497f-9266-35ffa8716d14\nspec:\n backoffLimit: 4\n completionMode: NonIndexed\n completions: 1\n parallelism: 1\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n suspend: false\n template:\n metadata:\n creationTimestamp: null\n labels:\n batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223\n batch.kubernetes.io/job-name: pi\n spec:\n containers:\n - command:\n - perl\n - -Mbignum=bpi\n - -wle\n - print bpi(2000)\n image: perl:5.34.0\n imagePullPolicy: IfNotPresent\n name: pi\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n dnsPolicy: ClusterFirst\n restartPolicy: Never\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\nstatus:\n active: 1\n ready: 0\n startTime: \"2022-11-10T17:53:57Z\"\n uncountedTerminatedPods: {}", "completion": "This yaml code 2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | To view completed Pods of a Job, use kubectl get pods.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21189"} {"prompt": "Explain this text 
code:\n\npods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')\necho $pods", "completion": "This text code 2.5 - Jobs | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21190"} {"prompt": "Complete this yaml code:\n\n/controllers/job-backoff-limit-per-index-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:", "completion": " restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21191"} {"prompt": "Explain this yaml code:\n\n/controllers/job-backoff-limit-per-index-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21192"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature", "completion": " containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. 
When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21193"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-backoff-limit-per-index-example\nspec:\n completions: 10\n parallelism: 3\n completionMode: Indexed # required for the feature\n backoffLimitPerIndex: 1 # maximal number of failures per index\n maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution\n template:\n spec:\n restartPolicy: Never # required for the feature\n containers:\n - name: example\n image: python\n command: # The jobs fails as there is at least one failed index\n # (all even indexes fail in here), yet all indexes\n # are executed as maxFailedIndexes is not exceeded.\n - python3\n - -c\n - |\n import os, sys\n print(\"Hello world\")\n if int(os.environ.get(\"JOB_COMPLETION_INDEX\")) % 2 == 0:\n sys.exit(1)", "completion": "This yaml code 2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21194"} {"prompt": "Explain this bash code:\n\nkubectl get -o yaml job job-backoff-limit-per-index-example", "completion": "This bash code 2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21195"} {"prompt": "Complete this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes", "completion": " reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "language": "text", "type": "code-example", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21196"} {"prompt": "Explain this text code:\n\nstatus:\n completedIndexes: 1,3,5,7,9\n failedIndexes: 0,2,4,6,8\n succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes\n failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes\n conditions:\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: FailureTarget\n - message: Job has failed indexes\n reason: FailedIndexes\n status: \"True\"\n type: Failed", "completion": "This text code 2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. 
When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "language": "text", "type": "code-example_explanation", "context": "2.5 - Jobs | The Job controller adds the FailureTarget Job condition to trigger Job termination and cleanup. When all of the Job Pods are terminated, the Job controller adds the Failed condition with the same valu", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21197"} {"prompt": "Complete this yaml code:\n\n/controllers/job-pod-failure-policy-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5", "completion": " command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21198"} {"prompt": "Explain this yaml code:\n\n/controllers/job-pod-failure-policy-example.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21199"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action", "completion": " args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21200"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-pod-failure-policy-example\nspec:\n completions: 12\n parallelism: 3\n template:\n spec:\n restartPolicy: Never\n containers:\n - name: main\n image: docker.io/library/bash:5\n command: [\"bash\"] # example command simulating a bug which triggers the FailJob action\n args:\n - -c\n - echo \"Hello world!\" && sleep 5 && exit 42\n backoffLimit: 6\n podFailurePolicy:\n rules:\n - action: FailJob\n onExitCodes:\n containerName: main # optional\n operator: In # one of: In, NotIn\n values: [42]\n - action: Ignore # one of: Ignore, FailJob, Count\n onPodConditions:\n - type: DisruptionTarget # indicates Pod disruption", "completion": "This yaml code 2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the mai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. 
The following are the rules for the mai", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21201"} {"prompt": "Complete this yaml code:\n\n/controllers/job-success-policy.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:", "completion": " spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21202"} {"prompt": "Explain this yaml code:\n\n/controllers/job-success-policy.yaml\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have 
been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21203"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:", "completion": " containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21204"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: job-success\nspec:\n parallelism: 10\n completions: 10\n completionMode: Indexed # Required for the success policy\n successPolicy:\n rules:\n - succeededIndexes: 0,2-3\n succeededCount: 1\n template:\n spec:\n containers:\n - name: main\n image: python\n command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded,\n # the overall Job is a success.\n - python3\n - -c\n - |\n import os, sys\n if os.environ.get(\"JOB_COMPLETION_INDEX\") == \"2\":\n sys.exit(0)\n else:\n sys.exit(1) \n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | In the example above, both succeededIndexes and succeededCount have been specified. 
Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the spe", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21205"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21206"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-timeout\nspec:\n backoffLimit: 5\n activeDeadlineSeconds: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. 
Ensure that you set this field at the proper level.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21207"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100", "completion": " template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21208"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: pi-with-ttl\nspec:\n ttlSecondsAfterFinished: 100\n template:\n spec:\n containers:\n - name: pi\n image: perl:5.34.0\n command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"]\n restartPolicy: Never", "completion": "This yaml code 2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21209"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:", "completion": " suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21210"} {"prompt": "Explain this yaml 
code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: myjob\nspec:\n suspend: true\n parallelism: 1\n completions: 5\n template:\n spec:\n ...", "completion": "This yaml code 2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | You can also toggle Job suspension by patching the Job using the command line.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21211"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}'", "completion": "This bash code 2.5 - Jobs | Resume a suspended Job:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | Resume a suspended Job:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21212"} {"prompt": "Explain this bash code:\n\nkubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}'", "completion": "This bash code 2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.5 - Jobs | The Job's status can be used to determine if a Job is suspended or has been suspended in the past:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21213"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:", "completion": " - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the 
lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21214"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n conditions:\n - lastProbeTime: \"2021-02-05T13:14:33Z\"\n lastTransitionTime: \"2021-02-05T13:14:33Z\"\n status: \"True\"\n type: Suspended\n startTime: \"2021-02-05T13:13:48Z\"", "completion": "This yaml code 2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of tha", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21215"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...", "completion": "spec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. 
Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21216"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: old\n ...\nspec:\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they a", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21217"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:", "completion": " manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. 
Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21218"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n manualSelector: true\n selector:\n matchLabels:\n batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002\n ...", "completion": "This yaml code 2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21219"} {"prompt": "Complete this yaml code:\n\nkind: Job\nmetadata:\n name: new", "completion": " ...\nspec:\n podReplacementPolicy: Failed\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21220"} {"prompt": "Explain this yaml code:\n\nkind: Job\nmetadata:\n name: new\n ...\nspec:\n podReplacementPolicy: Failed\n ...", "completion": "This yaml code 2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. 
The value of the field is the number of Pods owned by the Job that are currently terminating", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21221"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job", "completion": "# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "language": "yaml", "type": "kubernetes-manifest", "context": "2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21222"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\n# .metadata and .spec omitted\nstatus:\n terminating: 3 # three Pods are terminating and have not yet reached the Failed phase", "completion": "This yaml code 2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.5 - Jobs | This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21223"} {"prompt": "Complete this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", 
"completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21224"} {"prompt": "Explain this yaml code:\n\napplication/job/cronjob.yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21225"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "language": "yaml", "type": "kubernetes-manifest", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21226"} {"prompt": "Explain this 
yaml code:\n\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: hello\nspec:\n schedule: \"* * * * *\"\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n imagePullPolicy: IfNotPresent\n command:\n - /bin/sh\n - -c\n - date; echo Hello from the Kubernetes cluster\n restartPolicy: OnFailure", "completion": "This yaml code 2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.7 - CronJob | (Running Automated Tasks with a CronJob takes you through this example in more detail).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21227"} {"prompt": "Complete this yaml code:\n\ncontrollers/replication.yaml\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21228"} {"prompt": "Explain this yaml code:\n\ncontrollers/replication.yaml\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.8 - ReplicationController | Run the example job 
by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21229"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21230"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code 2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "2.8 - ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21231"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This bash code 2.8 - ReplicationController | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21232"} {"prompt": "Explain this text 
code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This text code 2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "text", "type": "code-example_explanation", "context": "2.8 - ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21233"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This text code 2.8 - ReplicationController | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "2.8 - ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21234"} {"prompt": "Complete this yaml code:\n\napplication/nginx-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:", "completion": " name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21235"} {"prompt": "Explain this yaml code:\n\napplication/nginx-app.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n 
labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21236"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx", "completion": " labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21237"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-svc\n labels:\n app: nginx\nspec:\n type: LoadBalancer\n ports:\n - port: 80\n selector:\n app: nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n 
containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code 4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "language": "yaml", "type": "deployment-definition_explanation", "context": "4 - Managing Workloads | Multiple resources can be created the same way as a single resource:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21238"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4 - Managing Workloads | The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | The resources will be created in the order they appear in the manifest. 
Therefore, it's best to specify the Service first, since that will ensure the scheduler can spread the pods associated with the", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21239"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \\\n -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same direct", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21240"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | If you need to define more manifests, such as adding a ConfigMap, you can do that too.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21241"} {"prompt": "Explain this bash code:\n\nkubectl delete -f https://k8s.io/examples/application/nginx-app.yaml", "completion": "This bash code 4 - Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - 
Managing Workloads | In the case of two resources, you can specify both resources on the command line using the resource/name syntax:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21242"} {"prompt": "Explain this bash code:\n\nkubectl delete deployments/my-nginx services/my-nginx-svc", "completion": "This bash code 4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using -l or --selector, to filter resources by their labels:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21243"} {"prompt": "Explain this bash code:\n\nkubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )\nkubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'", "completion": "This bash code 4 - Managing Workloads | The output might be similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | The output might be similar to:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21244"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/development --recursive", "completion": "This bash code 4 - Managing Workloads | The --recursive argument works with any operation that accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | The --recursive argument works with any operation that 
accepts the --filename/-f argument such as: kubectl create, kubectl get, kubectl delete, kubectl describe, or even kubectl rollout.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21245"} {"prompt": "Explain this bash code:\n\nkubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive", "completion": "This bash code 4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | If you're interested in learning more about kubectl, go ahead and read Command line tool (kubectl).", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21246"} {"prompt": "Explain this bash code:\n\nkubectl create deployment my-nginx --image=nginx:1.14.2", "completion": "This bash code 4 - Managing Workloads | Ensure that there is 1 replica:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | Ensure that there is 1 replica:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21247"} {"prompt": "Explain this bash code:\n\nkubectl patch deployments/my-nginx --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\": 1}}'", "completion": "This bash code 4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | and allow Kubernetes to add more temporary replicas during a rollout, by setting a surge maximum of 100%:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21248"} {"prompt": "Explain this bash code:\n\nkubectl patch deployments/my-nginx --type='merge' -p 
'{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": \"100%\" }}}}'", "completion": "This bash code 4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | To update to version 1.16.1, change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 using kubectl edit:", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21249"} {"prompt": "Explain this bash code:\n\nkubectl edit deployment/my-nginx\n# Change the manifest to use the newer container image, then save your changes", "completion": "This bash code 4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are bein", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. 
It ensures that only a certain number of old replicas may be down while they are bein", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21250"} {"prompt": "Complete this bash code:\n\nkubectl apply -f my-deployment.yaml\n", "completion": "# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "language": "bash", "type": "kubectl-command", "context": "4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21251"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-deployment.yaml\n\n# wait for rollout to finish\nkubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout", "completion": "This bash code 4 - Managing Workloads | or", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | or", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21252"} {"prompt": "Complete this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n", "completion": "# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "language": "bash", "type": "kubectl-command", "context": "4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21253"} {"prompt": "Explain this bash code:\n\nkubectl apply -f backing-stateful-component.yaml\n\n# don't wait for rollout to finish, just check the status\nkubectl rollout status statefulsets/backing-stateful-component --watch=false", "completion": "This bash code 4 - Managing Workloads | You can also pause, resume or cancel a rollout. 
Visit kubectl rollout to learn more.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | You can also pause, resume or cancel a rollout. Visit kubectl rollout to learn more.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21254"} {"prompt": "Explain this bash code:\n\nkubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'\nkubectl get pods my-nginx-v4-9gw19 -o yaml", "completion": "This bash code 4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21255"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:", "completion": " annotations:\n description: my frontend running nginx\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21256"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n annotations:\n description: my frontend running nginx\n...", "completion": "This yaml code 4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "4 - Managing Workloads | For more information, see annotations and kubectl annotate.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21257"} {"prompt": "Explain this text code:\n\n# This requires an existing source of container and Pod metrics\nkubectl autoscale deployment/my-nginx --min=1 --max=3", "completion": "This text code 
4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "language": "text", "type": "code-example_explanation", "context": "4 - Managing Workloads | Now your nginx replicas will be scaled up and down as needed, automatically.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21258"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml", "completion": "This bash code 4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | To learn more about the underlying mechanism, read server-side apply.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21259"} {"prompt": "Complete this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n", "completion": "kubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "language": "bash", "type": "kubectl-command", "context": "4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_completion_21260"} {"prompt": "Explain this bash code:\n\nkubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml\nvi /tmp/nginx.yaml\n# do some edit, and then save the file\n\nkubectl apply -f /tmp/nginx.yaml\ndeployment.apps/my-nginx configured\n\nrm /tmp/nginx.yaml", "completion": "This bash code 4 - Managing Workloads | This allows you to do more significant changes more easily. 
Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | This allows you to do more significant changes more easily. Note that you can specify the editor with your EDITOR or KUBE_EDITOR environment variables.", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21261"} {"prompt": "Explain this bash code:\n\nkubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force", "completion": "This bash code 4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "language": "bash", "type": "kubectl-command_explanation", "context": "4 - Managing Workloads | © 2025 The Kubernetes Authors | Documentation Distributed under CC BY 4.0", "source": "https://kubernetes.io/docs/concepts/workloads/_print/", "id": "k8s_00658_code_explain_21262"} {"prompt": "Explain this text code:\n\n# List the users that kubectl knows about\n kubectl config get-users", "completion": "This text code kubectl config get-users | help for get-users", "language": "text", "type": "code-example_explanation", "context": "kubectl config get-users | help for get-users", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-users/", "id": "k8s_00659_code_explain_21263"} {"prompt": "Explain this text code:\n\n# List the users that kubectl knows about\n kubectl config get-users", "completion": "This text code kubectl config get-users | help for get-users", "language": "text", "type": "code-example_explanation", "context": "kubectl config get-users | help for get-users", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_get-users/", "id": "k8s_00659_code_explain_21264"} {"prompt": "Explain this bash code:\n\nkubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | 
TYPE/NAME)", "completion": "This bash code kubectl describe | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl describe | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/", "id": "k8s_00660_code_explain_21265"} {"prompt": "Complete this text code:\n\n# Describe a node\n kubectl describe nodes kubernetes-node-emt8.c.myproject.internal\n \n # Describe a pod\n kubectl describe pods/nginx\n \n # Describe a pod identified by type and name in \"pod.json\"\n kubectl describe -f pod.json\n ", "completion": " # Describe all pods\n kubectl describe pods\n \n # Describe pods by label name=myLabel\n kubectl describe pods -l name=myLabel\n \n # Describe all pods managed by the 'frontend' replication controller\n # (rc-created pods get the name of the rc as a prefix in the pod name)\n kubectl describe pods frontend", "language": "text", "type": "code-example", "context": "kubectl describe | If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/", "id": "k8s_00660_code_completion_21266"} {"prompt": "Explain this text code:\n\n# Describe a node\n kubectl describe nodes kubernetes-node-emt8.c.myproject.internal\n \n # Describe a pod\n kubectl describe pods/nginx\n \n # Describe a pod identified by type and name in \"pod.json\"\n kubectl describe -f pod.json\n \n # Describe all pods\n kubectl describe pods\n \n # Describe pods by label name=myLabel\n kubectl describe pods -l name=myLabel\n \n # Describe all pods managed by the 'frontend' replication controller\n # (rc-created pods get the name of the rc as a prefix in the pod name)\n kubectl describe pods frontend", "completion": "This text code kubectl describe | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "text", "type": "code-example_explanation", "context": "kubectl describe | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/", "id": "k8s_00660_code_explain_21267"} {"prompt": "Explain this bash code:\n\nkubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME)", "completion": "This bash code kubectl describe | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl describe | If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/", "id": "k8s_00660_code_explain_21268"} {"prompt": "Complete this text code:\n\n# Describe a node\n kubectl describe nodes kubernetes-node-emt8.c.myproject.internal\n \n # Describe a pod\n kubectl describe pods/nginx\n \n # Describe a pod identified by type and name in \"pod.json\"\n kubectl describe -f pod.json\n ", "completion": " # Describe all pods\n kubectl describe pods\n \n # Describe pods by label name=myLabel\n kubectl describe pods -l name=myLabel\n \n # Describe all pods managed by the 'frontend' replication controller\n # (rc-created pods get the name of the rc as a prefix in the pod name)\n kubectl describe pods frontend", "language": "text", "type": "code-example", "context": "kubectl describe | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/", "id": "k8s_00660_code_completion_21269"} {"prompt": "Explain this text code:\n\n# Describe a node\n kubectl describe nodes kubernetes-node-emt8.c.myproject.internal\n \n # Describe a pod\n kubectl describe pods/nginx\n \n # Describe a pod identified by type and name in \"pod.json\"\n kubectl describe -f pod.json\n \n # Describe all pods\n kubectl describe pods\n \n # Describe pods by label name=myLabel\n kubectl describe pods -l name=myLabel\n \n # Describe all pods managed by the 'frontend' replication controller\n # (rc-created pods get the name of the rc as a prefix in the pod name)\n kubectl describe pods frontend", "completion": "This text code kubectl describe | If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.", "language": "text", "type": "code-example_explanation", "context": "kubectl describe | If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/", "id": "k8s_00660_code_explain_21270"} {"prompt": "Explain this text code:\n\ncontainer_t, container_init_t, container_kvm_t, container_engine_t", "completion": "This text code Mapping PodSecurityPolicies to Pod Security Standards | Baseline: Anything", "language": "text", "type": "code-example_explanation", "context": "Mapping PodSecurityPolicies to Pod Security Standards | Baseline: Anything", "source": "https://kubernetes.io/docs/reference/access-authn-authz/psp-to-pod-security-standards/", "id": "k8s_00661_code_explain_21271"} {"prompt": "Explain this go code:\n\nimport \"k8s.io/kube-aggregator/pkg/apis/apiregistration/v1\"", "completion": "This go code APIService | APIService represents a server for a particular GroupVersion. Name must be \"version.group\".", "language": "go", "type": "code-example_explanation", "context": "APIService | APIService represents a server for a particular GroupVersion. Name must be \"version.group\".", "source": "https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/api-service-v1/", "id": "k8s_00663_code_explain_21272"} {"prompt": "Explain this text code:\n\nSubject: O = kubeadm:cluster-admins, CN = kubernetes-admin", "completion": "This text code Implementation details | During kubeadm init another kubeconfig file is generated and stored in /etc/kubernetes/super-admin.conf. 
This file includes a certificate with Subject: O = system:masters, CN = kubernetes-super-admin.", "language": "text", "type": "code-example_explanation", "context": "Implementation details | During kubeadm init another kubeconfig file is generated and stored in /etc/kubernetes/super-admin.conf. This file includes a certificate with Subject: O = system:masters, CN = kubernetes-super-admin.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/", "id": "k8s_00670_code_explain_21273"} {"prompt": "Explain this text code:\n\nSubject: O = system:masters, CN = kubernetes-super-admin", "completion": "This text code Implementation details | See RBAC user facing role bindings for additional information on RBAC and built-in ClusterRoles and groups.", "language": "text", "type": "code-example_explanation", "context": "Implementation details | See RBAC user facing role bindings for additional information on RBAC and built-in ClusterRoles and groups.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/", "id": "k8s_00670_code_explain_21274"} {"prompt": "Explain this text code:\n\n--requestheader-extra-headers-prefix=X-Remote-Extra-", "completion": "This text code Implementation details | The static Pod manifest for the controller manager is affected by following parameters provided by the users:", "language": "text", "type": "code-example_explanation", "context": "Implementation details | The static Pod manifest for the controller manager is affected by following parameters provided by the users:", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/", "id": "k8s_00670_code_explain_21275"} {"prompt": "Explain this text code:\n\nsystem:certificates.k8s.io:certificatesigningrequests:nodeclient", "completion": "This text code Implementation details | The role system:certificates.k8s.io:certificatesigningrequests:nodeclient should be created as well, granting POST 
permission to /apis/certificates.k8s.io/certificatesigningrequests/nodeclient.", "language": "text", "type": "code-example_explanation", "context": "Implementation details | The role system:certificates.k8s.io:certificatesigningrequests:nodeclient should be created as well, granting POST permission to /apis/certificates.k8s.io/certificatesigningrequests/nodeclient.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/", "id": "k8s_00670_code_explain_21276"} {"prompt": "Explain this text code:\n\nsystem:certificates.k8s.io:certificatesigningrequests:nodeclient", "completion": "This text code Implementation details | Kubeadm ensures that certificate rotation is enabled for nodes, and that a new certificate request for nodes will get its CSR request automatically approved by the csrapprover controller.", "language": "text", "type": "code-example_explanation", "context": "Implementation details | Kubeadm ensures that certificate rotation is enabled for nodes, and that a new certificate request for nodes will get its CSR request automatically approved by the csrapprover controller.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/", "id": "k8s_00670_code_explain_21277"} {"prompt": "Explain this text code:\n\n/apis/certificates.k8s.io/certificatesigningrequests/nodeclient", "completion": "This text code Implementation details | Kubeadm ensures that certificate rotation is enabled for nodes, and that a new certificate request for nodes will get its CSR request automatically approved by the csrapprover controller.", "language": "text", "type": "code-example_explanation", "context": "Implementation details | Kubeadm ensures that certificate rotation is enabled for nodes, and that a new certificate request for nodes will get its CSR request automatically approved by the csrapprover controller.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/", 
"id": "k8s_00670_code_explain_21278"} {"prompt": "Explain this text code:\n\nsystem:certificates.k8s.io:certificatesigningrequests:selfnodeclient", "completion": "This text code Implementation details | This phase creates the cluster-info ConfigMap in the kube-public namespace.", "language": "text", "type": "code-example_explanation", "context": "Implementation details | This phase creates the cluster-info ConfigMap in the kube-public namespace.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/", "id": "k8s_00670_code_explain_21279"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_completion_21280"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "completion": "This text code Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21281"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | You can permanently save the namespace 
for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21282"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21283"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_completion_21284"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21285"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 1d", "completion": "kube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "language": "text", "type": "code-example", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_completion_21286"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 1d\nkube-node-lease Active 1d\nkube-public Active 1d\nkube-system Active 1d", "completion": "This text code Namespaces | To set the namespace for a current request, use the --namespace flag.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | To set the namespace for a current request, use the --namespace flag.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21287"} {"prompt": "Explain this shell code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This shell code Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "shell", "type": "code-example_explanation", "context": "Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": 
"k8s_00672_code_explain_21288"} {"prompt": "Explain this shell code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This shell code Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "shell", "type": "code-example_explanation", "context": "Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21289"} {"prompt": "Complete this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "shell", "type": "code-example", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_completion_21290"} {"prompt": "Explain this shell code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This shell code Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "shell", "type": "code-example_explanation", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21291"} {"prompt": "Explain this bash code:\n\nkubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>\nkubectl get pods --namespace=<insert-namespace-name-here>", "completion": "This bash code Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | You can permanently save the namespace for all subsequent kubectl commands in that context.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21292"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<insert-namespace-name-here>\n# Validate it\nkubectl config view --minify | grep namespace:", "completion": "This bash code Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "language": "bash", "type": "kubectl-command_explanation", "context": "Namespaces | When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>,", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21293"} {"prompt": "Complete this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true", "completion": "\n# Not in a namespace\nkubectl api-resources --namespaced=false", "language": "text", "type": "code-example", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. 
The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_completion_21294"} {"prompt": "Explain this text code:\n\n# In a namespace\nkubectl api-resources --namespaced=true\n\n# Not in a namespace\nkubectl api-resources --namespaced=false", "completion": "This text code Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "language": "text", "type": "code-example_explanation", "context": "Namespaces | The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces", "id": "k8s_00672_code_explain_21295"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis", "completion": " image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21296"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "completion": "This yaml code Configure a Pod to Use a Volume for Storage | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21297"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml", "completion": "This bash code Configure a Pod to Use a Volume for Storage | Verify that the Pod's Container is running, and then watch for changes to the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a Volume for Storage | Verify that the Pod's Container is running, and then watch for changes to the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21298"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s", "completion": "This text code Configure a Pod to Use a Volume for Storage | In another terminal, get a shell to the running Container:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In another terminal, get a shell to the running Container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21299"} {"prompt": "Explain this text code:\n\nroot@redis:/data# cd /data/redis/\nroot@redis:/data/redis# echo Hello > test-file", "completion": "This text code Configure a Pod to Use a Volume for Storage | In your shell, list the running processes:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In your shell, list the running processes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21300"} {"prompt": "Explain this text code:\n\nroot@redis:/data/redis# apt-get update\nroot@redis:/data/redis# apt-get install procps\nroot@redis:/data/redis# ps aux", "completion": "This text code 
Configure a Pod to Use a Volume for Storage | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21301"} {"prompt": "Complete this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nredis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379", "completion": "root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash\nroot 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21302"} {"prompt": "Explain this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nredis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379\nroot 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash\nroot 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux", "completion": "This text code Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21303"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s", "completion": "redis 0/1 Completed 0 6m\nredis 1/1 Running 1 6m", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. 
This is because the Redis Pod has a restartPolicy of Always.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21304"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s\nredis 0/1 Completed 0 6m\nredis 1/1 Running 1 6m", "completion": "This text code Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. This is because the Redis Pod has a restartPolicy of Always.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. This is because the Redis Pod has a restartPolicy of Always.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21305"} {"prompt": "Explain this text code:\n\nroot@redis:/data/redis# cd /data/redis/\nroot@redis:/data/redis# ls\ntest-file", "completion": "This text code Configure a Pod to Use a Volume for Storage | Delete the Pod that you created for this exercise:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | Delete the Pod that you created for this exercise:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21306"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis", "completion": " image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21307"} 
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "completion": "This yaml code Configure a Pod to Use a Volume for Storage | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21308"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml", "completion": "This shell code Configure a Pod to Use a Volume for Storage | Verify that the Pod's Container is running, and then watch for changes to the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | Verify that the Pod's Container is running, and then watch for changes to the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21309"} {"prompt": "Explain this console code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s", "completion": "This console code Configure a Pod to Use a Volume for Storage | In another terminal, get a shell to the running Container:", "language": "console", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In another terminal, get a shell to the running Container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21310"} {"prompt": "Explain this shell code:\n\nroot@redis:/data# cd /data/redis/\nroot@redis:/data/redis# echo Hello > test-file", "completion": "This shell code Configure a Pod to Use a Volume for Storage | In your shell, list 
the running processes:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In your shell, list the running processes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21311"} {"prompt": "Explain this shell code:\n\nroot@redis:/data/redis# apt-get update\nroot@redis:/data/redis# apt-get install procps\nroot@redis:/data/redis# ps aux", "completion": "This shell code Configure a Pod to Use a Volume for Storage | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21312"} {"prompt": "Complete this console code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nredis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379", "completion": "root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash\nroot 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux", "language": "console", "type": "code-example", "context": "Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21313"} {"prompt": "Explain this console code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nredis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379\nroot 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash\nroot 15 0.0 0.0 17500 2072 ? 
R+ 00:48 0:00 ps aux", "completion": "This console code Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "language": "console", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21314"} {"prompt": "Complete this console code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s", "completion": "redis 0/1 Completed 0 6m\nredis 1/1 Running 1 6m", "language": "console", "type": "code-example", "context": "Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. This is because the Redis Pod has a restartPolicy of Always.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21315"} {"prompt": "Explain this console code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s\nredis 0/1 Completed 0 6m\nredis 1/1 Running 1 6m", "completion": "This console code Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. This is because the Redis Pod has a restartPolicy of Always.", "language": "console", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. 
This is because the Redis Pod has a restartPolicy of Always.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21316"} {"prompt": "Explain this shell code:\n\nroot@redis:/data/redis# cd /data/redis/\nroot@redis:/data/redis# ls\ntest-file", "completion": "This shell code Configure a Pod to Use a Volume for Storage | Delete the Pod that you created for this exercise:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | Delete the Pod that you created for this exercise:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21317"} {"prompt": "Complete this yaml code:\n\npods/storage/redis.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:", "completion": " - name: redis\n image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21318"} {"prompt": "Explain this yaml code:\n\npods/storage/redis.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "completion": "This yaml code Configure a Pod to Use a Volume for Storage | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21319"} {"prompt": "Complete this yaml 
code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis", "completion": " image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "language": "yaml", "type": "pod-definition", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21320"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}", "completion": "This yaml code Configure a Pod to Use a Volume for Storage | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Pod to Use a Volume for Storage | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21321"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml", "completion": "This bash code Configure a Pod to Use a Volume for Storage | Verify that the Pod's Container is running, and then watch for changes to the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod to Use a Volume for Storage | Verify that the Pod's Container is running, and then watch for changes to the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21322"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s", "completion": "This text code Configure a Pod to Use a Volume for Storage | In another terminal, get a shell to the running Container:", "language": "text", "type": 
"code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In another terminal, get a shell to the running Container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21323"} {"prompt": "Explain this text code:\n\nroot@redis:/data# cd /data/redis/\nroot@redis:/data/redis# echo Hello > test-file", "completion": "This text code Configure a Pod to Use a Volume for Storage | In your shell, list the running processes:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In your shell, list the running processes:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21324"} {"prompt": "Explain this text code:\n\nroot@redis:/data/redis# apt-get update\nroot@redis:/data/redis# apt-get install procps\nroot@redis:/data/redis# ps aux", "completion": "This text code Configure a Pod to Use a Volume for Storage | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21325"} {"prompt": "Complete this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nredis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379", "completion": "root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash\nroot 15 0.0 0.0 17500 2072 ? 
R+ 00:48 0:00 ps aux", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21326"} {"prompt": "Explain this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nredis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379\nroot 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash\nroot 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux", "completion": "This text code Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | In your shell, kill the Redis process:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21327"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s", "completion": "redis 0/1 Completed 0 6m\nredis 1/1 Running 1 6m", "language": "text", "type": "code-example", "context": "Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. This is because the Redis Pod has a restartPolicy of Always.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_completion_21328"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 13s\nredis 0/1 Completed 0 6m\nredis 1/1 Running 1 6m", "completion": "This text code Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. 
This is because the Redis Pod has a restartPolicy of Always.", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | At this point, the Container has terminated and restarted. This is because the Redis Pod has a restartPolicy of Always.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21329"} {"prompt": "Explain this text code:\n\nroot@redis:/data/redis# cd /data/redis/\nroot@redis:/data/redis# ls\ntest-file", "completion": "This text code Configure a Pod to Use a Volume for Storage | Delete the Pod that you created for this exercise:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod to Use a Volume for Storage | Delete the Pod that you created for this exercise:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/", "id": "k8s_00673_code_explain_21330"} {"prompt": "Complete this text code:\n\n$ telepresence connect\n ", "completion": "Launching Telepresence Daemon\n...\nConnected to context default (https://<cluster public IP>)", "language": "text", "type": "code-example", "context": "Developing and debugging services locally using telepresence | You can curl services using the Kubernetes syntax e.g. curl -ik https://kubernetes.default", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/local-debugging/", "id": "k8s_00674_code_completion_21331"} {"prompt": "Explain this text code:\n\n$ telepresence connect\n \nLaunching Telepresence Daemon\n...\nConnected to context default (https://<cluster public IP>)", "completion": "This text code Developing and debugging services locally using telepresence | You can curl services using the Kubernetes syntax e.g. 
curl -ik https://kubernetes.default", "language": "text", "type": "code-example_explanation", "context": "Developing and debugging services locally using telepresence | You can curl services using the Kubernetes syntax e.g. curl -ik https://kubernetes.default", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/local-debugging/", "id": "k8s_00674_code_explain_21332"} {"prompt": "Complete this text code:\n\n$ telepresence connect\n ", "completion": "Launching Telepresence Daemon\n...\nConnected to context default (https://<cluster public IP>)", "language": "text", "type": "code-example", "context": "Developing and debugging services locally using telepresence | You can curl services using the Kubernetes syntax e.g. curl -ik https://kubernetes.default", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/local-debugging/", "id": "k8s_00674_code_completion_21333"} {"prompt": "Explain this text code:\n\n$ telepresence connect\n \nLaunching Telepresence Daemon\n...\nConnected to context default (https://<cluster public IP>)", "completion": "This text code Developing and debugging services locally using telepresence | You can curl services using the Kubernetes syntax e.g. curl -ik https://kubernetes.default", "language": "text", "type": "code-example_explanation", "context": "Developing and debugging services locally using telepresence | You can curl services using the Kubernetes syntax e.g. 
curl -ik https://kubernetes.default", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/local-debugging/", "id": "k8s_00674_code_explain_21334"} {"prompt": "Explain this text code:\n\ntelepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT", "completion": "This text code Developing and debugging services locally using telepresence | Where:", "language": "text", "type": "code-example_explanation", "context": "Developing and debugging services locally using telepresence | Where:", "source": "https://kubernetes.io/docs/tasks/debug/debug-cluster/local-debugging/", "id": "k8s_00674_code_explain_21335"} {"prompt": "Explain this text code:\n\nkubeadm upgrade apply phase bootstrap-token [flags]", "completion": "This text code kubeadm upgrade phases | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm upgrade phases | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase/", "id": "k8s_00676_code_explain_21336"} {"prompt": "Explain this text code:\n\nkubeadm upgrade apply phase bootstrap-token [flags]", "completion": "This text code kubeadm upgrade phases | Path to a kubeadm configuration file.", "language": "text", "type": "code-example_explanation", "context": "kubeadm upgrade phases | Path to a kubeadm configuration file.", "source": "https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase/", "id": "k8s_00676_code_explain_21337"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nencryptionAlgorithm: <ALGORITHM>", "completion": "This yaml code Certificate Management with kubeadm | <ALGORITHM> can be one of RSA-2048 (default), RSA-3072, RSA-4096 or ECDSA-P256.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | <ALGORITHM> can be one of RSA-2048 (default), RSA-3072, 
RSA-4096 or ECDSA-P256.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21338"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration", "completion": "certificateValidityPeriod: 8760h # Default: 365 days × 24 hours = 1 year\ncaCertificateValidityPeriod: 87600h # Default: 365 days × 24 hours * 10 = 10 years", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21339"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncertificateValidityPeriod: 8760h # Default: 365 days × 24 hours = 1 year\ncaCertificateValidityPeriod: 87600h # Default: 365 days × 24 hours * 10 = 10 years", "completion": "This yaml code Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21340"} {"prompt": "Complete this text code:\n\nCERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED\nadmin.conf Dec 30, 2020 23:36 UTC 364d no\napiserver Dec 30, 2020 23:36 UTC 364d ca no\napiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\napiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca no\ncontroller-manager.conf Dec 30, 2020 
23:36 UTC 364d no\netcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no", "completion": "etcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no\nfront-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no\nscheduler.conf Dec 30, 2020 23:36 UTC 364d no\n\nCERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED\nca Dec 28, 2029 23:36 UTC 9y no\netcd-ca Dec 28, 2029 23:36 UTC 9y no\nfront-proxy-ca Dec 28, 2029 23:36 UTC 9y no", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21341"} {"prompt": "Explain this text code:\n\nCERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED\nadmin.conf Dec 30, 2020 23:36 UTC 364d no\napiserver Dec 30, 2020 23:36 UTC 364d ca no\napiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\napiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca no\ncontroller-manager.conf Dec 30, 2020 23:36 UTC 364d no\netcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no\nfront-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no\nscheduler.conf Dec 30, 2020 23:36 UTC 364d no\n\nCERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED\nca Dec 28, 2029 23:36 UTC 9y no\netcd-ca Dec 28, 2029 23:36 UTC 9y no\nfront-proxy-ca Dec 28, 2029 23:36 UTC 9y no", "completion": "This text code Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig 
files used by kubeadm (admin.conf, co", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21342"} {"prompt": "Explain this text code:\n\nclient-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem\nclient-key: /var/lib/kubelet/pki/kubelet-client-current.pem", "completion": "This text code Certificate Management with kubeadm | kubeadm renews all the certificates during control plane upgrade.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | kubeadm renews all the certificates during control plane upgrade.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21343"} {"prompt": "Explain this text code:\n\n# If you are running cluster with a replicated control plane, this command\n# needs to be executed on all the control-plane nodes.\nkubeadm certs renew all", "completion": "This text code Certificate Management with kubeadm | Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. On such a system, to update the contents of $HOME/.kube/con", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. 
On such a system, to update the contents of $HOME/.kube/con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21344"} {"prompt": "Explain this text code:\n\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "completion": "This text code Certificate Management with kubeadm | This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21345"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:", "completion": " - name: \"cluster-signing-cert-file\"\n value: \"/etc/kubernetes/pki/ca.crt\"\n - name: \"cluster-signing-key-file\"\n value: \"/etc/kubernetes/pki/ca.key\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21346"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:\n - name: \"cluster-signing-cert-file\"\n value: \"/etc/kubernetes/pki/ca.crt\"\n - name: \"cluster-signing-key-file\"\n value: \"/etc/kubernetes/pki/ca.key\"", "completion": "This yaml code Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21347"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---", "completion": "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nserverTLSBootstrap: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21348"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nserverTLSBootstrap: true", "completion": "This yaml code Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21349"} {"prompt": "Explain this text code:\n\nNAME AGE SIGNERNAME REQUESTOR CONDITION\ncsr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending\ncsr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending", "completion": "This text code Certificate Management with kubeadm | To approve them you can do the following:", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | To approve them you can do the 
following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21350"} {"prompt": "Complete this yaml code:\n\n# example.yaml\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# Will be used as the target \"cluster\" in the kubeconfig", "completion": "clusterName: \"kubernetes\"\n# Will be used as the \"server\" (IP or DNS name) of this cluster in the kubeconfig\ncontrolPlaneEndpoint: \"some-dns-address:6443\"\n# The cluster CA key and certificate will be loaded from this local directory\ncertificatesDir: \"/etc/kubernetes/pki\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. To see the settings of an existing cluster use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21351"} {"prompt": "Explain this yaml code:\n\n# example.yaml\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# Will be used as the target \"cluster\" in the kubeconfig\nclusterName: \"kubernetes\"\n# Will be used as the \"server\" (IP or DNS name) of this cluster in the kubeconfig\ncontrolPlaneEndpoint: \"some-dns-address:6443\"\n# The cluster CA key and certificate will be loaded from this local directory\ncertificatesDir: \"/etc/kubernetes/pki\"", "completion": "This yaml code Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. To see the settings of an existing cluster use:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. 
To see the settings of an existing cluster use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21352"} {"prompt": "Explain this bash code:\n\nkubectl get cm kubeadm-config -n kube-system -o=jsonpath=\"{.data.ClusterConfiguration}\"", "completion": "This bash code Certificate Management with kubeadm | The following example will generate a kubeconfig file with credentials valid for 24 hours for a new user johndoe that is part of the appdevs group:", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificate Management with kubeadm | The following example will generate a kubeconfig file with credentials valid for 24 hours for a new user johndoe that is part of the appdevs group:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21353"} {"prompt": "Explain this text code:\n\nkubeadm kubeconfig user --config example.yaml --org appdevs --client-name johndoe --validity-period 24h", "completion": "This text code Certificate Management with kubeadm | The following example will generate a kubeconfig file with administrator credentials valid for 1 week:", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | The following example will generate a kubeconfig file with administrator credentials valid for 1 week:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21354"} {"prompt": "Explain this text code:\n\nkubeadm kubeconfig user --config example.yaml --client-name admin --validity-period 168h", "completion": "This text code Certificate Management with kubeadm | You can create certificate signing requests with kubeadm certs generate-csr. Calling this command will generate .csr / .key file pairs for regular certificates. 
For certificates embedded in kubeconfig", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | You can create certificate signing requests with kubeadm certs generate-csr. Calling this command will generate .csr / .key file pairs for regular certificates. For certificates embedded in kubeconfig", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21355"} {"prompt": "Complete this text code:\n\nsudo kubeadm init phase certs ca\nsudo kubeadm init phase certs etcd-ca", "completion": "sudo kubeadm init phase certs front-proxy-ca\nsudo kubeadm init phase certs sa", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21356"} {"prompt": "Explain this text code:\n\nsudo kubeadm init phase certs ca\nsudo kubeadm init phase certs etcd-ca\nsudo kubeadm init phase certs front-proxy-ca\nsudo kubeadm init phase certs sa", "completion": "This text code Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": 
"k8s_00677_code_explain_21357"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\n# Set certificate expiration time in days\nDAYS=365\n\n# Process all CSR files except those for front-proxy and etcd\nfind ./ -name \"*.csr\" | grep -v \"pki/etcd\" | grep -v \"front-proxy\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/ca.crt -CAkey ./pki/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\n sleep 2\ndone", "completion": "\n# Process all etcd CSRs\nfind ./pki/etcd -name \"*.csr\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/etcd/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/etcd/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/etcd/ca.crt -CAkey ./pki/etcd/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\ndone\n\n# Process front-proxy CSRs\necho \"* Processing ./pki/front-proxy-client.csr ...\"\nopenssl x509 -req -days \"${DAYS}\" -CA ./pki/front-proxy-ca.crt -CAkey ./pki/front-proxy-ca.key -CAcreateserial \\\n -in ./pki/front-proxy-client.csr -out ./pki/front-proxy-client.crt", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21358"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# Set certificate expiration time in days\nDAYS=365\n\n# Process all CSR files except those for front-proxy and etcd\nfind ./ -name \"*.csr\" | grep -v \"pki/etcd\" | grep -v \"front-proxy\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the
extension\n if [ -f \"./pki/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/ca.crt -CAkey ./pki/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\n sleep 2\ndone\n\n# Process all etcd CSRs\nfind ./pki/etcd -name \"*.csr\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/etcd/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/etcd/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/etcd/ca.crt -CAkey ./pki/etcd/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\ndone\n\n# Process front-proxy CSRs\necho \"* Processing ./pki/front-proxy-client.csr ...\"\nopenssl x509 -req -days \"${DAYS}\" -CA ./pki/front-proxy-ca.crt -CAkey ./pki/front-proxy-ca.key -CAcreateserial \\\n -in ./pki/front-proxy-client.csr -out ./pki/front-proxy-client.crt", "completion": "This text code Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21359"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\nCLUSTER=kubernetes\nfind ./ -name \"*.conf\" | while read -r FILE;\ndo", "completion": " echo \"* Processing ${FILE} ...\"\n KUBECONFIG=\"${FILE}\" kubectl config set-cluster \"${CLUSTER}\" --certificate-authority ./pki/ca.crt --embed-certs\n USER=$(KUBECONFIG=\"${FILE}\" kubectl config view -o jsonpath='{.users[0].name}')\n KUBECONFIG=\"${FILE}\" kubectl config set-credentials \"${USER}\" --client-certificate \"${FILE}.crt\" --embed-certs\ndone", "language": "text", "type": "code-example", "context": "Certificate Management with
kubeadm | Perform this step on all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21360"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\nCLUSTER=kubernetes\nfind ./ -name \"*.conf\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n KUBECONFIG=\"${FILE}\" kubectl config set-cluster \"${CLUSTER}\" --certificate-authority ./pki/ca.crt --embed-certs\n USER=$(KUBECONFIG=\"${FILE}\" kubectl config view -o jsonpath='{.users[0].name}')\n KUBECONFIG=\"${FILE}\" kubectl config set-credentials \"${USER}\" --client-certificate \"${FILE}.crt\" --embed-certs\ndone", "completion": "This text code Certificate Management with kubeadm | Perform this step on all nodes that have CSR files.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Perform this step on all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21361"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\n# Cleanup CSR files", "completion": "rm -f ./*.csr ./pki/*.csr ./pki/etcd/*.csr # Clean all CSR files\n\n# Cleanup CRT files that were already embedded in kubeconfig files\nrm -f ./*.crt", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21362"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# Cleanup CSR files\nrm -f ./*.csr ./pki/*.csr ./pki/etcd/*.csr # Clean all CSR files\n\n# Cleanup CRT files that were already embedded in kubeconfig files\nrm -f ./*.crt", "completion": "This text code Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "language": 
"text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21363"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nencryptionAlgorithm: <ALGORITHM>", "completion": "This yaml code Certificate Management with kubeadm | <ALGORITHM> can be one of RSA-2048 (default), RSA-3072, RSA-4096 or ECDSA-P256.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | <ALGORITHM> can be one of RSA-2048 (default), RSA-3072, RSA-4096 or ECDSA-P256.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21364"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration", "completion": "certificateValidityPeriod: 8760h # Default: 365 days × 24 hours = 1 year\ncaCertificateValidityPeriod: 87600h # Default: 365 days × 24 hours * 10 = 10 years", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21365"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncertificateValidityPeriod: 8760h # Default: 365 days × 24 hours = 1 year\ncaCertificateValidityPeriod: 87600h # Default: 365 days × 24 hours * 10 = 10 years", "completion": "This yaml code Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21366"} {"prompt": "Explain this text code:\n\nkubeadm init phase kubeconfig all --config config.yaml", "completion": "This text code Certificate Management with kubeadm | Once the credentials are prepared on all nodes, call kubeadm init and kubeadm join for these nodes to join the cluster. kubeadm will use the existing kubeconfig and certificate files under /etc/kubern", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Once the credentials are prepared on all nodes, call kubeadm init and kubeadm join for these nodes to join the cluster. kubeadm will use the existing kubeconfig and certificate files under /etc/kubern", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21367"} {"prompt": "Complete this console code:\n\nCERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED\nadmin.conf Dec 30, 2020 23:36 UTC 364d no\napiserver Dec 30, 2020 23:36 UTC 364d ca no\napiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\napiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca no\ncontroller-manager.conf Dec 30, 2020 23:36 UTC 364d no\netcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no", "completion": "etcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no\nfront-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no\nscheduler.conf Dec 30, 2020 23:36 UTC 364d no\n\nCERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED\nca Dec 28, 2029 23:36 UTC 9y no\netcd-ca Dec 28, 2029 23:36 UTC 9y 
no\nfront-proxy-ca Dec 28, 2029 23:36 UTC 9y no", "language": "console", "type": "code-example", "context": "Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21368"} {"prompt": "Explain this console code:\n\nCERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED\nadmin.conf Dec 30, 2020 23:36 UTC 364d no\napiserver Dec 30, 2020 23:36 UTC 364d ca no\napiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\napiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca no\ncontroller-manager.conf Dec 30, 2020 23:36 UTC 364d no\netcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no\nfront-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no\nscheduler.conf Dec 30, 2020 23:36 UTC 364d no\n\nCERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED\nca Dec 28, 2029 23:36 UTC 9y no\netcd-ca Dec 28, 2029 23:36 UTC 9y no\nfront-proxy-ca Dec 28, 2029 23:36 UTC 9y no", "completion": "This console code Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "language": "console", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", 
"id": "k8s_00677_code_explain_21369"} {"prompt": "Explain this yaml code:\n\nclient-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem\nclient-key: /var/lib/kubelet/pki/kubelet-client-current.pem", "completion": "This yaml code Certificate Management with kubeadm | kubeadm renews all the certificates during control plane upgrade.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | kubeadm renews all the certificates during control plane upgrade.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21370"} {"prompt": "Explain this shell code:\n\n# If you are running cluster with a replicated control plane, this command\n# needs to be executed on all the control-plane nodes.\nkubeadm certs renew all", "completion": "This shell code Certificate Management with kubeadm | Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. On such a system, to update the contents of $HOME/.kube/con", "language": "shell", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. 
On such a system, to update the contents of $HOME/.kube/con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21371"} {"prompt": "Explain this shell code:\n\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "completion": "This shell code Certificate Management with kubeadm | This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.", "language": "shell", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21372"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:", "completion": " - name: \"cluster-signing-cert-file\"\n value: \"/etc/kubernetes/pki/ca.crt\"\n - name: \"cluster-signing-key-file\"\n value: \"/etc/kubernetes/pki/ca.key\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21373"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:\n - name: \"cluster-signing-cert-file\"\n value: \"/etc/kubernetes/pki/ca.crt\"\n - name: \"cluster-signing-key-file\"\n value: \"/etc/kubernetes/pki/ca.key\"", "completion": "This yaml code Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21374"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---", "completion": "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nserverTLSBootstrap: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21375"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nserverTLSBootstrap: true", "completion": "This yaml code Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21376"} {"prompt": "Explain this console code:\n\nNAME AGE SIGNERNAME REQUESTOR CONDITION\ncsr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending\ncsr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending", "completion": "This console code Certificate Management with kubeadm | To approve them you can do the following:", "language": "console", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | To approve them you 
can do the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21377"} {"prompt": "Explain this text code:\n\nSubject: O = system:masters, CN = kubernetes-super-admin", "completion": "This text code Certificate Management with kubeadm | You can use the kubeadm kubeconfig user command to generate kubeconfig files for additional users. The command accepts a mixture of command line flags and kubeadm configuration options. The generated", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | You can use the kubeadm kubeconfig user command to generate kubeconfig files for additional users. The command accepts a mixture of command line flags and kubeadm configuration options. The generated", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21378"} {"prompt": "Explain this text code:\n\nSubject: O = kubeadm:cluster-admins, CN = kubernetes-admin", "completion": "This text code Certificate Management with kubeadm | You can use the kubeadm kubeconfig user command to generate kubeconfig files for additional users. The command accepts a mixture of command line flags and kubeadm configuration options. The generated", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | You can use the kubeadm kubeconfig user command to generate kubeconfig files for additional users. The command accepts a mixture of command line flags and kubeadm configuration options. 
The generated", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21379"} {"prompt": "Complete this yaml code:\n\n# example.yaml\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# Will be used as the target \"cluster\" in the kubeconfig", "completion": "clusterName: \"kubernetes\"\n# Will be used as the \"server\" (IP or DNS name) of this cluster in the kubeconfig\ncontrolPlaneEndpoint: \"some-dns-address:6443\"\n# The cluster CA key and certificate will be loaded from this local directory\ncertificatesDir: \"/etc/kubernetes/pki\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. To see the settings of an existing cluster use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21380"} {"prompt": "Explain this yaml code:\n\n# example.yaml\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# Will be used as the target \"cluster\" in the kubeconfig\nclusterName: \"kubernetes\"\n# Will be used as the \"server\" (IP or DNS name) of this cluster in the kubeconfig\ncontrolPlaneEndpoint: \"some-dns-address:6443\"\n# The cluster CA key and certificate will be loaded from this local directory\ncertificatesDir: \"/etc/kubernetes/pki\"", "completion": "This yaml code Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. To see the settings of an existing cluster use:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. 
To see the settings of an existing cluster use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21381"} {"prompt": "Explain this shell code:\n\nkubectl get cm kubeadm-config -n kube-system -o=jsonpath=\"{.data.ClusterConfiguration}\"", "completion": "This shell code Certificate Management with kubeadm | The following example will generate a kubeconfig file with credentials valid for 24 hours for a new user johndoe that is part of the appdevs group:", "language": "shell", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | The following example will generate a kubeconfig file with credentials valid for 24 hours for a new user johndoe that is part of the appdevs group:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21382"} {"prompt": "Explain this shell code:\n\nkubeadm kubeconfig user --config example.yaml --org appdevs --client-name johndoe --validity-period 24h", "completion": "This shell code Certificate Management with kubeadm | The following example will generate a kubeconfig file with administrator credentials valid for 1 week:", "language": "shell", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | The following example will generate a kubeconfig file with administrator credentials valid for 1 week:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21383"} {"prompt": "Explain this shell code:\n\nkubeadm kubeconfig user --config example.yaml --client-name admin --validity-period 168h", "completion": "This shell code Certificate Management with kubeadm | You can create certificate signing requests with kubeadm certs generate-csr. Calling this command will generate .csr / .key file pairs for regular certificates. 
For certificates embedded in kubeconfig", "language": "shell", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | You can create certificate signing requests with kubeadm certs generate-csr. Calling this command will generate .csr / .key file pairs for regular certificates. For certificates embedded in kubeconfig", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21384"} {"prompt": "Complete this shell code:\n\nsudo kubeadm init phase certs ca\nsudo kubeadm init phase certs etcd-ca", "completion": "sudo kubeadm init phase certs front-proxy-ca\nsudo kubeadm init phase certs sa", "language": "shell", "type": "code-example", "context": "Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21385"} {"prompt": "Explain this shell code:\n\nsudo kubeadm init phase certs ca\nsudo kubeadm init phase certs etcd-ca\nsudo kubeadm init phase certs front-proxy-ca\nsudo kubeadm init phase certs sa", "completion": "This shell code Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "language": "shell", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": 
"k8s_00677_code_explain_21386"} {"prompt": "Complete this bash code:\n\n#!/bin/bash\n\n# Set certificate expiration time in days\nDAYS=365\n\n# Process all CSR files except those for front-proxy and etcd\nfind ./ -name \"*.csr\" | grep -v \"pki/etcd\" | grep -v \"front-proxy\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/ca.crt -CAkey ./pki/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\n sleep 2\ndone", "completion": "\n# Process all etcd CSRs\nfind ./pki/etcd -name \"*.csr\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/etcd/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/etcd/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/etcd/ca.crt -CAkey ./pki/etcd/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\ndone\n\n# Process front-proxy CSRs\necho \"* Processing ./pki/front-proxy-client.csr ...\"\nopenssl x509 -req -days \"${DAYS}\" -CA ./pki/front-proxy-ca.crt -CAkey ./pki/front-proxy-ca.key -CAcreateserial \\\n -in ./pki/front-proxy-client.csr -out ./pki/front-proxy-client.crt", "language": "bash", "type": "shell-command", "context": "Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21387"} {"prompt": "Explain this bash code:\n\n#!/bin/bash\n\n# Set certificate expiration time in days\nDAYS=365\n\n# Process all CSR files except those for front-proxy and etcd\nfind ./ -name \"*.csr\" | grep -v \"pki/etcd\" | grep -v \"front-proxy\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the 
extension\n if [ -f \"./pki/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/ca.crt -CAkey ./pki/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\n sleep 2\ndone\n\n# Process all etcd CSRs\nfind ./pki/etcd -name \"*.csr\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/etcd/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/etcd/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/etcd/ca.crt -CAkey ./pki/etcd/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\ndone\n\n# Process front-proxy CSRs\necho \"* Processing ./pki/front-proxy-client.csr ...\"\nopenssl x509 -req -days \"${DAYS}\" -CA ./pki/front-proxy-ca.crt -CAkey ./pki/front-proxy-ca.key -CAcreateserial \\\n -in ./pki/front-proxy-client.csr -out ./pki/front-proxy-client.crt", "completion": "This bash code Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "language": "bash", "type": "shell-command_explanation", "context": "Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21388"} {"prompt": "Complete this bash code:\n\n#!/bin/bash\n\nCLUSTER=kubernetes\nfind ./ -name \"*.conf\" | while read -r FILE;\ndo", "completion": " echo \"* Processing ${FILE} ...\"\n KUBECONFIG=\"${FILE}\" kubectl config set-cluster \"${CLUSTER}\" --certificate-authority ./pki/ca.crt --embed-certs\n USER=$(KUBECONFIG=\"${FILE}\" kubectl config view -o jsonpath='{.users[0].name}')\n KUBECONFIG=\"${FILE}\" kubectl config set-credentials \"${USER}\" --client-certificate \"${FILE}.crt\" --embed-certs\ndone", "language": "bash", "type": "shell-command", "context": "Certificate Management with 
kubeadm | Perform this step on all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21389"} {"prompt": "Explain this bash code:\n\n#!/bin/bash\n\nCLUSTER=kubernetes\nfind ./ -name \"*.conf\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n KUBECONFIG=\"${FILE}\" kubectl config set-cluster \"${CLUSTER}\" --certificate-authority ./pki/ca.crt --embed-certs\n USER=$(KUBECONFIG=\"${FILE}\" kubectl config view -o jsonpath='{.users[0].name}')\n KUBECONFIG=\"${FILE}\" kubectl config set-credentials \"${USER}\" --client-certificate \"${FILE}.crt\" --embed-certs\ndone", "completion": "This bash code Certificate Management with kubeadm | Perform this step on all nodes that have CSR files.", "language": "bash", "type": "shell-command_explanation", "context": "Certificate Management with kubeadm | Perform this step on all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21390"} {"prompt": "Complete this bash code:\n\n#!/bin/bash\n\n# Cleanup CSR files", "completion": "rm -f ./*.csr ./pki/*.csr ./pki/etcd/*.csr # Clean all CSR files\n\n# Cleanup CRT files that were already embedded in kubeconfig files\nrm -f ./*.crt", "language": "bash", "type": "shell-command", "context": "Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21391"} {"prompt": "Explain this bash code:\n\n#!/bin/bash\n\n# Cleanup CSR files\nrm -f ./*.csr ./pki/*.csr ./pki/etcd/*.csr # Clean all CSR files\n\n# Cleanup CRT files that were already embedded in kubeconfig files\nrm -f ./*.crt", "completion": "This bash code Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "language": 
"bash", "type": "shell-command_explanation", "context": "Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21392"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\nencryptionAlgorithm: <ALGORITHM>", "completion": "This yaml code Certificate Management with kubeadm | <ALGORITHM> can be one of RSA-2048 (default), RSA-3072, RSA-4096 or ECDSA-P256.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | <ALGORITHM> can be one of RSA-2048 (default), RSA-3072, RSA-4096 or ECDSA-P256.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21393"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration", "completion": "certificateValidityPeriod: 8760h # Default: 365 days × 24 hours = 1 year\ncaCertificateValidityPeriod: 87600h # Default: 365 days × 24 hours * 10 = 10 years", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21394"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncertificateValidityPeriod: 8760h # Default: 365 days × 24 hours = 1 year\ncaCertificateValidityPeriod: 87600h # Default: 365 days × 24 hours * 10 = 10 years", "completion": "This yaml code Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h 
(hours).", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | The values of the fields follow the accepted format for Go's time.Duration values, with the longest supported unit being h (hours).", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21395"} {"prompt": "Complete this text code:\n\nCERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED\nadmin.conf Dec 30, 2020 23:36 UTC 364d no\napiserver Dec 30, 2020 23:36 UTC 364d ca no\napiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\napiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca no\ncontroller-manager.conf Dec 30, 2020 23:36 UTC 364d no\netcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no", "completion": "etcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no\nfront-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no\nscheduler.conf Dec 30, 2020 23:36 UTC 364d no\n\nCERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED\nca Dec 28, 2029 23:36 UTC 9y no\netcd-ca Dec 28, 2029 23:36 UTC 9y no\nfront-proxy-ca Dec 28, 2029 23:36 UTC 9y no", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21396"} {"prompt": "Explain this text code:\n\nCERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED\nadmin.conf Dec 30, 2020 23:36 UTC 364d no\napiserver Dec 30, 2020 23:36 UTC 364d ca no\napiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\napiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca 
no\ncontroller-manager.conf Dec 30, 2020 23:36 UTC 364d no\netcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no\netcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no\nfront-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no\nscheduler.conf Dec 30, 2020 23:36 UTC 364d no\n\nCERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED\nca Dec 28, 2029 23:36 UTC 9y no\netcd-ca Dec 28, 2029 23:36 UTC 9y no\nfront-proxy-ca Dec 28, 2029 23:36 UTC 9y no", "completion": "This text code Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, co", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21397"} {"prompt": "Explain this text code:\n\nclient-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem\nclient-key: /var/lib/kubelet/pki/kubelet-client-current.pem", "completion": "This text code Certificate Management with kubeadm | kubeadm renews all the certificates during control plane upgrade.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | kubeadm renews all the certificates during control plane upgrade.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21398"} {"prompt": "Explain this text code:\n\n# If you are running cluster with a replicated control plane, this command\n# needs to be executed on all the control-plane 
nodes.\nkubeadm certs renew all", "completion": "This text code Certificate Management with kubeadm | Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. On such a system, to update the contents of $HOME/.kube/con", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. On such a system, to update the contents of $HOME/.kube/con", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21399"} {"prompt": "Explain this text code:\n\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config", "completion": "This text code Certificate Management with kubeadm | This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21400"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:", "completion": " - name: \"cluster-signing-cert-file\"\n value: \"/etc/kubernetes/pki/ca.crt\"\n - name: \"cluster-signing-key-file\"\n value: \"/etc/kubernetes/pki/ca.key\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21401"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n extraArgs:\n - name: \"cluster-signing-cert-file\"\n value: \"/etc/kubernetes/pki/ca.crt\"\n - name: \"cluster-signing-key-file\"\n value: \"/etc/kubernetes/pki/ca.key\"", "completion": "This yaml code Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21402"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---", "completion": "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nserverTLSBootstrap: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21403"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nserverTLSBootstrap: true", "completion": "This yaml code Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | If you have already created the cluster you must adapt it by doing the following:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21404"} {"prompt": "Explain this text code:\n\nNAME AGE SIGNERNAME REQUESTOR CONDITION\ncsr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending\ncsr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending", "completion": "This text code Certificate Management with kubeadm | To approve them you can do the following:", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | To approve them you can do the following:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21405"} {"prompt": "Complete this yaml code:\n\n# example.yaml\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# Will be used as the target \"cluster\" in the kubeconfig", "completion": "clusterName: \"kubernetes\"\n# Will be used as the \"server\" (IP or DNS name) of this cluster in the kubeconfig\ncontrolPlaneEndpoint: \"some-dns-address:6443\"\n# The cluster CA key and certificate will be loaded from this local directory\ncertificatesDir: \"/etc/kubernetes/pki\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. 
To see the settings of an existing cluster use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21406"} {"prompt": "Explain this yaml code:\n\n# example.yaml\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# Will be used as the target \"cluster\" in the kubeconfig\nclusterName: \"kubernetes\"\n# Will be used as the \"server\" (IP or DNS name) of this cluster in the kubeconfig\ncontrolPlaneEndpoint: \"some-dns-address:6443\"\n# The cluster CA key and certificate will be loaded from this local directory\ncertificatesDir: \"/etc/kubernetes/pki\"", "completion": "This yaml code Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. To see the settings of an existing cluster use:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Certificate Management with kubeadm | Make sure that these settings match the desired target cluster settings. 
To see the settings of an existing cluster use:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21407"} {"prompt": "Explain this bash code:\n\nkubectl get cm kubeadm-config -n kube-system -o=jsonpath=\"{.data.ClusterConfiguration}\"", "completion": "This bash code Certificate Management with kubeadm | The following example will generate a kubeconfig file with credentials valid for 24 hours for a new user johndoe that is part of the appdevs group:", "language": "bash", "type": "kubectl-command_explanation", "context": "Certificate Management with kubeadm | The following example will generate a kubeconfig file with credentials valid for 24 hours for a new user johndoe that is part of the appdevs group:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21408"} {"prompt": "Explain this text code:\n\nkubeadm kubeconfig user --config example.yaml --org appdevs --client-name johndoe --validity-period 24h", "completion": "This text code Certificate Management with kubeadm | The following example will generate a kubeconfig file with administrator credentials valid for 1 week:", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | The following example will generate a kubeconfig file with administrator credentials valid for 1 week:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21409"} {"prompt": "Explain this text code:\n\nkubeadm kubeconfig user --config example.yaml --client-name admin --validity-period 168h", "completion": "This text code Certificate Management with kubeadm | You can create certificate signing requests with kubeadm certs generate-csr. Calling this command will generate .csr / .key file pairs for regular certificates. 
For certificates embedded in kubeconfig", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | You can create certificate signing requests with kubeadm certs generate-csr. Calling this command will generate .csr / .key file pairs for regular certificates. For certificates embedded in kubeconfig", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21410"} {"prompt": "Complete this text code:\n\nsudo kubeadm init phase certs ca\nsudo kubeadm init phase certs etcd-ca", "completion": "sudo kubeadm init phase certs front-proxy-ca\nsudo kubeadm init phase certs sa", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21411"} {"prompt": "Explain this text code:\n\nsudo kubeadm init phase certs ca\nsudo kubeadm init phase certs etcd-ca\nsudo kubeadm init phase certs front-proxy-ca\nsudo kubeadm init phase certs sa", "completion": "This text code Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | This will populate the folders /etc/kubernetes/pki and /etc/kubernetes/pki/etcd with all self-signed CA files (certificates and keys) and service account (public and private keys) that kubeadm needs f", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": 
"k8s_00677_code_explain_21412"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\n# Set certificate expiration time in days\nDAYS=365\n\n# Process all CSR files except those for front-proxy and etcd\nfind ./ -name \"*.csr\" | grep -v \"pki/etcd\" | grep -v \"front-proxy\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/ca.crt -CAkey ./pki/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\n sleep 2\ndone", "completion": "\n# Process all etcd CSRs\nfind ./pki/etcd -name \"*.csr\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/etcd/ca.srl\" ]; then\n SERIAL_FLAG=-CAserial ./pki/etcd/ca.srl\n else\n SERIAL_FLAG=-CAcreateserial\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/etcd/ca.crt -CAkey ./pki/etcd/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\ndone\n\n# Process front-proxy CSRs\necho \"* Processing ./pki/front-proxy-client.csr ...\"\nopenssl x509 -req -days \"${DAYS}\" -CA ./pki/front-proxy-ca.crt -CAkey ./pki/front-proxy-ca.key -CAcreateserial \\\n -in ./pki/front-proxy-client.csr -out ./pki/front-proxy-client.crt", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21413"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# Set certificate expiration time in days\nDAYS=365\n\n# Process all CSR files except those for front-proxy and etcd\nfind ./ -name \"*.csr\" | grep -v \"pki/etcd\" | grep -v \"front-proxy\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the 
extension\n if [ -f \"./pki/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/ca.crt -CAkey ./pki/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\n sleep 2\ndone\n\n# Process all etcd CSRs\nfind ./pki/etcd -name \"*.csr\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n FILE=${FILE%.*} # Trim the extension\n if [ -f \"./pki/etcd/ca.srl\" ]; then\n SERIAL_FLAG=\"-CAserial ./pki/etcd/ca.srl\"\n else\n SERIAL_FLAG=\"-CAcreateserial\"\n fi\n openssl x509 -req -days \"${DAYS}\" -CA ./pki/etcd/ca.crt -CAkey ./pki/etcd/ca.key ${SERIAL_FLAG} \\\n -in \"${FILE}.csr\" -out \"${FILE}.crt\"\ndone\n\n# Process front-proxy CSRs\necho \"* Processing ./pki/front-proxy-client.csr ...\"\nopenssl x509 -req -days \"${DAYS}\" -CA ./pki/front-proxy-ca.crt -CAkey ./pki/front-proxy-ca.key -CAcreateserial \\\n -in ./pki/front-proxy-client.csr -out ./pki/front-proxy-client.crt", "completion": "This text code Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Repeat this step for all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21414"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\nCLUSTER=kubernetes\nfind ./ -name \"*.conf\" | while read -r FILE;\ndo", "completion": " echo \"* Processing ${FILE} ...\"\n KUBECONFIG=\"${FILE}\" kubectl config set-cluster \"${CLUSTER}\" --certificate-authority ./pki/ca.crt --embed-certs\n USER=$(KUBECONFIG=\"${FILE}\" kubectl config view -o jsonpath='{.users[0].name}')\n KUBECONFIG=\"${FILE}\" kubectl config set-credentials \"${USER}\" --client-certificate \"${FILE}.crt\" --embed-certs\ndone", "language": "text", "type": "code-example", "context": "Certificate Management with 
kubeadm | Perform this step on all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21415"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\nCLUSTER=kubernetes\nfind ./ -name \"*.conf\" | while read -r FILE;\ndo\n echo \"* Processing ${FILE} ...\"\n KUBECONFIG=\"${FILE}\" kubectl config set-cluster \"${CLUSTER}\" --certificate-authority ./pki/ca.crt --embed-certs\n USER=$(KUBECONFIG=\"${FILE}\" kubectl config view -o jsonpath='{.users[0].name}')\n KUBECONFIG=\"${FILE}\" kubectl config set-credentials \"${USER}\" --client-certificate \"${FILE}.crt\" --embed-certs\ndone", "completion": "This text code Certificate Management with kubeadm | Perform this step on all nodes that have CSR files.", "language": "text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Perform this step on all nodes that have CSR files.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21416"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n\n# Cleanup CSR files", "completion": "rm -f ./*.csr ./pki/*.csr ./pki/etcd/*.csr # Clean all CSR files\n\n# Cleanup CRT files that were already embedded in kubeconfig files\nrm -f ./*.crt", "language": "text", "type": "code-example", "context": "Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_completion_21417"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# Cleanup CSR files\nrm -f ./*.csr ./pki/*.csr ./pki/etcd/*.csr # Clean all CSR files\n\n# Cleanup CRT files that were already embedded in kubeconfig files\nrm -f ./*.crt", "completion": "This text code Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "language": 
"text", "type": "code-example_explanation", "context": "Certificate Management with kubeadm | Optionally, move .srl files to the next node to be processed.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/", "id": "k8s_00677_code_explain_21418"} {"prompt": "Complete this yaml code:\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf\n# Replace \"systemd\" with the cgroup driver of your container runtime. The default value in the kubelet is \"cgroupfs\".\n# Replace the value of \"containerRuntimeEndpoint\" for a different container runtime if needed.\n#\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n anonymous:\n enabled: false\n webhook:\n enabled: false\nauthorization:\n mode: AlwaysAllow\ncgroupDriver: systemd", "completion": "address: 127.0.0.1\ncontainerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock\nstaticPodPath: /etc/kubernetes/manifests\nEOF\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf\n[Service]\nExecStart=\nExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf\nRestart=always\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21419"} {"prompt": "Explain this yaml code:\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf\n# Replace \"systemd\" with the cgroup driver of your container runtime. 
The default value in the kubelet is \"cgroupfs\".\n# Replace the value of \"containerRuntimeEndpoint\" for a different container runtime if needed.\n#\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n anonymous:\n enabled: false\n webhook:\n enabled: false\nauthorization:\n mode: AlwaysAllow\ncgroupDriver: systemd\naddress: 127.0.0.1\ncontainerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock\nstaticPodPath: /etc/kubernetes/manifests\nEOF\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf\n[Service]\nExecStart=\nExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf\nRestart=always\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet", "completion": "This yaml code Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21420"} {"prompt": "Complete this yaml code:\n\n# Update HOST0, HOST1 and HOST2 with the IPs of your hosts\nexport HOST0=10.0.0.6\nexport HOST1=10.0.0.7\nexport HOST2=10.0.0.8\n\n# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts\nexport NAME0=\"infra0\"\nexport NAME1=\"infra1\"\nexport NAME2=\"infra2\"\n\n# Create temp directories to store files that will end up on other hosts\nmkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/\n\nHOSTS=(${HOST0} ${HOST1} ${HOST2})\nNAMES=(${NAME0} ${NAME1} ${NAME2})\n\nfor i in \"${!HOSTS[@]}\"; do\nHOST=${HOSTS[$i]}\nNAME=${NAMES[$i]}\ncat << EOF > /tmp/${HOST}/kubeadmcfg.yaml\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: InitConfiguration\nnodeRegistration:\n name: ${NAME}\nlocalAPIEndpoint:", 
"completion": " advertiseAddress: ${HOST}\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: ClusterConfiguration\netcd:\n local:\n serverCertSANs:\n - \"${HOST}\"\n peerCertSANs:\n - \"${HOST}\"\n extraArgs:\n - name: initial-cluster\n value: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380\n - name: initial-cluster-state\n value: new\n - name: name\n value: ${NAME}\n - name: listen-peer-urls\n value: https://${HOST}:2380\n - name: listen-client-urls\n value: https://${HOST}:2379\n - name: advertise-client-urls\n value: https://${HOST}:2379\n - name: initial-advertise-peer-urls\n value: https://${HOST}:2380\nEOF\ndone", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21421"} {"prompt": "Explain this yaml code:\n\n# Update HOST0, HOST1 and HOST2 with the IPs of your hosts\nexport HOST0=10.0.0.6\nexport HOST1=10.0.0.7\nexport HOST2=10.0.0.8\n\n# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts\nexport NAME0=\"infra0\"\nexport NAME1=\"infra1\"\nexport NAME2=\"infra2\"\n\n# Create temp directories to store files that will end up on other hosts\nmkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/\n\nHOSTS=(${HOST0} ${HOST1} ${HOST2})\nNAMES=(${NAME0} ${NAME1} ${NAME2})\n\nfor i in \"${!HOSTS[@]}\"; do\nHOST=${HOSTS[$i]}\nNAME=${NAMES[$i]}\ncat << EOF > /tmp/${HOST}/kubeadmcfg.yaml\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: InitConfiguration\nnodeRegistration:\n name: ${NAME}\nlocalAPIEndpoint:\n advertiseAddress: ${HOST}\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: ClusterConfiguration\netcd:\n local:\n serverCertSANs:\n - \"${HOST}\"\n peerCertSANs:\n - \"${HOST}\"\n extraArgs:\n - name: initial-cluster\n value: 
${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380\n - name: initial-cluster-state\n value: new\n - name: name\n value: ${NAME}\n - name: listen-peer-urls\n value: https://${HOST}:2380\n - name: listen-client-urls\n value: https://${HOST}:2379\n - name: advertise-client-urls\n value: https://${HOST}:2379\n - name: initial-advertise-peer-urls\n value: https://${HOST}:2380\nEOF\ndone", "completion": "This yaml code Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21422"} {"prompt": "Complete this text code:\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST2}/\n# cleanup non-reusable certificates\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml", "completion": "cp -R /etc/kubernetes/pki /tmp/${HOST1}/\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer 
--config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\n# No need to move the certs because they are for HOST0\n\n# clean up certs that should not be copied off this host\nfind /tmp/${HOST2} -name ca.key -type f -delete\nfind /tmp/${HOST1} -name ca.key -type f -delete", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21423"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST2}/\n# cleanup non-reusable certificates\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST1}/\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client 
--config=/tmp/${HOST0}/kubeadmcfg.yaml\n# No need to move the certs because they are for HOST0\n\n# clean up certs that should not be copied off this host\nfind /tmp/${HOST2} -name ca.key -type f -delete\nfind /tmp/${HOST1} -name ca.key -type f -delete", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21424"} {"prompt": "Complete this text code:\n\nUSER=ubuntu\nHOST=${HOST1}\nscp -r /tmp/${HOST}/* ${USER}@${HOST}:", "completion": "ssh ${USER}@${HOST}\nUSER@HOST $ sudo -Es\nroot@HOST $ chown -R root:root pki\nroot@HOST $ mv pki /etc/kubernetes/", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21425"} {"prompt": "Explain this text code:\n\nUSER=ubuntu\nHOST=${HOST1}\nscp -r /tmp/${HOST}/* ${USER}@${HOST}:\nssh ${USER}@${HOST}\nUSER@HOST $ sudo -Es\nroot@HOST $ chown -R root:root pki\nroot@HOST $ mv pki /etc/kubernetes/", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21426"} {"prompt": "Complete this text code:\n\n/tmp/${HOST0}\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── 
apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd", "completion": " ├── ca.crt\n ├── ca.key\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST1:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21427"} {"prompt": "Explain this text code:\n\n/tmp/${HOST0}\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd\n ├── ca.crt\n ├── ca.key\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | On $HOST1:", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST1:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21428"} {"prompt": "Complete this text code:\n\n$HOME\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd", "completion": " ├── ca.crt\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST2:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21429"} {"prompt": "Explain this text code:\n\n$HOME\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd\n ├── ca.crt\n ├── 
healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | On $HOST2:", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST2:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21430"} {"prompt": "Complete this text code:\n\n$HOME\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd", "completion": " ├── ca.crt\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Create the static pod manifests.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21431"} {"prompt": "Explain this text code:\n\n$HOME\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd\n ├── ca.crt\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Create the static pod manifests.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Create the static pod manifests.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21432"} {"prompt": "Explain this text code:\n\nroot@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml\nroot@HOST1 $ kubeadm init phase 
etcd local --config=$HOME/kubeadmcfg.yaml\nroot@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Optional: Check the cluster health.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Optional: Check the cluster health.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21433"} {"prompt": "Complete this text code:\n\nETCDCTL_API=3 etcdctl \\\n--cert /etc/kubernetes/pki/etcd/peer.crt \\\n--key /etc/kubernetes/pki/etcd/peer.key \\\n--cacert /etc/kubernetes/pki/etcd/ca.crt \\", "completion": "--endpoints https://${HOST0}:2379 endpoint health\n...\nhttps://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms\nhttps://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms\nhttps://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21434"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl \\\n--cert /etc/kubernetes/pki/etcd/peer.crt \\\n--key /etc/kubernetes/pki/etcd/peer.key \\\n--cacert /etc/kubernetes/pki/etcd/ca.crt \\\n--endpoints https://${HOST0}:2379 endpoint health\n...\nhttps://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms\nhttps://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms\nhttps://[HOST2 IP]:2379 is healthy: successfully 
committed proposal: took = 35.926451ms", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21435"} {"prompt": "Complete this sh code:\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf\n# Replace \"systemd\" with the cgroup driver of your container runtime. The default value in the kubelet is \"cgroupfs\".\n# Replace the value of \"containerRuntimeEndpoint\" for a different container runtime if needed.\n#\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n anonymous:\n enabled: false\n webhook:\n enabled: false\nauthorization:\n mode: AlwaysAllow\ncgroupDriver: systemd", "completion": "address: 127.0.0.1\ncontainerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock\nstaticPodPath: /etc/kubernetes/manifests\nEOF\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf\n[Service]\nExecStart=\nExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf\nRestart=always\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet", "language": "sh", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21436"} 
{"prompt": "Explain this sh code:\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf\n# Replace \"systemd\" with the cgroup driver of your container runtime. The default value in the kubelet is \"cgroupfs\".\n# Replace the value of \"containerRuntimeEndpoint\" for a different container runtime if needed.\n#\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n anonymous:\n enabled: false\n webhook:\n enabled: false\nauthorization:\n mode: AlwaysAllow\ncgroupDriver: systemd\naddress: 127.0.0.1\ncontainerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock\nstaticPodPath: /etc/kubernetes/manifests\nEOF\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf\n[Service]\nExecStart=\nExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf\nRestart=always\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet", "completion": "This sh code Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "language": "sh", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21437"} {"prompt": "Complete this sh code:\n\n# Update HOST0, HOST1 and HOST2 with the IPs of your hosts\nexport HOST0=10.0.0.6\nexport HOST1=10.0.0.7\nexport HOST2=10.0.0.8\n\n# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts\nexport NAME0=\"infra0\"\nexport NAME1=\"infra1\"\nexport NAME2=\"infra2\"\n\n# Create temp directories to store files that will end up on other hosts\nmkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/\n\nHOSTS=(${HOST0} ${HOST1} ${HOST2})\nNAMES=(${NAME0} ${NAME1} ${NAME2})\n\nfor i in \"${!HOSTS[@]}\"; do\nHOST=${HOSTS[$i]}\nNAME=${NAMES[$i]}\ncat << EOF > 
/tmp/${HOST}/kubeadmcfg.yaml\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: InitConfiguration\nnodeRegistration:\n name: ${NAME}\nlocalAPIEndpoint:", "completion": " advertiseAddress: ${HOST}\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: ClusterConfiguration\netcd:\n local:\n serverCertSANs:\n - \"${HOST}\"\n peerCertSANs:\n - \"${HOST}\"\n extraArgs:\n - name: initial-cluster\n value: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380\n - name: initial-cluster-state\n value: new\n - name: name\n value: ${NAME}\n - name: listen-peer-urls\n value: https://${HOST}:2380\n - name: listen-client-urls\n value: https://${HOST}:2379\n - name: advertise-client-urls\n value: https://${HOST}:2379\n - name: initial-advertise-peer-urls\n value: https://${HOST}:2380\nEOF\ndone", "language": "sh", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21438"} {"prompt": "Explain this sh code:\n\n# Update HOST0, HOST1 and HOST2 with the IPs of your hosts\nexport HOST0=10.0.0.6\nexport HOST1=10.0.0.7\nexport HOST2=10.0.0.8\n\n# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts\nexport NAME0=\"infra0\"\nexport NAME1=\"infra1\"\nexport NAME2=\"infra2\"\n\n# Create temp directories to store files that will end up on other hosts\nmkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/\n\nHOSTS=(${HOST0} ${HOST1} ${HOST2})\nNAMES=(${NAME0} ${NAME1} ${NAME2})\n\nfor i in \"${!HOSTS[@]}\"; do\nHOST=${HOSTS[$i]}\nNAME=${NAMES[$i]}\ncat << EOF > /tmp/${HOST}/kubeadmcfg.yaml\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: InitConfiguration\nnodeRegistration:\n name: ${NAME}\nlocalAPIEndpoint:\n advertiseAddress: ${HOST}\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: 
ClusterConfiguration\netcd:\n local:\n serverCertSANs:\n - \"${HOST}\"\n peerCertSANs:\n - \"${HOST}\"\n extraArgs:\n - name: initial-cluster\n value: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380\n - name: initial-cluster-state\n value: new\n - name: name\n value: ${NAME}\n - name: listen-peer-urls\n value: https://${HOST}:2380\n - name: listen-client-urls\n value: https://${HOST}:2379\n - name: advertise-client-urls\n value: https://${HOST}:2379\n - name: initial-advertise-peer-urls\n value: https://${HOST}:2380\nEOF\ndone", "completion": "This sh code Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "language": "sh", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21439"} {"prompt": "Complete this sh code:\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST2}/\n# cleanup non-reusable certificates\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml", "completion": "cp -R /etc/kubernetes/pki /tmp/${HOST1}/\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key 
-type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\n# No need to move the certs because they are for HOST0\n\n# clean up certs that should not be copied off this host\nfind /tmp/${HOST2} -name ca.key -type f -delete\nfind /tmp/${HOST1} -name ca.key -type f -delete", "language": "sh", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21440"} {"prompt": "Explain this sh code:\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST2}/\n# cleanup non-reusable certificates\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST1}/\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm 
init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\n# No need to move the certs because they are for HOST0\n\n# clean up certs that should not be copied off this host\nfind /tmp/${HOST2} -name ca.key -type f -delete\nfind /tmp/${HOST1} -name ca.key -type f -delete", "completion": "This sh code Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "language": "sh", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21441"} {"prompt": "Complete this sh code:\n\nUSER=ubuntu\nHOST=${HOST1}\nscp -r /tmp/${HOST}/* ${USER}@${HOST}:", "completion": "ssh ${USER}@${HOST}\nUSER@HOST $ sudo -Es\nroot@HOST $ chown -R root:root pki\nroot@HOST $ mv pki /etc/kubernetes/", "language": "sh", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21442"} {"prompt": "Explain this sh code:\n\nUSER=ubuntu\nHOST=${HOST1}\nscp -r /tmp/${HOST}/* ${USER}@${HOST}:\nssh ${USER}@${HOST}\nUSER@HOST $ sudo -Es\nroot@HOST $ chown -R root:root pki\nroot@HOST $ mv pki /etc/kubernetes/", "completion": "This sh code Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "language": "sh", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21443"} 
{"prompt": "Complete this text code:\n\n/tmp/${HOST0}\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd", "completion": " ├── ca.crt\n ├── ca.key\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST1:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21444"} {"prompt": "Explain this text code:\n\n/tmp/${HOST0}\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd\n ├── ca.crt\n ├── ca.key\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | On $HOST1:", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST1:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21445"} {"prompt": "Complete this text code:\n\n$HOME\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd", "completion": " ├── ca.crt\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST2:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21446"} {"prompt": "Explain this text code:\n\n$HOME\n└── 
kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd\n ├── ca.crt\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | On $HOST2:", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | On $HOST2:", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21447"} {"prompt": "Complete this text code:\n\n$HOME\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd", "completion": " ├── ca.crt\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Create the static pod manifests.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21448"} {"prompt": "Explain this text code:\n\n$HOME\n└── kubeadmcfg.yaml\n---\n/etc/kubernetes/pki\n├── apiserver-etcd-client.crt\n├── apiserver-etcd-client.key\n└── etcd\n ├── ca.crt\n ├── healthcheck-client.crt\n ├── healthcheck-client.key\n ├── peer.crt\n ├── peer.key\n ├── server.crt\n └── server.key", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Create the static pod manifests.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Create the static pod manifests.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21449"} {"prompt": "Explain 
this sh code:\n\nroot@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml\nroot@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml\nroot@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml", "completion": "This sh code Set up a High Availability etcd Cluster with kubeadm | Optional: Check the cluster health.", "language": "sh", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Optional: Check the cluster health.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21450"} {"prompt": "Complete this sh code:\n\nETCDCTL_API=3 etcdctl \\\n--cert /etc/kubernetes/pki/etcd/peer.crt \\\n--key /etc/kubernetes/pki/etcd/peer.key \\\n--cacert /etc/kubernetes/pki/etcd/ca.crt \\", "completion": "--endpoints https://${HOST0}:2379 endpoint health\n...\nhttps://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms\nhttps://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms\nhttps://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms", "language": "sh", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21451"} {"prompt": "Explain this sh code:\n\nETCDCTL_API=3 etcdctl \\\n--cert /etc/kubernetes/pki/etcd/peer.crt \\\n--key /etc/kubernetes/pki/etcd/peer.key \\\n--cacert /etc/kubernetes/pki/etcd/ca.crt \\\n--endpoints https://${HOST0}:2379 endpoint health\n...\nhttps://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms\nhttps://[HOST1 
IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms\nhttps://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms", "completion": "This sh code Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "language": "sh", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21452"} {"prompt": "Complete this yaml code:\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf\n# Replace \"systemd\" with the cgroup driver of your container runtime. 
The default value in the kubelet is \"cgroupfs\".\n# Replace the value of \"containerRuntimeEndpoint\" for a different container runtime if needed.\n#\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n anonymous:\n enabled: false\n webhook:\n enabled: false\nauthorization:\n mode: AlwaysAllow\ncgroupDriver: systemd", "completion": "address: 127.0.0.1\ncontainerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock\nstaticPodPath: /etc/kubernetes/manifests\nEOF\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf\n[Service]\nExecStart=\nExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf\nRestart=always\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21453"} {"prompt": "Explain this yaml code:\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf\n# Replace \"systemd\" with the cgroup driver of your container runtime. 
The default value in the kubelet is \"cgroupfs\".\n# Replace the value of \"containerRuntimeEndpoint\" for a different container runtime if needed.\n#\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n anonymous:\n enabled: false\n webhook:\n enabled: false\nauthorization:\n mode: AlwaysAllow\ncgroupDriver: systemd\naddress: 127.0.0.1\ncontainerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock\nstaticPodPath: /etc/kubernetes/manifests\nEOF\n\ncat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf\n[Service]\nExecStart=\nExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf\nRestart=always\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet", "completion": "This yaml code Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Check the kubelet status to ensure it is running.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21454"} {"prompt": "Complete this yaml code:\n\n# Update HOST0, HOST1 and HOST2 with the IPs of your hosts\nexport HOST0=10.0.0.6\nexport HOST1=10.0.0.7\nexport HOST2=10.0.0.8\n\n# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts\nexport NAME0=\"infra0\"\nexport NAME1=\"infra1\"\nexport NAME2=\"infra2\"\n\n# Create temp directories to store files that will end up on other hosts\nmkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/\n\nHOSTS=(${HOST0} ${HOST1} ${HOST2})\nNAMES=(${NAME0} ${NAME1} ${NAME2})\n\nfor i in \"${!HOSTS[@]}\"; do\nHOST=${HOSTS[$i]}\nNAME=${NAMES[$i]}\ncat << EOF > /tmp/${HOST}/kubeadmcfg.yaml\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: InitConfiguration\nnodeRegistration:\n name: ${NAME}\nlocalAPIEndpoint:", 
"completion": " advertiseAddress: ${HOST}\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: ClusterConfiguration\netcd:\n local:\n serverCertSANs:\n - \"${HOST}\"\n peerCertSANs:\n - \"${HOST}\"\n extraArgs:\n - name: initial-cluster\n value: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380\n - name: initial-cluster-state\n value: new\n - name: name\n value: ${NAME}\n - name: listen-peer-urls\n value: https://${HOST}:2380\n - name: listen-client-urls\n value: https://${HOST}:2379\n - name: advertise-client-urls\n value: https://${HOST}:2379\n - name: initial-advertise-peer-urls\n value: https://${HOST}:2380\nEOF\ndone", "language": "yaml", "type": "kubernetes-manifest", "context": "Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21455"} {"prompt": "Explain this yaml code:\n\n# Update HOST0, HOST1 and HOST2 with the IPs of your hosts\nexport HOST0=10.0.0.6\nexport HOST1=10.0.0.7\nexport HOST2=10.0.0.8\n\n# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts\nexport NAME0=\"infra0\"\nexport NAME1=\"infra1\"\nexport NAME2=\"infra2\"\n\n# Create temp directories to store files that will end up on other hosts\nmkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/\n\nHOSTS=(${HOST0} ${HOST1} ${HOST2})\nNAMES=(${NAME0} ${NAME1} ${NAME2})\n\nfor i in \"${!HOSTS[@]}\"; do\nHOST=${HOSTS[$i]}\nNAME=${NAMES[$i]}\ncat << EOF > /tmp/${HOST}/kubeadmcfg.yaml\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: InitConfiguration\nnodeRegistration:\n name: ${NAME}\nlocalAPIEndpoint:\n advertiseAddress: ${HOST}\n---\napiVersion: \"kubeadm.k8s.io/v1beta4\"\nkind: ClusterConfiguration\netcd:\n local:\n serverCertSANs:\n - \"${HOST}\"\n peerCertSANs:\n - \"${HOST}\"\n extraArgs:\n - name: initial-cluster\n value: 
${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380\n - name: initial-cluster-state\n value: new\n - name: name\n value: ${NAME}\n - name: listen-peer-urls\n value: https://${HOST}:2380\n - name: listen-client-urls\n value: https://${HOST}:2379\n - name: advertise-client-urls\n value: https://${HOST}:2379\n - name: initial-advertise-peer-urls\n value: https://${HOST}:2380\nEOF\ndone", "completion": "This yaml code Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Generate the certificate authority.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21456"} {"prompt": "Complete this text code:\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST2}/\n# cleanup non-reusable certificates\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml", "completion": "cp -R /etc/kubernetes/pki /tmp/${HOST1}/\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer 
--config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\n# No need to move the certs because they are for HOST0\n\n# clean up certs that should not be copied off this host\nfind /tmp/${HOST2} -name ca.key -type f -delete\nfind /tmp/${HOST1} -name ca.key -type f -delete", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21457"} {"prompt": "Explain this text code:\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST2}/\n# cleanup non-reusable certificates\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml\ncp -R /etc/kubernetes/pki /tmp/${HOST1}/\nfind /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete\n\nkubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml\nkubeadm init phase certs apiserver-etcd-client 
--config=/tmp/${HOST0}/kubeadmcfg.yaml\n# No need to move the certs because they are for HOST0\n\n# clean up certs that should not be copied off this host\nfind /tmp/${HOST2} -name ca.key -type f -delete\nfind /tmp/${HOST1} -name ca.key -type f -delete", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Copy certificates and kubeadm configs.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21458"} {"prompt": "Complete this text code:\n\nUSER=ubuntu\nHOST=${HOST1}\nscp -r /tmp/${HOST}/* ${USER}@${HOST}:", "completion": "ssh ${USER}@${HOST}\nUSER@HOST $ sudo -Es\nroot@HOST $ chown -R root:root pki\nroot@HOST $ mv pki /etc/kubernetes/", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21459"} {"prompt": "Explain this text code:\n\nUSER=ubuntu\nHOST=${HOST1}\nscp -r /tmp/${HOST}/* ${USER}@${HOST}:\nssh ${USER}@${HOST}\nUSER@HOST $ sudo -Es\nroot@HOST $ chown -R root:root pki\nroot@HOST $ mv pki /etc/kubernetes/", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Ensure all expected files exist.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21460"} {"prompt": "Explain this text code:\n\nroot@HOST0 $ kubeadm init phase etcd local 
--config=/tmp/${HOST0}/kubeadmcfg.yaml\nroot@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml\nroot@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml", "completion": "This text code, from \"Set up a High Availability etcd Cluster with kubeadm\", runs kubeadm init phase etcd local on each host to generate the etcd static pod manifests. Optional: Check the cluster health.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Optional: Check the cluster health.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21461"} {"prompt": "Complete this text code:\n\nETCDCTL_API=3 etcdctl \\\n--cert /etc/kubernetes/pki/etcd/peer.crt \\\n--key /etc/kubernetes/pki/etcd/peer.key \\\n--cacert /etc/kubernetes/pki/etcd/ca.crt \\", "completion": "--endpoints https://${HOST0}:2379 endpoint health\n...\nhttps://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms\nhttps://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms\nhttps://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms", "language": "text", "type": "code-example", "context": "Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_completion_21462"} {"prompt": "Explain this text code:\n\nETCDCTL_API=3 etcdctl \\\n--cert /etc/kubernetes/pki/etcd/peer.crt \\\n--key /etc/kubernetes/pki/etcd/peer.key \\\n--cacert /etc/kubernetes/pki/etcd/ca.crt \\\n--endpoints https://${HOST0}:2379 endpoint health\n...\nhttps://[HOST0 IP]:2379 is healthy: successfully committed 
proposal: took = 19.44402ms\nhttps://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms", "completion": "This text code Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "language": "text", "type": "code-example_explanation", "context": "Set up a High Availability etcd Cluster with kubeadm | Once you have an etcd cluster with 3 working members, you can continue setting up a highly available control plane using the external etcd method with kubeadm.", "source": "https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/", "id": "k8s_00678_code_explain_21463"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the Service is running on>\nFOO_SERVICE_PORT=<the port the Service is running on>", "completion": "This text code Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "text", "type": "code-example_explanation", "context": "Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. 
DNS does not have this", "source": "https://kubernetes.io/docs/concepts/configuration/overview/", "id": "k8s_00680_code_explain_21464"} {"prompt": "Explain this shell code:\n\nFOO_SERVICE_HOST=<the host the Service is running on>\nFOO_SERVICE_PORT=<the port the Service is running on>", "completion": "This shell code, from \"Configuration Best Practices\", shows the environment variables that Kubernetes injects into a Pod for a Service named foo. This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "shell", "type": "code-example_explanation", "context": "Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "source": "https://kubernetes.io/docs/concepts/configuration/overview/", "id": "k8s_00680_code_explain_21465"} {"prompt": "Explain this json code:\n\n{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }", "completion": "This json code, from \"Configuration Best Practices\", shows an example label set for a release; a Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a Deployment.", "language": "json", "type": "api-response_explanation", "context": "Configuration Best Practices | A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. 
When you need to update a running service without downtime, use a Deployment.", "source": "https://kubernetes.io/docs/concepts/configuration/overview/", "id": "k8s_00680_code_explain_21466"} {"prompt": "Explain this text code:\n\nFOO_SERVICE_HOST=<the host the Service is running on>\nFOO_SERVICE_PORT=<the port the Service is running on>", "completion": "This text code Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "language": "text", "type": "code-example_explanation", "context": "Configuration Best Practices | This does imply an ordering requirement - any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this", "source": "https://kubernetes.io/docs/concepts/configuration/overview/", "id": "k8s_00680_code_explain_21467"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21468"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - 
containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21469"} {"prompt": "Explain this bash code:\n\nkubectl diff -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | diff uses server-side dry-run, which needs to be enabled on kube-apiserver.", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | diff uses server-side dry-run, which needs to be enabled on kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21470"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21471"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | The 
output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21472"} {"prompt": "Complete this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:", "completion": " # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. 
This approach accomplishes the following:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21473"} {"prompt": "Explain this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. This approach accomplishes the following:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. 
This approach accomplishes the following:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21474"} {"prompt": "Explain this bash code:\n\nkubectl diff -f <directory>\nkubectl apply -f <directory>", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21475"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21476"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | 
Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21477"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21478"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21479"} {"prompt": "Complete this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n 
\"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:", "completion": " # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21480"} {"prompt": "Explain this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", 
"completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21481"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=2", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21482"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...", "completion": " minReadySeconds: 5\n selector:\n matchLabels:\n # 
...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21483"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml 
configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21484"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21485"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21486"} {"prompt": "Explain this bash code:\n\nkubectl diff -f https://k8s.io/examples/application/update_deployment.yaml\nkubectl apply -f https://k8s.io/examples/application/update_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration 
Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21487"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/update_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows the following changes to the live configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows the following changes to the live configuration:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21488"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n selector:", "completion": " matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21489"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21490"} {"prompt": "Explain this bash code:\n\nkubectl apply -f <directory> --prune -l <labels> --prune-allowlist=<gvk-list>", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | FEATURE STATE: Kubernetes v1.27 [alpha]Caution:kubectl apply --prune --applyset is in alpha, and backwards incompatible changes might be introduced in subsequent releases.To use ApplySet-based pruning", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | FEATURE STATE: Kubernetes v1.27 [alpha]Caution:kubectl apply --prune --applyset is in alpha, and backwards incompatible changes might be introduced in subsequent releases.To use ApplySet-based pruning", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21491"} {"prompt": "Explain this text code:\n\nKUBECTL_APPLYSET=true kubectl apply -f <directory> --prune --applyset=<name>", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | By 
default, the type of the ApplySet parent object used is a Secret. However, ConfigMaps can also be used in the format: --applyset=configmaps/<name>. When using a Secret or ConfigMap, kubectl will cr", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | By default, the type of the ApplySet parent object used is a Secret. However, ConfigMaps can also be used in the format: --applyset=configmaps/<name>. When using a Secret or ConfigMap, kubectl will cr", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21492"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21493"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using 
Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21494"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...", "completion": " minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21495"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n 
\"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21496"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n selector:\n matchLabels:\n # ...\n app: nginx", "completion": " replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21497"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n selector:\n matchLabels:\n # ...\n app: nginx\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21498"} {"prompt": "Complete this text code:\n\n# last-applied-configuration value\n args: [\"a\", \"b\"]\n\n# configuration file value\n args: [\"a\", \"c\"]", "completion": "\n# live configuration\n args: [\"a\", \"b\", \"d\"]\n\n# result after merge\n args: [\"a\", \"c\"]", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the configuration file value as the new list value.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21499"} {"prompt": "Explain this text code:\n\n# last-applied-configuration value\n args: [\"a\", \"b\"]\n\n# configuration file value\n args: [\"a\", \"c\"]\n\n# live configuration\n args: [\"a\", \"b\", \"d\"]\n\n# result after merge\n args: [\"a\", \"c\"]", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the 
configuration file value as the new list value.", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the configuration file value as the new list value.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21500"} {"prompt": "Complete this text code:\n\n# last-applied-configuration value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a # key: nginx-helper-a; will be deleted in result\n image: helper:1.3\n - name: nginx-helper-b # key: nginx-helper-b; will be retained\n image: helper:1.3\n\n# configuration file value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-b\n image: helper:1.3\n - name: nginx-helper-c # key: nginx-helper-c; will be added in result\n image: helper:1.3\n\n# live configuration\n containers:\n - name: nginx", "completion": " image: nginx:1.16\n - name: nginx-helper-a\n image: helper:1.3\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field will be retained\n - name: nginx-helper-d # key: nginx-helper-d; will be retained\n image: helper:1.3\n\n# result after merge\n containers:\n - name: nginx\n image: nginx:1.16\n # Element nginx-helper-a was deleted\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field was retained\n - name: nginx-helper-c # Element was added\n image: helper:1.3\n - name: nginx-helper-d # Element was ignored\n image: helper:1.3", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21501"} {"prompt": "Explain this text code:\n\n# last-applied-configuration value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a # key: 
nginx-helper-a; will be deleted in result\n image: helper:1.3\n - name: nginx-helper-b # key: nginx-helper-b; will be retained\n image: helper:1.3\n\n# configuration file value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-b\n image: helper:1.3\n - name: nginx-helper-c # key: nginx-helper-c; will be added in result\n image: helper:1.3\n\n# live configuration\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a\n image: helper:1.3\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field will be retained\n - name: nginx-helper-d # key: nginx-helper-d; will be retained\n image: helper:1.3\n\n# result after merge\n containers:\n - name: nginx\n image: nginx:1.16\n # Element nginx-helper-a was deleted\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field was retained\n - name: nginx-helper-c # Element was added\n image: helper:1.3\n - name: nginx-helper-d # Element was ignored\n image: helper:1.3", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21502"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", 
"id": "k8s_00681_code_completion_21503"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21504"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21505"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the API server set several fields to default values in the live configuration. 
These fields were not specified in the configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the API server set several fields to default values in the live configuration. These fields were not specified in the configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21506"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\n# ...\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n replicas: 1 # defaulted by apiserver\n strategy:\n rollingUpdate: # defaulted by apiserver - derived from strategy.type\n maxSurge: 1\n maxUnavailable: 1\n type: RollingUpdate # defaulted by apiserver\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n imagePullPolicy: IfNotPresent # defaulted by apiserver\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP # defaulted by apiserver\n resources: {} # defaulted by apiserver\n terminationMessagePath: /dev/termination-log # defaulted by apiserver\n dnsPolicy: ClusterFirst # defaulted by apiserver\n restartPolicy: Always # defaulted by apiserver\n securityContext: {} # defaulted by apiserver\n terminationGracePeriodSeconds: 30 # defaulted by apiserver\n# ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. 
This can cause unexpected behavior for fields that are defaulted based on the v", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21507"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\n# ...\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n replicas: 1 # defaulted by apiserver\n strategy:\n rollingUpdate: # defaulted by apiserver - derived from strategy.type\n maxSurge: 1\n maxUnavailable: 1\n type: RollingUpdate # defaulted by apiserver\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n imagePullPolicy: IfNotPresent # defaulted by apiserver\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP # defaulted by apiserver\n resources: {} # defaulted by apiserver\n terminationMessagePath: /dev/termination-log # defaulted by apiserver\n dnsPolicy: ClusterFirst # defaulted by apiserver\n restartPolicy: Always # defaulted by apiserver\n securityContext: {} # defaulted by apiserver\n terminationGracePeriodSeconds: 30 # defaulted by apiserver\n# ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. This can cause unexpected behavior for fields that are defaulted based on the v", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. 
This can cause unexpected behavior for fields that are defaulted based on the v", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21508"} {"prompt": "Complete this text code:\n\n# last-applied-configuration\nspec:\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# configuration file\nspec:\n strategy:\n type: Recreate # updated value\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# live configuration\nspec:\n strategy:", "completion": " type: RollingUpdate # defaulted value\n rollingUpdate: # defaulted value derived from type\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# result after merge - ERROR!\nspec:\n strategy:\n type: Recreate # updated value: incompatible with rollingUpdate\n rollingUpdate: # defaulted value: incompatible with \"type: Recreate\"\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21509"} {"prompt": "Explain this text code:\n\n# last-applied-configuration\nspec:\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# configuration file\nspec:\n strategy:\n type: Recreate # updated value\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: 
nginx:1.14.2\n ports:\n - containerPort: 80\n\n# live configuration\nspec:\n strategy:\n type: RollingUpdate # defaulted value\n rollingUpdate: # defaulted value derived from type\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# result after merge - ERROR!\nspec:\n strategy:\n type: Recreate # updated value: incompatible with rollingUpdate\n rollingUpdate: # defaulted value: incompatible with \"type: Recreate\"\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21510"} {"prompt": "Explain this bash code:\n\nkubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21511"} {"prompt": "Explain this bash code:\n\nkubectl replace --save-config -f <kind>_<name>.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object exclusively.", 
"language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object exclusively.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21512"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "template:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21514"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"\ntemplate:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files |", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files |", "source":
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21515"} {"prompt": "Explain this bash code:\n\nkubectl.kubernetes.io/last-applied-configuration: '{...}'", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example of an object configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example of an object configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21516"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21517"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration 
Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21518"} {"prompt": "Explain this shell code:\n\nkubectl diff -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | diff uses server-side dry-run, which needs to be enabled on kube-apiserver.", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | diff uses server-side dry-run, which needs to be enabled on kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21519"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21520"} {"prompt": "Explain this shell code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that 
the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21521"} {"prompt": "Complete this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:", "completion": " # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. 
This approach accomplishes the following:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21522"} {"prompt": "Explain this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. This approach accomplishes the following:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. 
This approach accomplishes the following:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21523"} {"prompt": "Explain this shell code:\n\nkubectl diff -f <directory>\nkubectl apply -f <directory>", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example configuration file:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21524"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21525"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | 
Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21526"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21527"} {"prompt": "Explain this shell code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21528"} {"prompt": "Complete this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n 
\"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:", "completion": " # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21529"} {"prompt": "Explain this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", 
"completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21530"} {"prompt": "Explain this shell code:\n\nkubectl scale deployment/nginx-deployment --replicas=2", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21531"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...", "completion": " minReadySeconds: 5\n selector:\n matchLabels:\n # 
...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21532"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml 
configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21533"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21534"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21535"} {"prompt": "Explain this shell code:\n\nkubectl diff -f https://k8s.io/examples/application/update_deployment.yaml\nkubectl apply -f https://k8s.io/examples/application/update_deployment.yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration 
Files | Print the live configuration using kubectl get:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21536"} {"prompt": "Explain this shell code:\n\nkubectl get -f https://k8s.io/examples/application/update_deployment.yaml -o yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows the following changes to the live configuration:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows the following changes to the live configuration:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21537"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n selector:", "completion": " matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21538"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21539"} {"prompt": "Explain this shell code:\n\nkubectl apply -f <directory> --prune -l <labels> --prune-allowlist=<gvk-list>", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | FEATURE STATE: Kubernetes v1.27 [alpha]Caution:kubectl apply --prune --applyset is in alpha, and backwards incompatible changes might be introduced in subsequent releases.To use ApplySet-based pruning", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | FEATURE STATE: Kubernetes v1.27 [alpha]Caution:kubectl apply --prune --applyset is in alpha, and backwards incompatible changes might be introduced in subsequent releases.To use ApplySet-based pruning", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21540"} {"prompt": "Explain this shell code:\n\nKUBECTL_APPLYSET=true kubectl apply -f <directory> --prune --applyset=<name>", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | By 
default, the type of the ApplySet parent object used is a Secret. However, ConfigMaps can also be used in the format: --applyset=configmaps/<name>. When using a Secret or ConfigMap, kubectl will cr", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | By default, the type of the ApplySet parent object used is a Secret. However, ConfigMaps can also be used in the format: --applyset=configmaps/<name>. When using a Secret or ConfigMap, kubectl will cr", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21541"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21542"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using 
Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21543"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...", "completion": " minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21544"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n 
\"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21545"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n selector:\n matchLabels:\n # ...\n app: nginx", "completion": " replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21546"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n selector:\n matchLabels:\n # ...\n app: nginx\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21547"} {"prompt": "Complete this yaml code:\n\n# last-applied-configuration value\n args: [\"a\", \"b\"]\n\n# configuration file value\n args: [\"a\", \"c\"]", "completion": "\n# live configuration\n args: [\"a\", \"b\", \"d\"]\n\n# result after merge\n args: [\"a\", \"c\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the configuration file value as the new list value.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21548"} {"prompt": "Explain this yaml code:\n\n# last-applied-configuration value\n args: [\"a\", \"b\"]\n\n# configuration file value\n args: [\"a\", \"c\"]\n\n# live configuration\n args: [\"a\", \"b\", \"d\"]\n\n# result after merge\n args: [\"a\", \"c\"]", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the 
configuration file value as the new list value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the configuration file value as the new list value.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21549"} {"prompt": "Complete this yaml code:\n\n# last-applied-configuration value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a # key: nginx-helper-a; will be deleted in result\n image: helper:1.3\n - name: nginx-helper-b # key: nginx-helper-b; will be retained\n image: helper:1.3\n\n# configuration file value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-b\n image: helper:1.3\n - name: nginx-helper-c # key: nginx-helper-c; will be added in result\n image: helper:1.3\n\n# live configuration\n containers:\n - name: nginx", "completion": " image: nginx:1.16\n - name: nginx-helper-a\n image: helper:1.3\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field will be retained\n - name: nginx-helper-d # key: nginx-helper-d; will be retained\n image: helper:1.3\n\n# result after merge\n containers:\n - name: nginx\n image: nginx:1.16\n # Element nginx-helper-a was deleted\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field was retained\n - name: nginx-helper-c # Element was added\n image: helper:1.3\n - name: nginx-helper-d # Element was ignored\n image: helper:1.3", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21550"} {"prompt": "Explain this yaml code:\n\n# last-applied-configuration value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: 
nginx-helper-a # key: nginx-helper-a; will be deleted in result\n image: helper:1.3\n - name: nginx-helper-b # key: nginx-helper-b; will be retained\n image: helper:1.3\n\n# configuration file value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-b\n image: helper:1.3\n - name: nginx-helper-c # key: nginx-helper-c; will be added in result\n image: helper:1.3\n\n# live configuration\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a\n image: helper:1.3\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field will be retained\n - name: nginx-helper-d # key: nginx-helper-d; will be retained\n image: helper:1.3\n\n# result after merge\n containers:\n - name: nginx\n image: nginx:1.16\n # Element nginx-helper-a was deleted\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field was retained\n - name: nginx-helper-c # Element was added\n image: helper:1.3\n - name: nginx-helper-d # Element was ignored\n image: helper:1.3", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21551"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21552"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21553"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21554"} {"prompt": "Explain this shell code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the API server set several fields to default values in the live configuration. 
These fields were not specified in the configuration file.", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the API server set several fields to default values in the live configuration. These fields were not specified in the configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21555"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\n# ...\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n replicas: 1 # defaulted by apiserver\n strategy:\n rollingUpdate: # defaulted by apiserver - derived from strategy.type\n maxSurge: 1\n maxUnavailable: 1\n type: RollingUpdate # defaulted by apiserver\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n imagePullPolicy: IfNotPresent # defaulted by apiserver\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP # defaulted by apiserver\n resources: {} # defaulted by apiserver\n terminationMessagePath: /dev/termination-log # defaulted by apiserver\n dnsPolicy: ClusterFirst # defaulted by apiserver\n restartPolicy: Always # defaulted by apiserver\n securityContext: {} # defaulted by apiserver\n terminationGracePeriodSeconds: 30 # defaulted by apiserver\n# ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. 
This can cause unexpected behavior for fields that are defaulted based on the v", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21556"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\n# ...\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n replicas: 1 # defaulted by apiserver\n strategy:\n rollingUpdate: # defaulted by apiserver - derived from strategy.type\n maxSurge: 1\n maxUnavailable: 1\n type: RollingUpdate # defaulted by apiserver\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n imagePullPolicy: IfNotPresent # defaulted by apiserver\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP # defaulted by apiserver\n resources: {} # defaulted by apiserver\n terminationMessagePath: /dev/termination-log # defaulted by apiserver\n dnsPolicy: ClusterFirst # defaulted by apiserver\n restartPolicy: Always # defaulted by apiserver\n securityContext: {} # defaulted by apiserver\n terminationGracePeriodSeconds: 30 # defaulted by apiserver\n# ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. This can cause unexpected behavior for fields that are defaulted based on the v", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. 
This can cause unexpected behavior for fields that are defaulted based on the v", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21557"} {"prompt": "Complete this yaml code:\n\n# last-applied-configuration\nspec:\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# configuration file\nspec:\n strategy:\n type: Recreate # updated value\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# live configuration\nspec:\n strategy:", "completion": " type: RollingUpdate # defaulted value\n rollingUpdate: # defaulted value derived from type\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# result after merge - ERROR!\nspec:\n strategy:\n type: Recreate # updated value: incompatible with rollingUpdate\n rollingUpdate: # defaulted value: incompatible with \"type: Recreate\"\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21558"} {"prompt": "Explain this yaml code:\n\n# last-applied-configuration\nspec:\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# configuration file\nspec:\n strategy:\n type: Recreate # updated value\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: 
nginx:1.14.2\n ports:\n - containerPort: 80\n\n# live configuration\nspec:\n strategy:\n type: RollingUpdate # defaulted value\n rollingUpdate: # defaulted value derived from type\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# result after merge - ERROR!\nspec:\n strategy:\n type: Recreate # updated value: incompatible with rollingUpdate\n rollingUpdate: # defaulted value: incompatible with \"type: Recreate\"\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21559"} {"prompt": "Explain this shell code:\n\nkubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the configuration file.", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21560"} {"prompt": "Explain this shell code:\n\nkubectl replace --save-config -f <kind>_<name>.yaml", "completion": "This shell code Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object 
exclusively.", "language": "shell", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object exclusively.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21561"} {"prompt": "Complete this yaml code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "template:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Declarative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21563"} {"prompt": "Explain this yaml code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"\ntemplate:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files |", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files |", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21564"} {"prompt": "Complete this yaml code:\n\napplication/simple_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21565"} {"prompt": "Explain this yaml code:\n\napplication/simple_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21566"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n 
ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21567"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Run kubectl diff to print the object that will be created:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21568"} {"prompt": "Explain this bash code:\n\nkubectl diff -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | diff uses server-side dry-run, which needs to be enabled on kube-apiserver.", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | diff uses server-side dry-run, which needs to be enabled on kube-apiserver.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21569"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative 
Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21570"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21571"} {"prompt": "Complete this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n 
selector:\n matchLabels:", "completion": " # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. This approach accomplishes the following:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21572"} {"prompt": "Explain this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":nginx}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. 
This approach accomplishes the following:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. This approach accomplishes the following:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21573"} {"prompt": "Explain this bash code:\n\nkubectl diff -f <directory>\nkubectl apply -f <directory>", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here's an example configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21574"} {"prompt": "Complete this yaml code:\n\napplication/simple_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21575"} {"prompt": "Explain this yaml code:\n\napplication/simple_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n 
containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21576"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21577"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21578"} {"prompt": 
"Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21579"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21580"} {"prompt": "Complete this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n 
\"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:", "completion": " # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21581"} {"prompt": "Explain this yaml code:\n\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # This is the json representation of simple_deployment.yaml\n # It was written by kubectl apply when the object was created\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. 
This does not use kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21582"} {"prompt": "Explain this bash code:\n\nkubectl scale deployment/nginx-deployment --replicas=2", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21583"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...", "completion": " minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", 
"language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21584"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21585"} {"prompt": "Complete this yaml code:\n\napplication/update_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21586"} {"prompt": "Explain this yaml code:\n\napplication/update_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21587"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", 
"language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21588"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Apply the changes made to the configuration file:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21589"} {"prompt": "Explain this bash code:\n\nkubectl diff -f https://k8s.io/examples/application/update_deployment.yaml\nkubectl apply -f https://k8s.io/examples/application/update_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21590"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/update_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects 
Using Configuration Files | The output shows the following changes to the live configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows the following changes to the live configuration:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21591"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n selector:", "completion": " matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21592"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | There are two approaches to delete objects managed by kubectl apply.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21593"} {"prompt": "Explain this bash code:\n\nkubectl apply -f <directory> --prune -l <labels> --prune-allowlist=<gvk-list>", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | FEATURE STATE: Kubernetes v1.27 [alpha]Caution:kubectl apply --prune --applyset is in alpha, and backwards incompatible changes might be introduced in subsequent releases.To use ApplySet-based pruning", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | FEATURE STATE: Kubernetes v1.27 [alpha]Caution:kubectl apply --prune --applyset is in alpha, and backwards incompatible changes might be introduced in subsequent releases.To use ApplySet-based pruning", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21594"} {"prompt": "Explain this text code:\n\nKUBECTL_APPLYSET=true kubectl apply -f <directory> --prune --applyset=<name>", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | By 
default, the type of the ApplySet parent object used is a Secret. However, ConfigMaps can also be used in the format: --applyset=configmaps/<name>. When using a Secret or ConfigMap, kubectl will cr", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | By default, the type of the ApplySet parent object used is a Secret. However, ConfigMaps can also be used in the format: --applyset=configmaps/<name>. When using a Secret or ConfigMap, kubectl will cr", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21595"} {"prompt": "Complete this yaml code:\n\napplication/update_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21596"} {"prompt": "Explain this yaml code:\n\napplication/update_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "language": "yaml", "type": "deployment-definition_explanation", 
"context": "Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21597"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:", "completion": " metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21598"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.16.1 # update the image\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Also, suppose this is the live configuration for the same Deployment object:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21599"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain 
replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...", "completion": " minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21600"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # note that the annotation does not contain replicas\n # because it was not updated through apply\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"minReadySeconds\":5,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n replicas: 2 # written by scale\n # ...\n minReadySeconds: 5\n selector:\n matchLabels:\n # ...\n app: nginx\n template:\n metadata:\n # 
...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Here are the merge calculations that would be performed by kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21601"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n selector:\n matchLabels:\n # ...\n app: nginx", "completion": " replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21602"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n annotations:\n # ...\n # The annotation contains the updated image to nginx 1.16.1,\n # but does not contain the updated replicas to 2\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\n \"metadata\":{\"annotations\":{},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\n \"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\n \"spec\":{\"containers\":[{\"image\":\"nginx:1.16.1\",\"name\":\"nginx\",\n \"ports\":[{\"containerPort\":80}]}]}}}} \n # ...\nspec:\n selector:\n matchLabels:\n # ...\n app: nginx\n replicas: 2 # Set by `kubectl scale`. 
Ignored by `kubectl apply`.\n # minReadySeconds cleared by `kubectl apply`\n # ...\n template:\n metadata:\n # ...\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.16.1 # Set by `kubectl apply`\n # ...\n name: nginx\n ports:\n - containerPort: 80\n # ...\n # ...\n # ...\n # ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21603"} {"prompt": "Complete this text code:\n\n# last-applied-configuration value\n args: [\"a\", \"b\"]\n\n# configuration file value\n args: [\"a\", \"c\"]", "completion": "\n# live configuration\n args: [\"a\", \"b\", \"d\"]\n\n# result after merge\n args: [\"a\", \"c\"]", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the configuration file value as the new list value.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21604"} {"prompt": "Explain this text code:\n\n# last-applied-configuration value\n args: [\"a\", \"b\"]\n\n# configuration file value\n args: [\"a\", \"c\"]\n\n# live configuration\n args: [\"a\", \"b\", \"d\"]\n\n# result after merge\n args: [\"a\", \"c\"]", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the 
configuration file value as the new list value.", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation: The merge used the configuration file value as the new list value.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21605"} {"prompt": "Complete this text code:\n\n# last-applied-configuration value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a # key: nginx-helper-a; will be deleted in result\n image: helper:1.3\n - name: nginx-helper-b # key: nginx-helper-b; will be retained\n image: helper:1.3\n\n# configuration file value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-b\n image: helper:1.3\n - name: nginx-helper-c # key: nginx-helper-c; will be added in result\n image: helper:1.3\n\n# live configuration\n containers:\n - name: nginx", "completion": " image: nginx:1.16\n - name: nginx-helper-a\n image: helper:1.3\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field will be retained\n - name: nginx-helper-d # key: nginx-helper-d; will be retained\n image: helper:1.3\n\n# result after merge\n containers:\n - name: nginx\n image: nginx:1.16\n # Element nginx-helper-a was deleted\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field was retained\n - name: nginx-helper-c # Element was added\n image: helper:1.3\n - name: nginx-helper-d # Element was ignored\n image: helper:1.3", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21606"} {"prompt": "Explain this text code:\n\n# last-applied-configuration value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a # key: 
nginx-helper-a; will be deleted in result\n image: helper:1.3\n - name: nginx-helper-b # key: nginx-helper-b; will be retained\n image: helper:1.3\n\n# configuration file value\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-b\n image: helper:1.3\n - name: nginx-helper-c # key: nginx-helper-c; will be added in result\n image: helper:1.3\n\n# live configuration\n containers:\n - name: nginx\n image: nginx:1.16\n - name: nginx-helper-a\n image: helper:1.3\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field will be retained\n - name: nginx-helper-d # key: nginx-helper-d; will be retained\n image: helper:1.3\n\n# result after merge\n containers:\n - name: nginx\n image: nginx:1.16\n # Element nginx-helper-a was deleted\n - name: nginx-helper-b\n image: helper:1.3\n args: [\"run\"] # Field was retained\n - name: nginx-helper-c # Element was added\n image: helper:1.3\n - name: nginx-helper-d # Element was ignored\n image: helper:1.3", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21607"} {"prompt": "Complete this yaml code:\n\napplication/simple_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21608"} {"prompt": "Explain this yaml code:\n\napplication/simple_deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21609"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5", "completion": " template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21610"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Declarative Management of Kubernetes Objects Using 
Configuration Files | Create the object using kubectl apply:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Create the object using kubectl apply:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21611"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Print the live configuration using kubectl get:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21612"} {"prompt": "Explain this bash code:\n\nkubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the API server set several fields to default values in the live configuration. These fields were not specified in the configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | The output shows that the API server set several fields to default values in the live configuration. 
These fields were not specified in the configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21613"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\n# ...\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n replicas: 1 # defaulted by apiserver\n strategy:\n rollingUpdate: # defaulted by apiserver - derived from strategy.type\n maxSurge: 1\n maxUnavailable: 1\n type: RollingUpdate # defaulted by apiserver\n template:\n metadata:\n creationTimestamp: null", "completion": " labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n imagePullPolicy: IfNotPresent # defaulted by apiserver\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP # defaulted by apiserver\n resources: {} # defaulted by apiserver\n terminationMessagePath: /dev/termination-log # defaulted by apiserver\n dnsPolicy: ClusterFirst # defaulted by apiserver\n restartPolicy: Always # defaulted by apiserver\n securityContext: {} # defaulted by apiserver\n terminationGracePeriodSeconds: 30 # defaulted by apiserver\n# ...", "language": "yaml", "type": "deployment-definition", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. 
This can cause unexpected behavior for fields that are defaulted based on the v", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21614"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\n# ...\nspec:\n selector:\n matchLabels:\n app: nginx\n minReadySeconds: 5\n replicas: 1 # defaulted by apiserver\n strategy:\n rollingUpdate: # defaulted by apiserver - derived from strategy.type\n maxSurge: 1\n maxUnavailable: 1\n type: RollingUpdate # defaulted by apiserver\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: nginx\n spec:\n containers:\n - image: nginx:1.14.2\n imagePullPolicy: IfNotPresent # defaulted by apiserver\n name: nginx\n ports:\n - containerPort: 80\n protocol: TCP # defaulted by apiserver\n resources: {} # defaulted by apiserver\n terminationMessagePath: /dev/termination-log # defaulted by apiserver\n dnsPolicy: ClusterFirst # defaulted by apiserver\n restartPolicy: Always # defaulted by apiserver\n securityContext: {} # defaulted by apiserver\n terminationGracePeriodSeconds: 30 # defaulted by apiserver\n# ...", "completion": "This yaml code Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. This can cause unexpected behavior for fields that are defaulted based on the v", "language": "yaml", "type": "deployment-definition_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. 
This can cause unexpected behavior for fields that are defaulted based on the v", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21615"} {"prompt": "Complete this text code:\n\n# last-applied-configuration\nspec:\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# configuration file\nspec:\n strategy:\n type: Recreate # updated value\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# live configuration\nspec:\n strategy:", "completion": " type: RollingUpdate # defaulted value\n rollingUpdate: # defaulted value derived from type\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# result after merge - ERROR!\nspec:\n strategy:\n type: Recreate # updated value: incompatible with rollingUpdate\n rollingUpdate: # defaulted value: incompatible with \"type: Recreate\"\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21616"} {"prompt": "Explain this text code:\n\n# last-applied-configuration\nspec:\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# configuration file\nspec:\n strategy:\n type: Recreate # updated value\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: 
nginx:1.14.2\n ports:\n - containerPort: 80\n\n# live configuration\nspec:\n strategy:\n type: RollingUpdate # defaulted value\n rollingUpdate: # defaulted value derived from type\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80\n\n# result after merge - ERROR!\nspec:\n strategy:\n type: Recreate # updated value: incompatible with rollingUpdate\n rollingUpdate: # defaulted value: incompatible with \"type: Recreate\"\n maxSurge : 1\n maxUnavailable: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Explanation:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21617"} {"prompt": "Explain this bash code:\n\nkubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the configuration file.", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Manually remove the status field from the configuration file.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21618"} {"prompt": "Explain this bash code:\n\nkubectl replace --save-config -f <kind>_<name>.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object exclusively.", 
"language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object exclusively.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21619"} {"prompt": "Explain this bash code:\n\nkubectl replace --save-config -f <kind>_<name>.yaml", "completion": "This bash code Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object exclusively.", "language": "bash", "type": "kubectl-command_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files | Change processes to use kubectl apply for managing the object exclusively.", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21620"} {"prompt": "Complete this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "template:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "language": "text", "type": "code-example", "context": "Declarative Management of Kubernetes Objects Using Configuration Files |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_completion_21621"} {"prompt": "Explain this text code:\n\nselector:\n matchLabels:\n controller-selector: \"apps/v1/deployment/nginx\"\ntemplate:\n metadata:\n labels:\n controller-selector: \"apps/v1/deployment/nginx\"", "completion": "This text code Declarative Management of Kubernetes Objects Using Configuration Files |", "language": "text", "type": "code-example_explanation", "context": "Declarative Management of Kubernetes Objects Using Configuration Files |", "source": 
"https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/", "id": "k8s_00681_code_explain_21622"} {"prompt": "Complete this text code:\n\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/etcd/ca.crt", "completion": "/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-ca.key", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21623"} {"prompt": "Explain this text code:\n\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/etcd/ca.crt\n/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-ca.key", "completion": "This text code PKI certificates and requirements | If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21624"} {"prompt": "Complete this text code:\n\n/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/etcd/ca.crt\n/etc/kubernetes/pki/apiserver-etcd-client.key\n/etc/kubernetes/pki/apiserver-etcd-client.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/apiserver.key\n/etc/kubernetes/pki/apiserver.crt\n/etc/kubernetes/pki/apiserver-kubelet-client.key\n/etc/kubernetes/pki/apiserver-kubelet-client.crt\n/etc/kubernetes/pki/front-proxy-ca.key", "completion": 
"/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-client.key\n/etc/kubernetes/pki/front-proxy-client.crt\n/etc/kubernetes/pki/etcd/server.key\n/etc/kubernetes/pki/etcd/server.crt\n/etc/kubernetes/pki/etcd/peer.key\n/etc/kubernetes/pki/etcd/peer.crt\n/etc/kubernetes/pki/etcd/healthcheck-client.key\n/etc/kubernetes/pki/etcd/healthcheck-client.crt\n/etc/kubernetes/pki/sa.key\n/etc/kubernetes/pki/sa.pub", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | You must manually configure these administrator accounts and service accounts:", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21625"} {"prompt": "Explain this text code:\n\n/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/etcd/ca.crt\n/etc/kubernetes/pki/apiserver-etcd-client.key\n/etc/kubernetes/pki/apiserver-etcd-client.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/apiserver.key\n/etc/kubernetes/pki/apiserver.crt\n/etc/kubernetes/pki/apiserver-kubelet-client.key\n/etc/kubernetes/pki/apiserver-kubelet-client.crt\n/etc/kubernetes/pki/front-proxy-ca.key\n/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-client.key\n/etc/kubernetes/pki/front-proxy-client.crt\n/etc/kubernetes/pki/etcd/server.key\n/etc/kubernetes/pki/etcd/server.crt\n/etc/kubernetes/pki/etcd/peer.key\n/etc/kubernetes/pki/etcd/peer.crt\n/etc/kubernetes/pki/etcd/healthcheck-client.key\n/etc/kubernetes/pki/etcd/healthcheck-client.crt\n/etc/kubernetes/pki/sa.key\n/etc/kubernetes/pki/sa.pub", "completion": "This text code PKI certificates and requirements | You must manually configure these administrator accounts and service accounts:", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | You must manually configure these administrator accounts and service accounts:", "source": 
"https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21626"} {"prompt": "Complete this text code:\n\nKUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs\nKUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs", "completion": "KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>\nKUBECONFIG=<filename> kubectl config use-context default-system", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | These files are used as follows:", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21627"} {"prompt": "Explain this text code:\n\nKUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs\nKUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs\nKUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>\nKUBECONFIG=<filename> kubectl config use-context default-system", "completion": "This text code PKI certificates and requirements | These files are used as follows:", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | These files are used as follows:", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21628"} {"prompt": "Complete this text code:\n\n/etc/kubernetes/admin.conf\n/etc/kubernetes/super-admin.conf", "completion": 
"/etc/kubernetes/kubelet.conf\n/etc/kubernetes/controller-manager.conf\n/etc/kubernetes/scheduler.conf", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | any other IP or DNS name you contact your cluster on (as used by kubeadm the load balancer stable IP and/or DNS name, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.clu", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21629"} {"prompt": "Explain this text code:\n\n/etc/kubernetes/admin.conf\n/etc/kubernetes/super-admin.conf\n/etc/kubernetes/kubelet.conf\n/etc/kubernetes/controller-manager.conf\n/etc/kubernetes/scheduler.conf", "completion": "This text code PKI certificates and requirements | any other IP or DNS name you contact your cluster on (as used by kubeadm the load balancer stable IP and/or DNS name, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.clu", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | any other IP or DNS name you contact your cluster on (as used by kubeadm the load balancer stable IP and/or DNS name, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.clu", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21630"} {"prompt": "Complete this text code:\n\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/etcd/ca.crt", "completion": "/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-ca.key", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21631"} {"prompt": 
"Explain this text code:\n\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/etcd/ca.crt\n/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-ca.key", "completion": "This text code PKI certificates and requirements | If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | If you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21632"} {"prompt": "Complete this text code:\n\n/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/etcd/ca.crt\n/etc/kubernetes/pki/apiserver-etcd-client.key\n/etc/kubernetes/pki/apiserver-etcd-client.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/apiserver.key\n/etc/kubernetes/pki/apiserver.crt\n/etc/kubernetes/pki/apiserver-kubelet-client.key\n/etc/kubernetes/pki/apiserver-kubelet-client.crt\n/etc/kubernetes/pki/front-proxy-ca.key", "completion": "/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-client.key\n/etc/kubernetes/pki/front-proxy-client.crt\n/etc/kubernetes/pki/etcd/server.key\n/etc/kubernetes/pki/etcd/server.crt\n/etc/kubernetes/pki/etcd/peer.key\n/etc/kubernetes/pki/etcd/peer.crt\n/etc/kubernetes/pki/etcd/healthcheck-client.key\n/etc/kubernetes/pki/etcd/healthcheck-client.crt\n/etc/kubernetes/pki/sa.key\n/etc/kubernetes/pki/sa.pub", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | You must manually configure these administrator accounts and service accounts:", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21633"} {"prompt": "Explain this text 
code:\n\n/etc/kubernetes/pki/etcd/ca.key\n/etc/kubernetes/pki/etcd/ca.crt\n/etc/kubernetes/pki/apiserver-etcd-client.key\n/etc/kubernetes/pki/apiserver-etcd-client.crt\n/etc/kubernetes/pki/ca.key\n/etc/kubernetes/pki/ca.crt\n/etc/kubernetes/pki/apiserver.key\n/etc/kubernetes/pki/apiserver.crt\n/etc/kubernetes/pki/apiserver-kubelet-client.key\n/etc/kubernetes/pki/apiserver-kubelet-client.crt\n/etc/kubernetes/pki/front-proxy-ca.key\n/etc/kubernetes/pki/front-proxy-ca.crt\n/etc/kubernetes/pki/front-proxy-client.key\n/etc/kubernetes/pki/front-proxy-client.crt\n/etc/kubernetes/pki/etcd/server.key\n/etc/kubernetes/pki/etcd/server.crt\n/etc/kubernetes/pki/etcd/peer.key\n/etc/kubernetes/pki/etcd/peer.crt\n/etc/kubernetes/pki/etcd/healthcheck-client.key\n/etc/kubernetes/pki/etcd/healthcheck-client.crt\n/etc/kubernetes/pki/sa.key\n/etc/kubernetes/pki/sa.pub", "completion": "This text code PKI certificates and requirements | You must manually configure these administrator accounts and service accounts:", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | You must manually configure these administrator accounts and service accounts:", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21634"} {"prompt": "Explain this text code:\n\nSubject: O = kubeadm:cluster-admins, CN = kubernetes-admin", "completion": "This text code PKI certificates and requirements | Another is in super-admin.conf that has Subject: O = system:masters, CN = kubernetes-super-admin. This file is generated only on the node where kubeadm init was called.", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | Another is in super-admin.conf that has Subject: O = system:masters, CN = kubernetes-super-admin. 
This file is generated only on the node where kubeadm init was called.", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21635"} {"prompt": "Explain this text code:\n\nSubject: O = system:masters, CN = kubernetes-super-admin", "completion": "This text code PKI certificates and requirements | For each configuration, generate an x509 certificate/key pair with the given Common Name (CN) and Organization (O).", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | For each configuration, generate an x509 certificate/key pair with the given Common Name (CN) and Organization (O).", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21636"} {"prompt": "Complete this text code:\n\nKUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs\nKUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs", "completion": "KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>\nKUBECONFIG=<filename> kubectl config use-context default-system", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | These files are used as follows:", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21637"} {"prompt": "Explain this text code:\n\nKUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs\nKUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs\nKUBECONFIG=<filename> kubectl config 
set-context default-system --cluster default-cluster --user <credential-name>\nKUBECONFIG=<filename> kubectl config use-context default-system", "completion": "This text code PKI certificates and requirements | These files are used as follows:", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | These files are used as follows:", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_explain_21638"} {"prompt": "Complete this text code:\n\n/etc/kubernetes/admin.conf\n/etc/kubernetes/super-admin.conf", "completion": "/etc/kubernetes/kubelet.conf\n/etc/kubernetes/controller-manager.conf\n/etc/kubernetes/scheduler.conf", "language": "text", "type": "code-example", "context": "PKI certificates and requirements | any other IP or DNS name you contact your cluster on (as used by kubeadm the load balancer stable IP and/or DNS name, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.clu", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": "k8s_00683_code_completion_21639"} {"prompt": "Explain this text code:\n\n/etc/kubernetes/admin.conf\n/etc/kubernetes/super-admin.conf\n/etc/kubernetes/kubelet.conf\n/etc/kubernetes/controller-manager.conf\n/etc/kubernetes/scheduler.conf", "completion": "This text code PKI certificates and requirements | any other IP or DNS name you contact your cluster on (as used by kubeadm the load balancer stable IP and/or DNS name, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.clu", "language": "text", "type": "code-example_explanation", "context": "PKI certificates and requirements | any other IP or DNS name you contact your cluster on (as used by kubeadm the load balancer stable IP and/or DNS name, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.clu", "source": "https://kubernetes.io/docs/setup/best-practices/certificates/", "id": 
"k8s_00683_code_explain_21640"} {"prompt": "Explain this text code:\n\nsudo tee /etc/sysctl.d/k8s.conf <<EOF\nnet.ipv4.ip_forward = 1\nEOF", "completion": "This text code Running Kubelet in Standalone Mode | Apply the changes to the system:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Apply the changes to the system:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21641"} {"prompt": "Complete this text code:\n\n...\n* Applying /etc/sysctl.d/k8s.conf ...", "completion": "net.ipv4.ip_forward = 1\n* Applying /etc/sysctl.conf ...", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | Download the latest available versions of the required packages (recommended).", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21642"} {"prompt": "Explain this text code:\n\n...\n* Applying /etc/sysctl.d/k8s.conf ...\nnet.ipv4.ip_forward = 1\n* Applying /etc/sysctl.conf ...", "completion": "This text code Running Kubelet in Standalone Mode | Download the latest available versions of the required packages (recommended).", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Download the latest available versions of the required packages (recommended).", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21643"} {"prompt": "Explain this text code:\n\ncurl https://raw.githubusercontent.com/cri-o/packaging/main/get > crio-install", "completion": "This text code Running Kubelet in Standalone Mode | Run the installer script:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Run the installer script:", "source": 
"https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21644"} {"prompt": "Explain this text code:\n\nsudo systemctl daemon-reload\nsudo systemctl enable --now crio.service", "completion": "This text code Running Kubelet in Standalone Mode | Quick test:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Quick test:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21645"} {"prompt": "Explain this text code:\n\nCNI bridge plugin v1.5.1\nCNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0", "completion": "This text code Running Kubelet in Standalone Mode | To check the default configuration:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | To check the default configuration:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21646"} {"prompt": "Complete this json code:\n\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"crio\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"hairpinMode\": true,\n \"ipam\": {", "completion": " \"type\": \"host-local\",\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" }\n ],\n \"ranges\": [\n [{ \"subnet\": \"10.85.0.0/16\" }]\n ]\n }\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21647"} {"prompt": "Explain this json code:\n\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"crio\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"hairpinMode\": true,\n 
\"ipam\": {\n \"type\": \"host-local\",\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" }\n ],\n \"ranges\": [\n [{ \"subnet\": \"10.85.0.0/16\" }]\n ]\n }\n }\n ]\n}", "completion": "This json code Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "language": "json", "type": "api-response_explanation", "context": "Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21648"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet\"", "completion": "This text code Running Kubelet in Standalone Mode | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21649"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "completion": "This text code Running Kubelet in Standalone Mode | Configure:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Configure:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21650"} {"prompt": "Complete this yaml code:\n\nsudo tee /etc/kubernetes/kubelet.yaml <<EOF\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n webhook:\n enabled: false # Do NOT use in production clusters!\nauthorization:\n mode: 
AlwaysAllow # Do NOT use in production clusters!", "completion": "enableServer: false\nlogging:\n format: text\naddress: 127.0.0.1 # Restrict access to localhost\nreadOnlyPort: 10255 # Do NOT use in production clusters!\nstaticPodPath: /etc/kubernetes/manifests\ncontainerRuntimeEndpoint: unix:///var/run/crio/crio.sock\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Running Kubelet in Standalone Mode | Because you are not setting up a production cluster, you are using plain HTTP (readOnlyPort: 10255) for unauthenticated queries to the kubelet's API.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21651"} {"prompt": "Explain this yaml code:\n\nsudo tee /etc/kubernetes/kubelet.yaml <<EOF\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n webhook:\n enabled: false # Do NOT use in production clusters!\nauthorization:\n mode: AlwaysAllow # Do NOT use in production clusters!\nenableServer: false\nlogging:\n format: text\naddress: 127.0.0.1 # Restrict access to localhost\nreadOnlyPort: 10255 # Do NOT use in production clusters!\nstaticPodPath: /etc/kubernetes/manifests\ncontainerRuntimeEndpoint: unix:///var/run/crio/crio.sock\nEOF", "completion": "This yaml code Running Kubelet in Standalone Mode | Because you are not setting up a production cluster, you are using plain HTTP (readOnlyPort: 10255) for unauthenticated queries to the kubelet's API.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Running Kubelet in Standalone Mode | Because you are not setting up a production cluster, you are using plain HTTP (readOnlyPort: 10255) for unauthenticated queries to the kubelet's API.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21652"} {"prompt": "Complete this text code:\n\nsudo tee /etc/systemd/system/kubelet.service 
<<EOF\n[Unit]\nDescription=Kubelet\n\n[Service]\nExecStart=/usr/bin/kubelet \\", "completion": " --config=/etc/kubernetes/kubelet.yaml\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\nEOF", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | The command line argument --kubeconfig has been intentionally omitted in the service configuration file. This argument sets the path to a kubeconfig file that specifies how to connect to the API serve", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21653"} {"prompt": "Explain this text code:\n\nsudo tee /etc/systemd/system/kubelet.service <<EOF\n[Unit]\nDescription=Kubelet\n\n[Service]\nExecStart=/usr/bin/kubelet \\\n --config=/etc/kubernetes/kubelet.yaml\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\nEOF", "completion": "This text code Running Kubelet in Standalone Mode | The command line argument --kubeconfig has been intentionally omitted in the service configuration file. This argument sets the path to a kubeconfig file that specifies how to connect to the API serve", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | The command line argument --kubeconfig has been intentionally omitted in the service configuration file. 
This argument sets the path to a kubeconfig file that specifies how to connect to the API serve", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21654"} {"prompt": "Explain this text code:\n\nsudo systemctl daemon-reload\nsudo systemctl enable --now kubelet.service", "completion": "This text code Running Kubelet in Standalone Mode | Quick test:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Quick test:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21655"} {"prompt": "Complete this json code:\n\n[+]ping ok\n[+]log ok", "completion": "[+]syncloop ok\nhealthz check passed", "language": "json", "type": "api-response", "context": "Running Kubelet in Standalone Mode | Query the kubelet's API /pods endpoint:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21656"} {"prompt": "Explain this json code:\n\n[+]ping ok\n[+]log ok\n[+]syncloop ok\nhealthz check passed", "completion": "This json code Running Kubelet in Standalone Mode | Query the kubelet's API /pods endpoint:", "language": "json", "type": "api-response_explanation", "context": "Running Kubelet in Standalone Mode | Query the kubelet's API /pods endpoint:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21657"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",", "completion": " \"metadata\": {},\n \"items\": null\n}", "language": "json", "type": "api-response", "context": "Running Kubelet in Standalone Mode | In standalone mode, you can run Pods using Pod manifests. 
The manifests can either be on the local filesystem, or fetched via HTTP from a configuration source.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21658"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {},\n \"items\": null\n}", "completion": "This json code Running Kubelet in Standalone Mode | In standalone mode, you can run Pods using Pod manifests. The manifests can either be on the local filesystem, or fetched via HTTP from a configuration source.", "language": "json", "type": "api-response_explanation", "context": "Running Kubelet in Standalone Mode | In standalone mode, you can run Pods using Pod manifests. The manifests can either be on the local filesystem, or fetched via HTTP from a configuration source.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21659"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\nspec:\n containers:", "completion": " - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "language": "yaml", "type": "pod-definition", "context": "Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21660"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "completion": "This yaml code Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "language": "yaml", "type": 
"pod-definition_explanation", "context": "Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21661"} {"prompt": "Explain this text code:\n\ncurl http://localhost:10255/pods | jq '.items[].status.podIP'", "completion": "This text code Running Kubelet in Standalone Mode | The output is similar to:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21662"} {"prompt": "Complete this text code:\n\n<!DOCTYPE html>\n<html>", "completion": "<head>\n<title>Welcome to nginx!\n...", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21663"} {"prompt": "Explain this text code:\n\n\n\n\nWelcome to nginx!\n...", "completion": "This text code Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21664"} {"prompt": "Complete this text 
code:\n\n/var/lib/cni\n/var/lib/containers\n/var/lib/kubelet", "completion": "\n/var/log/containers\n/var/log/pods", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21665"} {"prompt": "Explain this text code:\n\n/var/lib/cni\n/var/lib/containers\n/var/lib/kubelet\n\n/var/log/containers\n/var/log/pods", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21666"} {"prompt": "Complete this text code:\n\nsudo systemctl disable --now kubelet.service\nsudo systemctl daemon-reload\nsudo rm /etc/systemd/system/kubelet.service\nsudo rm /usr/bin/kubelet", "completion": "sudo rm -rf /etc/kubernetes\nsudo rm -rf /var/lib/kubelet\nsudo rm -rf /var/log/containers\nsudo rm -rf /var/log/pods", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21667"} {"prompt": "Explain this text code:\n\nsudo systemctl disable --now kubelet.service\nsudo systemctl daemon-reload\nsudo rm /etc/systemd/system/kubelet.service\nsudo rm /usr/bin/kubelet\nsudo rm -rf /etc/kubernetes\nsudo rm -rf /var/lib/kubelet\nsudo rm -rf /var/log/containers\nsudo rm -rf /var/log/pods", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21668"} {"prompt": "Complete this text code:\n\nsudo systemctl disable --now crio.service\nsudo systemctl daemon-reload\nsudo rm -rf /usr/local/bin\nsudo rm -rf /usr/local/lib", "completion": "sudo rm -rf /usr/local/share\nsudo rm -rf /usr/libexec/crio\nsudo rm -rf /etc/crio\nsudo rm -rf /etc/containers", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21669"} {"prompt": "Explain this text code:\n\nsudo systemctl disable --now crio.service\nsudo systemctl daemon-reload\nsudo rm -rf /usr/local/bin\nsudo rm -rf /usr/local/lib\nsudo rm -rf /usr/local/share\nsudo rm -rf /usr/libexec/crio\nsudo rm -rf /etc/crio\nsudo rm -rf /etc/containers", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21670"} {"prompt": "Explain this text code:\n\nsudo rm -rf /opt/cni\nsudo rm -rf /etc/cni\nsudo rm -rf /var/lib/cni", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21671"} {"prompt": "Explain this shell code:\n\nsudo tee /etc/sysctl.d/k8s.conf <<EOF\nnet.ipv4.ip_forward = 1\nEOF", "completion": "This shell code Running Kubelet in Standalone Mode | Apply the changes to the system:", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Apply the changes to the system:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21672"} {"prompt": "Explain this shell code:\n\ncurl https://raw.githubusercontent.com/cri-o/packaging/main/get > crio-install", "completion": "This shell code Running Kubelet in Standalone Mode | Run the installer script:", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Run the installer script:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21675"} {"prompt": "Explain this shell code:\n\nsudo systemctl daemon-reload\nsudo systemctl enable --now crio.service", "completion": "This shell code Running Kubelet in Standalone Mode | Quick test:", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Quick test:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21676"} {"prompt": "Explain this text code:\n\nCNI bridge plugin v1.5.1\nCNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0", "completion": "This text code Running Kubelet in Standalone Mode | To check the default configuration:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | To check the default configuration:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21677"} {"prompt": "Complete this json code:\n\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"crio\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"hairpinMode\": true,\n \"ipam\": {", "completion": " \"type\": \"host-local\",\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" }\n ],\n \"ranges\": [\n [{ \"subnet\": \"10.85.0.0/16\" }]\n ]\n 
}\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21678"} {"prompt": "Explain this json code:\n\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"crio\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"hairpinMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" }\n ],\n \"ranges\": [\n [{ \"subnet\": \"10.85.0.0/16\" }]\n ]\n }\n }\n ]\n}", "completion": "This json code Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "language": "json", "type": "api-response_explanation", "context": "Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21679"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet\"", "completion": "This bash code Running Kubelet in Standalone Mode | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "language": "bash", "type": "shell-command_explanation", "context": "Running Kubelet in Standalone Mode | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21680"} {"prompt": "Explain this bash code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "completion": "This bash code Running Kubelet in Standalone Mode | 
Configure:", "language": "bash", "type": "shell-command_explanation", "context": "Running Kubelet in Standalone Mode | Configure:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21681"} {"prompt": "Complete this shell code:\n\nsudo tee /etc/kubernetes/kubelet.yaml < static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\nspec:\n containers:", "completion": " - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "language": "shell", "type": "code-example", "context": "Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21691"} {"prompt": "Explain this shell code:\n\ncat < static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "completion": "This shell code Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21692"} {"prompt": "Explain this shell code:\n\ncurl http://localhost:10255/pods | jq '.items[].status.podIP'", "completion": "This shell code Running Kubelet in Standalone Mode | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": 
"k8s_00684_code_explain_21693"} {"prompt": "Complete this html code:\n\n\n", "completion": "\nWelcome to nginx!\n...", "language": "html", "type": "code-example", "context": "Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21694"} {"prompt": "Explain this html code:\n\n\n\n\nWelcome to nginx!\n...", "completion": "This html code Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "language": "html", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21695"} {"prompt": "Complete this text code:\n\n/var/lib/cni\n/var/lib/containers\n/var/lib/kubelet", "completion": "\n/var/log/containers\n/var/log/pods", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21696"} {"prompt": "Explain this text code:\n\n/var/lib/cni\n/var/lib/containers\n/var/lib/kubelet\n\n/var/log/containers\n/var/log/pods", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21697"} {"prompt": "Complete this shell code:\n\nsudo systemctl disable --now kubelet.service\nsudo systemctl daemon-reload\nsudo rm /etc/systemd/system/kubelet.service\nsudo rm /usr/bin/kubelet", "completion": "sudo rm -rf /etc/kubernetes\nsudo rm -rf /var/lib/kubelet\nsudo rm -rf /var/log/containers\nsudo rm -rf /var/log/pods", "language": "shell", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21698"} {"prompt": "Explain this shell code:\n\nsudo systemctl disable --now kubelet.service\nsudo systemctl daemon-reload\nsudo rm /etc/systemd/system/kubelet.service\nsudo rm /usr/bin/kubelet\nsudo rm -rf /etc/kubernetes\nsudo rm -rf /var/lib/kubelet\nsudo rm -rf /var/log/containers\nsudo rm -rf /var/log/pods", "completion": "This shell code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21699"} {"prompt": "Complete this shell code:\n\nsudo systemctl disable --now crio.service\nsudo systemctl daemon-reload\nsudo rm -rf /usr/local/bin\nsudo rm -rf /usr/local/lib", "completion": "sudo rm -rf /usr/local/share\nsudo rm -rf /usr/libexec/crio\nsudo rm -rf /etc/crio\nsudo rm -rf /etc/containers", "language": "shell", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21700"} {"prompt": "Explain this shell code:\n\nsudo systemctl disable --now crio.service\nsudo systemctl daemon-reload\nsudo rm -rf /usr/local/bin\nsudo rm -rf /usr/local/lib\nsudo rm -rf /usr/local/share\nsudo rm -rf /usr/libexec/crio\nsudo rm -rf /etc/crio\nsudo rm -rf /etc/containers", "completion": "This shell code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21701"} {"prompt": "Explain this shell code:\n\nsudo rm -rf /opt/cni\nsudo rm -rf /etc/cni\nsudo rm -rf /var/lib/cni", "completion": "This shell code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "shell", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21702"} {"prompt": "Explain this text code:\n\nsudo tee /etc/sysctl.d/k8s.conf < crio-install", "completion": "This text code Running Kubelet in Standalone Mode | Run the installer script:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Run the installer script:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21704"} {"prompt": "Explain this text code:\n\nsudo systemctl daemon-reload\nsudo systemctl enable --now crio.service", "completion": "This text code Running Kubelet in Standalone Mode | Quick test:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Quick test:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21705"} {"prompt": "Complete this json code:\n\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"crio\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n 
\"hairpinMode\": true,\n \"ipam\": {", "completion": " \"type\": \"host-local\",\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" }\n ],\n \"ranges\": [\n [{ \"subnet\": \"10.85.0.0/16\" }]\n ]\n }\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21706"} {"prompt": "Explain this json code:\n\n{\n \"cniVersion\": \"1.0.0\",\n \"name\": \"crio\",\n \"plugins\": [\n {\n \"type\": \"bridge\",\n \"bridge\": \"cni0\",\n \"isGateway\": true,\n \"ipMasq\": true,\n \"hairpinMode\": true,\n \"ipam\": {\n \"type\": \"host-local\",\n \"routes\": [\n { \"dst\": \"0.0.0.0/0\" }\n ],\n \"ranges\": [\n [{ \"subnet\": \"10.85.0.0/16\" }]\n ]\n }\n }\n ]\n}", "completion": "This json code Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "language": "json", "type": "api-response_explanation", "context": "Running Kubelet in Standalone Mode | Download the latest stable release of the kubelet.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21707"} {"prompt": "Explain this text code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet\"", "completion": "This text code Running Kubelet in Standalone Mode | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21708"} {"prompt": "Explain this text 
code:\n\ncurl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubelet\"", "completion": "This text code Running Kubelet in Standalone Mode | Configure:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | Configure:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21709"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\nspec:\n containers:", "completion": " - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "language": "yaml", "type": "pod-definition", "context": "Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21717"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > static-web.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: static-web\nspec:\n containers:\n - name: web\n image: nginx\n ports:\n - name: web\n containerPort: 80\n protocol: TCP\nEOF", "completion": "This yaml code Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "language": "yaml", "type": "pod-definition_explanation", "context": "Running Kubelet in Standalone Mode | Copy the static-web.yaml manifest file to the /etc/kubernetes/manifests directory.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21718"} {"prompt": "Explain this text code:\n\ncurl http://localhost:10255/pods | jq '.items[].status.podIP'", "completion": "This text code Running Kubelet in Standalone Mode | The output is similar to:", "language": "text", "type": 
"code-example_explanation", "context": "Running Kubelet in Standalone Mode | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21719"} {"prompt": "Complete this text code:\n\n\n", "completion": "\nWelcome to nginx!\n...", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21720"} {"prompt": "Explain this text code:\n\n\n\n\nWelcome to nginx!\n...", "completion": "This text code Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | If you need to diagnose a problem getting this tutorial to work, you can look within the following directories for monitoring and troubleshooting:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21721"} {"prompt": "Complete this text code:\n\nsudo systemctl disable --now kubelet.service\nsudo systemctl daemon-reload\nsudo rm /etc/systemd/system/kubelet.service\nsudo rm /usr/bin/kubelet", "completion": "sudo rm -rf /etc/kubernetes\nsudo rm -rf /var/lib/kubelet\nsudo rm -rf /var/log/containers\nsudo rm -rf /var/log/pods", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21722"} {"prompt": "Explain this text code:\n\nsudo systemctl disable --now kubelet.service\nsudo systemctl daemon-reload\nsudo rm /etc/systemd/system/kubelet.service\nsudo rm /usr/bin/kubelet\nsudo rm -rf /etc/kubernetes\nsudo rm -rf /var/lib/kubelet\nsudo rm -rf /var/log/containers\nsudo rm -rf /var/log/pods", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21723"} {"prompt": "Complete this text code:\n\nsudo systemctl disable --now crio.service\nsudo systemctl daemon-reload\nsudo rm -rf /usr/local/bin\nsudo rm -rf /usr/local/lib", "completion": "sudo rm -rf /usr/local/share\nsudo rm -rf /usr/libexec/crio\nsudo rm -rf /etc/crio\nsudo rm -rf /etc/containers", "language": "text", "type": "code-example", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_completion_21724"} {"prompt": "Explain this text code:\n\nsudo systemctl disable --now crio.service\nsudo systemctl daemon-reload\nsudo rm -rf /usr/local/bin\nsudo rm -rf /usr/local/lib\nsudo rm -rf /usr/local/share\nsudo rm -rf /usr/libexec/crio\nsudo rm -rf /etc/crio\nsudo rm -rf /etc/containers", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21725"} {"prompt": "Explain this text code:\n\nsudo rm -rf /opt/cni\nsudo rm -rf /etc/cni\nsudo rm -rf /var/lib/cni", "completion": "This text code Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. You are now ready to deploy Pods and test additional functionality.", "language": "text", "type": "code-example_explanation", "context": "Running Kubelet in Standalone Mode | This page covered the basic aspects of deploying a kubelet in standalone mode. 
You are now ready to deploy Pods and test additional functionality.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/", "id": "k8s_00684_code_explain_21726"} {"prompt": "Explain this text code:\n\nmkdir $GOPATH/src\ncd $GOPATH/src\ngo get github.com/kubernetes/kubernetes", "completion": "This text code Contributing to the Upstream Kubernetes Code | Determine the base directory of your clone of the kubernetes/kubernetes repository. For example, if you followed the preceding step to get the repository, your base directory is $GOPATH/src/github.com", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Determine the base directory of your clone of the kubernetes/kubernetes repository. For example, if you followed the preceding step to get the repository, your base directory is $GOPATH/src/github.com", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21727"} {"prompt": "Explain this text code:\n\ncd \ngit checkout master\ngit pull https://github.com/kubernetes/kubernetes master", "completion": "This text code Contributing to the Upstream Kubernetes Code | Suppose this source file in that default branch has the typo \"atmost\":", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Suppose this source file in that default branch has the typo \"atmost\":", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21728"} {"prompt": "Explain this text code:\n\nOn branch master\n...\n modified: staging/src/k8s.io/api/apps/v1/types.go", "completion": "This text code Contributing to the Upstream Kubernetes Code | Run git add and git commit to commit the changes you have made so far. In the next step, you will do a second commit. 
It is important to keep your changes separated into two commits.", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Run git add and git commit to commit the changes you have made so far. In the next step, you will do a second commit. It is important to keep your changes separated into two commits.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21729"} {"prompt": "Explain this text code:\n\n./hack/update-codegen.sh\n./hack/update-openapi-spec.sh", "completion": "This text code Contributing to the Upstream Kubernetes Code | Run git status to see what was generated.", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Run git status to see what was generated.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21730"} {"prompt": "Complete this text code:\n\nOn branch master\n...\n modified: api/openapi-spec/swagger.json", "completion": " modified: api/openapi-spec/v3/apis__apps__v1_openapi.json\n modified: pkg/generated/openapi/zz_generated.openapi.go\n modified: staging/src/k8s.io/api/apps/v1/generated.proto\n modified: staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go", "language": "text", "type": "code-example", "context": "Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. 
This is important, because swagger.json is the", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_completion_21731"} {"prompt": "Explain this text code:\n\nOn branch master\n...\n modified: api/openapi-spec/swagger.json\n modified: api/openapi-spec/v3/apis__apps__v1_openapi.json\n modified: pkg/generated/openapi/zz_generated.openapi.go\n modified: staging/src/k8s.io/api/apps/v1/generated.proto\n modified: staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go", "completion": "This text code Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. This is important, because swagger.json is the", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. This is important, because swagger.json is the", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21732"} {"prompt": "Explain this text code:\n\n./hack/update-codegen.sh\n./hack/update-openapi-spec.sh", "completion": "This text code Contributing to the Upstream Kubernetes Code | Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec and related files. Monitor your pull request until it gets merged into the release-1.33 branch.", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec and related files. 
Monitor your pull request until it gets merged into the release-1.33 branch.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21733"} {"prompt": "Explain this shell code:\n\nmkdir $GOPATH/src\ncd $GOPATH/src\ngo get github.com/kubernetes/kubernetes", "completion": "This shell code Contributing to the Upstream Kubernetes Code | Determine the base directory of your clone of the kubernetes/kubernetes repository. For example, if you followed the preceding step to get the repository, your base directory is $GOPATH/src/github.com", "language": "shell", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Determine the base directory of your clone of the kubernetes/kubernetes repository. For example, if you followed the preceding step to get the repository, your base directory is $GOPATH/src/github.com", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21734"} {"prompt": "Explain this text code:\n\n$GOPATH/src/github.com/kubernetes-sigs/reference-docs", "completion": "This text code Contributing to the Upstream Kubernetes Code | The Kubernetes API reference documentation is automatically generated from an OpenAPI spec, which is generated from the Kubernetes source code. If you want to change the API reference documentation, t", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | The Kubernetes API reference documentation is automatically generated from an OpenAPI spec, which is generated from the Kubernetes source code. 
If you want to change the API reference documentation, t", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21735"} {"prompt": "Explain this shell code:\n\ncd \ngit checkout master\ngit pull https://github.com/kubernetes/kubernetes master", "completion": "This shell code Contributing to the Upstream Kubernetes Code | Suppose this source file in that default branch has the typo \"atmost\":", "language": "shell", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Suppose this source file in that default branch has the typo \"atmost\":", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21736"} {"prompt": "Explain this shell code:\n\nOn branch master\n...\n modified: staging/src/k8s.io/api/apps/v1/types.go", "completion": "This shell code Contributing to the Upstream Kubernetes Code | Run git add and git commit to commit the changes you have made so far. In the next step, you will do a second commit. It is important to keep your changes separated into two commits.", "language": "shell", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Run git add and git commit to commit the changes you have made so far. In the next step, you will do a second commit. 
It is important to keep your changes separated into two commits.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21737"} {"prompt": "Explain this shell code:\n\n./hack/update-codegen.sh\n./hack/update-openapi-spec.sh", "completion": "This shell code Contributing to the Upstream Kubernetes Code | Run git status to see what was generated.", "language": "shell", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Run git status to see what was generated.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21738"} {"prompt": "Complete this shell code:\n\nOn branch master\n...\n modified: api/openapi-spec/swagger.json", "completion": " modified: api/openapi-spec/v3/apis__apps__v1_openapi.json\n modified: pkg/generated/openapi/zz_generated.openapi.go\n modified: staging/src/k8s.io/api/apps/v1/generated.proto\n modified: staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go", "language": "shell", "type": "code-example", "context": "Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. 
This is important, because swagger.json is the", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_completion_21739"} {"prompt": "Explain this shell code:\n\nOn branch master\n...\n modified: api/openapi-spec/swagger.json\n modified: api/openapi-spec/v3/apis__apps__v1_openapi.json\n modified: pkg/generated/openapi/zz_generated.openapi.go\n modified: staging/src/k8s.io/api/apps/v1/generated.proto\n modified: staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go", "completion": "This shell code Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. This is important, because swagger.json is the", "language": "shell", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. This is important, because swagger.json is the", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21740"} {"prompt": "Explain this shell code:\n\n./hack/update-codegen.sh\n./hack/update-openapi-spec.sh", "completion": "This shell code Contributing to the Upstream Kubernetes Code | Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec and related files. Monitor your pull request until it gets merged into the release-1.33 branch.", "language": "shell", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec and related files. 
Monitor your pull request until it gets merged into the release-1.33 branch.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21741"} {"prompt": "Explain this text code:\n\nmkdir $GOPATH/src\ncd $GOPATH/src\ngo get github.com/kubernetes/kubernetes", "completion": "This text code Contributing to the Upstream Kubernetes Code | Determine the base directory of your clone of the kubernetes/kubernetes repository. For example, if you followed the preceding step to get the repository, your base directory is $GOPATH/src/github.com", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Determine the base directory of your clone of the kubernetes/kubernetes repository. For example, if you followed the preceding step to get the repository, your base directory is $GOPATH/src/github.com", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21742"} {"prompt": "Explain this text code:\n\ncd \ngit checkout master\ngit pull https://github.com/kubernetes/kubernetes master", "completion": "This text code Contributing to the Upstream Kubernetes Code | Suppose this source file in that default branch has the typo \"atmost\":", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Suppose this source file in that default branch has the typo \"atmost\":", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21743"} {"prompt": "Explain this text code:\n\nOn branch master\n...\n modified: staging/src/k8s.io/api/apps/v1/types.go", "completion": "This text code Contributing to the Upstream Kubernetes Code | Run git add and git commit to commit the changes you have made so far. In the next step, you will do a second commit. 
It is important to keep your changes separated into two commits.", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Run git add and git commit to commit the changes you have made so far. In the next step, you will do a second commit. It is important to keep your changes separated into two commits.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21744"} {"prompt": "Explain this text code:\n\n./hack/update-codegen.sh\n./hack/update-openapi-spec.sh", "completion": "This text code Contributing to the Upstream Kubernetes Code | Run git status to see what was generated.", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Run git status to see what was generated.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21745"} {"prompt": "Complete this text code:\n\nOn branch master\n...\n modified: api/openapi-spec/swagger.json", "completion": " modified: api/openapi-spec/v3/apis__apps__v1_openapi.json\n modified: pkg/generated/openapi/zz_generated.openapi.go\n modified: staging/src/k8s.io/api/apps/v1/generated.proto\n modified: staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go", "language": "text", "type": "code-example", "context": "Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. 
This is important, because swagger.json is the", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_completion_21746"} {"prompt": "Explain this text code:\n\nOn branch master\n...\n modified: api/openapi-spec/swagger.json\n modified: api/openapi-spec/v3/apis__apps__v1_openapi.json\n modified: pkg/generated/openapi/zz_generated.openapi.go\n modified: staging/src/k8s.io/api/apps/v1/generated.proto\n modified: staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go", "completion": "This text code Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. This is important, because swagger.json is the", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | View the contents of api/openapi-spec/swagger.json to make sure the typo is fixed. For example, you could run git diff -a api/openapi-spec/swagger.json. This is important, because swagger.json is the", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21747"} {"prompt": "Explain this text code:\n\n./hack/update-codegen.sh\n./hack/update-openapi-spec.sh", "completion": "This text code Contributing to the Upstream Kubernetes Code | Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec and related files. Monitor your pull request until it gets merged into the release-1.33 branch.", "language": "text", "type": "code-example_explanation", "context": "Contributing to the Upstream Kubernetes Code | Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec and related files. 
Monitor your pull request until it gets merged into the release-1.33 branch.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/contribute-upstream/", "id": "k8s_00687_code_explain_21748"} {"prompt": "Explain this bash code:\n\nkubectl scale statefulsets --replicas=", "completion": "This bash code Scale a StatefulSet | Alternatively, you can do in-place updates on your StatefulSets.", "language": "bash", "type": "kubectl-command_explanation", "context": "Scale a StatefulSet | Alternatively, you can do in-place updates on your StatefulSets.", "source": "https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/", "id": "k8s_00689_code_explain_21749"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulsets -p '{\"spec\":{\"replicas\":}}'", "completion": "This bash code Scale a StatefulSet | You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. Scaling down only takes place after those stateful Pods become running and ready.", "language": "bash", "type": "kubectl-command_explanation", "context": "Scale a StatefulSet | You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. 
Scaling down only takes place after those stateful Pods become running and ready.", "source": "https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/", "id": "k8s_00689_code_explain_21750"} {"prompt": "Explain this shell code:\n\nkubectl scale statefulsets --replicas=", "completion": "This shell code Scale a StatefulSet | Alternatively, you can do in-place updates on your StatefulSets.", "language": "shell", "type": "code-example_explanation", "context": "Scale a StatefulSet | Alternatively, you can do in-place updates on your StatefulSets.", "source": "https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/", "id": "k8s_00689_code_explain_21751"} {"prompt": "Explain this shell code:\n\nkubectl patch statefulsets -p '{\"spec\":{\"replicas\":}}'", "completion": "This shell code Scale a StatefulSet | You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. Scaling down only takes place after those stateful Pods become running and ready.", "language": "shell", "type": "code-example_explanation", "context": "Scale a StatefulSet | You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. 
Scaling down only takes place after those stateful Pods become running and ready.", "source": "https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/", "id": "k8s_00689_code_explain_21752"} {"prompt": "Explain this bash code:\n\nkubectl scale statefulsets --replicas=", "completion": "This bash code Scale a StatefulSet | Alternatively, you can do in-place updates on your StatefulSets.", "language": "bash", "type": "kubectl-command_explanation", "context": "Scale a StatefulSet | Alternatively, you can do in-place updates on your StatefulSets.", "source": "https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/", "id": "k8s_00689_code_explain_21753"} {"prompt": "Explain this bash code:\n\nkubectl patch statefulsets -p '{\"spec\":{\"replicas\":}}'", "completion": "This bash code Scale a StatefulSet | You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. Scaling down only takes place after those stateful Pods become running and ready.", "language": "bash", "type": "kubectl-command_explanation", "context": "Scale a StatefulSet | You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. 
Scaling down only takes place after those stateful Pods become running and ready.", "source": "https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/", "id": "k8s_00689_code_explain_21754"} {"prompt": "Complete this text code:\n\n# Set the last-applied-configuration of a resource to match the contents of a file\n kubectl apply set-last-applied -f deploy.yaml\n \n # Execute set-last-applied against each configuration file in a directory", "completion": " kubectl apply set-last-applied -f path/\n \n # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist\n kubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "language": "text", "type": "code-example", "context": "kubectl apply set-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_set-last-applied/", "id": "k8s_00690_code_completion_21755"} {"prompt": "Explain this text code:\n\n# Set the last-applied-configuration of a resource to match the contents of a file\n kubectl apply set-last-applied -f deploy.yaml\n \n # Execute set-last-applied against each configuration file in a directory\n kubectl apply set-last-applied -f path/\n \n # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist\n kubectl apply set-last-applied -f deploy.yaml --create-annotation=true", "completion": "This text code kubectl apply set-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl apply set-last-applied | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/kubectl_apply_set-last-applied/", "id": "k8s_00690_code_explain_21756"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_completion_21759"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_explain_21760"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This bash code Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "bash", "type": "kubectl-command_explanation", "context": "Pods | Pods are generally not created directly and are created using workload resources. 
See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_explain_21761"} {"prompt": "Complete this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template", "completion": " spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "language": "yaml", "type": "kubernetes-manifest", "context": "Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_completion_21762"} {"prompt": "Explain this yaml code:\n\napiVersion: batch/v1\nkind: Job\nmetadata:\n name: hello\nspec:\n template:\n # This is the pod template\n spec:\n containers:\n - name: hello\n image: busybox:1.28\n command: ['sh', '-c', 'echo \"Hello, Kubernetes!\" && sleep 3600']\n restartPolicy: OnFailure\n # The pod template ends here", "completion": "This yaml code Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Pods | Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. 
If you change the pod template for a workload resource, that resource needs to create", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_explain_21763"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml", "completion": "This shell code Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "language": "shell", "type": "code-example_explanation", "context": "Pods | Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_explain_21766"} {"prompt": "Complete this yaml code:\n\npods/simple-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx", "completion": "spec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_completion_21769"} {"prompt": "Explain this yaml code:\n\npods/simple-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Pods | To create the Pod shown above, run the following command:", "language": "yaml", "type": "pod-definition_explanation", "context": "Pods | To create the Pod shown above, run the following command:", "source": "https://kubernetes.io/docs/concepts/workloads/pods/", "id": "k8s_00694_code_explain_21770"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_completion_21776"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n 
spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21777"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This bash code ReplicationController | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21778"} {"prompt": "Complete this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: \nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:", "completion": " Image: nginx\n Port: 80/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "language": "text", "type": "code-example", "context": "ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. 
A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_completion_21779"} {"prompt": "Explain this text code:\n\nName: nginx\nNamespace: default\nSelector: app=nginx\nLabels: app=nginx\nAnnotations: \nReplicas: 3 current / 3 desired\nPods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=nginx\n Containers:\n nginx:\n Image: nginx\n Port: 80/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n FirstSeen LastSeen Count From SubobjectPath Type Reason Message\n --------- -------- ----- ---- ------------- ---- ------ -------\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0\n 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v", "completion": "This text code ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:", "language": "text", "type": "code-example_explanation", "context": "ReplicationController | Here, three pods are created, but none is running yet, perhaps because the image is being pulled. 
A little later, the same command may show:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21780"} {"prompt": "Explain this text code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This text code ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "text", "type": "code-example_explanation", "context": "ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21781"} {"prompt": "Explain this text code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This text code ReplicationController | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21782"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/controllers/replication.yaml", "completion": "This shell code ReplicationController | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21785"} {"prompt": "Explain this shell code:\n\nPods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed", "completion": "This shell code ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "language": "shell", "type": "code-example_explanation", "context": "ReplicationController | To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21788"} {"prompt": "Explain this shell code:\n\npods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})\necho $pods", "completion": "This shell code ReplicationController | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "ReplicationController | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21789"} {"prompt": "Complete this yaml code:\n\ncontrollers/replication.yaml\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:", "completion": " metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "kubernetes-manifest", "context": "ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_completion_21790"} {"prompt": "Explain this yaml 
code:\n\ncontrollers/replication.yaml\napiVersion: v1\nkind: ReplicationController\nmetadata:\n name: nginx\nspec:\n replicas: 3\n selector:\n app: nginx\n template:\n metadata:\n name: nginx\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code ReplicationController | Run the example job by downloading the example file and then running this command:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "ReplicationController | Run the example job by downloading the example file and then running this command:", "source": "https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/", "id": "k8s_00695_code_explain_21791"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-ctr", "completion": " image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: 
\"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21797"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21798"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21799"} {"prompt": "Explain this bash code:\n\nkubectl get pod qos-demo --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Guaranteed. The output also verifies that the Pod Container has a memory request that matches its memory limit, and it has a CPU request th", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Guaranteed. 
The output also verifies that the Pod Container has a memory request that matches its memory limit, and it has a CPU request th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21800"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: 700m", "completion": " memory: 200Mi\n requests:\n cpu: 700m\n memory: 200Mi\n ...\nstatus:\n qosClass: Guaranteed", "language": "text", "type": "code-example", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21801"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: 700m\n memory: 200Mi\n requests:\n cpu: 700m\n memory: 200Mi\n ...\nstatus:\n qosClass: Guaranteed", "completion": "This text code Configure Quality of Service for Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21802"} {"prompt": "Explain this bash code:\n\nkubectl delete pod qos-demo --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | A Pod is given a QoS class of Burstable if:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | A Pod is given a QoS class of Burstable if:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21803"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:\n containers:", "completion": " - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n 
requests:\n memory: \"100Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21804"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21805"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-2.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21806"} {"prompt": "Explain this bash code:\n\nkubectl get pod qos-demo-2 --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21807"} {"prompt": "Complete this 
text code:\n\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: qos-demo-2-ctr\n resources:", "completion": " limits:\n memory: 200Mi\n requests:\n memory: 100Mi\n ...\nstatus:\n qosClass: Burstable", "language": "text", "type": "code-example", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21808"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: qos-demo-2-ctr\n resources:\n limits:\n memory: 200Mi\n requests:\n memory: 100Mi\n ...\nstatus:\n qosClass: Burstable", "completion": "This text code Configure Quality of Service for Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21809"} {"prompt": "Explain this bash code:\n\nkubectl delete pod qos-demo-2 --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21810"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3", "completion": " namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for 
Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21811"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21812"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-3.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21813"} {"prompt": "Explain this bash code:\n\nkubectl get pod qos-demo-3 --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of BestEffort:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of BestEffort:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21814"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...", "completion": " resources: {}\n ...\nstatus:\n qosClass: BestEffort", "language": "text", "type": "code-example", "context": "Configure Quality of Service 
for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21815"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n resources: {}\n ...\nstatus:\n qosClass: BestEffort", "completion": "This text code Configure Quality of Service for Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21816"} {"prompt": "Explain this bash code:\n\nkubectl delete pod qos-demo-3 --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | Here is a manifest for a Pod that has two Containers. One container specifies a memory request of 200 MiB. The other Container does not specify any requests or limits.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | Here is a manifest for a Pod that has two Containers. One container specifies a memory request of 200 MiB. The other Container does not specify any requests or limits.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21817"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:\n", "completion": " - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. 
That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21818"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:\n\n - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "completion": "This yaml code Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21819"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-4.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21820"} {"prompt": "Explain this bash code:\n\nkubectl get pod qos-demo-4 --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21821"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...\n name: qos-demo-4-ctr-1\n resources:\n requests:", "completion": " memory: 200Mi\n ...\n name: qos-demo-4-ctr-2\n resources: {}\n ...\nstatus:\n qosClass: Burstable", "language": "text", "type": "code-example", "context": "Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21822"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n name: qos-demo-4-ctr-1\n resources:\n requests:\n memory: 200Mi\n ...\n name: qos-demo-4-ctr-2\n resources: {}\n ...\nstatus:\n qosClass: Burstable", "completion": "This text code Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21823"} {"prompt": "Explain this bash code:\n\nkubectl --namespace=qos-example get pod qos-demo-4 -o jsonpath='{ .status.qosClass}{\"\\n\"}'", "completion": "This bash code Configure Quality of Service for Pods | Delete your namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | Delete your namespace:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21824"} {"prompt": 
"Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-ctr", "completion": " image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21825"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21826"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod.yaml --namespace=qos-example", "completion": "This shell code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21827"} {"prompt": "Explain this shell code:\n\nkubectl get pod qos-demo --namespace=qos-example --output=yaml", "completion": "This shell code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Guaranteed. 
The output also verifies that the Pod Container has a memory request that matches its memory limit, and it has a CPU request th", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Guaranteed. The output also verifies that the Pod Container has a memory request that matches its memory limit, and it has a CPU request th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21828"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: 700m", "completion": " memory: 200Mi\n requests:\n cpu: 700m\n memory: 200Mi\n ...\nstatus:\n qosClass: Guaranteed", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21829"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: 700m\n memory: 200Mi\n requests:\n cpu: 700m\n memory: 200Mi\n ...\nstatus:\n qosClass: Guaranteed", "completion": "This yaml code Configure Quality of Service for Pods | Delete your Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21830"} {"prompt": "Explain this shell code:\n\nkubectl delete pod qos-demo --namespace=qos-example", "completion": "This shell code Configure Quality of Service for Pods | A Pod is given a QoS class of Burstable if:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | A Pod is given a QoS class of Burstable if:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21831"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:\n containers:", "completion": " - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21832"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21833"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-2.yaml --namespace=qos-example", "completion": "This shell code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21834"} {"prompt": "Explain this shell code:\n\nkubectl get pod qos-demo-2 --namespace=qos-example --output=yaml", "completion": "This shell code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a 
QoS class of Burstable:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21835"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: qos-demo-2-ctr\n resources:", "completion": " limits:\n memory: 200Mi\n requests:\n memory: 100Mi\n ...\nstatus:\n qosClass: Burstable", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21836"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: qos-demo-2-ctr\n resources:\n limits:\n memory: 200Mi\n requests:\n memory: 100Mi\n ...\nstatus:\n qosClass: Burstable", "completion": "This yaml code Configure Quality of Service for Pods | Delete your Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21837"} {"prompt": "Explain this shell code:\n\nkubectl delete pod qos-demo-2 --namespace=qos-example", "completion": "This shell code Configure Quality of Service for Pods | For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21838"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3", "completion": " namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21839"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21840"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-3.yaml --namespace=qos-example", "completion": "This shell code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21841"} {"prompt": "Explain this shell code:\n\nkubectl get pod qos-demo-3 --namespace=qos-example --output=yaml", "completion": "This shell code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of BestEffort:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | The output shows 
that Kubernetes gave the Pod a QoS class of BestEffort:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21842"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n ...", "completion": " resources: {}\n ...\nstatus:\n qosClass: BestEffort", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21843"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n ...\n resources: {}\n ...\nstatus:\n qosClass: BestEffort", "completion": "This yaml code Configure Quality of Service for Pods | Delete your Pod:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21844"} {"prompt": "Explain this shell code:\n\nkubectl delete pod qos-demo-3 --namespace=qos-example", "completion": "This shell code Configure Quality of Service for Pods | Here is a manifest for a Pod that has two Containers. One container specifies a memory request of 200 MiB. The other Container does not specify any requests or limits.", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Here is a manifest for a Pod that has two Containers. One container specifies a memory request of 200 MiB. 
The other Container does not specify any requests or limits.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21845"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:\n", "completion": " - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21846"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:\n\n - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "completion": "This yaml code Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. 
That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21847"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-4.yaml --namespace=qos-example", "completion": "This shell code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21848"} {"prompt": "Explain this shell code:\n\nkubectl get pod qos-demo-4 --namespace=qos-example --output=yaml", "completion": "This shell code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "language": "shell", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21849"} {"prompt": "Complete this yaml code:\n\nspec:\n containers:\n ...\n name: qos-demo-4-ctr-1\n resources:\n requests:", "completion": " memory: 200Mi\n ...\n name: qos-demo-4-ctr-2\n resources: {}\n ...\nstatus:\n qosClass: Burstable", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21850"} {"prompt": "Explain this yaml code:\n\nspec:\n containers:\n ...\n name: qos-demo-4-ctr-1\n resources:\n requests:\n memory: 
200Mi\n ...\n name: qos-demo-4-ctr-2\n resources: {}\n ...\nstatus:\n qosClass: Burstable", "completion": "This yaml code Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21851"} {"prompt": "Explain this bash code:\n\nkubectl --namespace=qos-example get pod qos-demo-4 -o jsonpath='{ .status.qosClass}{\"\\n\"}'", "completion": "This bash code Configure Quality of Service for Pods | Delete your namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | Delete your namespace:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21852"} {"prompt": "Complete this yaml code:\n\npods/qos/qos-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:", "completion": " - name: qos-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21853"} {"prompt": "Explain this yaml code:\n\npods/qos/qos-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Configure Quality of Service for Pods | 
Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21854"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-ctr", "completion": " image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21855"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n cpu: \"700m\"\n requests:\n memory: \"200Mi\"\n cpu: \"700m\"", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21856"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21857"} {"prompt": 
"Explain this bash code:\n\nkubectl get pod qos-demo --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Guaranteed. The output also verifies that the Pod Container has a memory request that matches its memory limit, and it has a CPU request th", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Guaranteed. The output also verifies that the Pod Container has a memory request that matches its memory limit, and it has a CPU request th", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21858"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: 700m", "completion": " memory: 200Mi\n requests:\n cpu: 700m\n memory: 200Mi\n ...\nstatus:\n qosClass: Guaranteed", "language": "text", "type": "code-example", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21859"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n resources:\n limits:\n cpu: 700m\n memory: 200Mi\n requests:\n cpu: 700m\n memory: 200Mi\n ...\nstatus:\n qosClass: Guaranteed", "completion": "This text code Configure Quality of Service for Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21860"} {"prompt": "Explain this bash code:\n\nkubectl delete pod qos-demo --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | A Pod is given a QoS class 
of Burstable if:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | A Pod is given a QoS class of Burstable if:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21861"} {"prompt": "Complete this yaml code:\n\npods/qos/qos-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:", "completion": " containers:\n - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21862"} {"prompt": "Explain this yaml code:\n\npods/qos/qos-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21863"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:\n containers:", "completion": " - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": 
"k8s_00697_code_completion_21864"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-2\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-2-ctr\n image: nginx\n resources:\n limits:\n memory: \"200Mi\"\n requests:\n memory: \"100Mi\"", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21865"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-2.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21866"} {"prompt": "Explain this bash code:\n\nkubectl get pod qos-demo-2 --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21867"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: qos-demo-2-ctr\n resources:", "completion": " limits:\n memory: 200Mi\n requests:\n memory: 100Mi\n ...\nstatus:\n qosClass: Burstable", "language": "text", 
"type": "code-example", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21868"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n - image: nginx\n imagePullPolicy: Always\n name: qos-demo-2-ctr\n resources:\n limits:\n memory: 200Mi\n requests:\n memory: 100Mi\n ...\nstatus:\n qosClass: Burstable", "completion": "This text code Configure Quality of Service for Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21869"} {"prompt": "Explain this bash code:\n\nkubectl delete pod qos-demo-2 --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21870"} {"prompt": "Complete this yaml code:\n\npods/qos/qos-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3", "completion": " namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21871"} {"prompt": "Explain this yaml 
code:\n\npods/qos/qos-pod-3.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21872"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3", "completion": " namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21873"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-3\n namespace: qos-example\nspec:\n containers:\n - name: qos-demo-3-ctr\n image: nginx", "completion": "This yaml code Configure Quality of Service for Pods | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21874"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-3.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": 
"k8s_00697_code_explain_21875"} {"prompt": "Explain this bash code:\n\nkubectl get pod qos-demo-3 --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of BestEffort:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of BestEffort:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21876"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...", "completion": " resources: {}\n ...\nstatus:\n qosClass: BestEffort", "language": "text", "type": "code-example", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21877"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n resources: {}\n ...\nstatus:\n qosClass: BestEffort", "completion": "This text code Configure Quality of Service for Pods | Delete your Pod:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Delete your Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21878"} {"prompt": "Explain this bash code:\n\nkubectl delete pod qos-demo-3 --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | Here is a manifest for a Pod that has two Containers. One container specifies a memory request of 200 MiB. The other Container does not specify any requests or limits.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | Here is a manifest for a Pod that has two Containers. 
One container specifies a memory request of 200 MiB. The other Container does not specify any requests or limits.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21879"} {"prompt": "Complete this yaml code:\n\npods/qos/qos-pod-4.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:", "completion": "\n - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21880"} {"prompt": "Explain this yaml code:\n\npods/qos/qos-pod-4.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:\n\n - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "completion": "This yaml code Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. 
That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21881"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:\n", "completion": " - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "language": "yaml", "type": "pod-definition", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21882"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: qos-demo-4\n namespace: qos-example\nspec:\n containers:\n\n - name: qos-demo-4-ctr-1\n image: nginx\n resources:\n requests:\n memory: \"200Mi\"\n\n - name: qos-demo-4-ctr-2\n image: redis", "completion": "This yaml code Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Quality of Service for Pods | Notice that this Pod meets the criteria for QoS class Burstable. 
That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21883"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-4.yaml --namespace=qos-example", "completion": "This bash code Configure Quality of Service for Pods | View detailed information about the Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | View detailed information about the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21884"} {"prompt": "Explain this bash code:\n\nkubectl get pod qos-demo-4 --namespace=qos-example --output=yaml", "completion": "This bash code Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | The output shows that Kubernetes gave the Pod a QoS class of Burstable:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21885"} {"prompt": "Complete this text code:\n\nspec:\n containers:\n ...\n name: qos-demo-4-ctr-1\n resources:\n requests:", "completion": " memory: 200Mi\n ...\n name: qos-demo-4-ctr-2\n resources: {}\n ...\nstatus:\n qosClass: Burstable", "language": "text", "type": "code-example", "context": "Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_completion_21886"} {"prompt": "Explain this text code:\n\nspec:\n containers:\n ...\n name: qos-demo-4-ctr-1\n resources:\n requests:\n memory: 
200Mi\n ...\n name: qos-demo-4-ctr-2\n resources: {}\n ...\nstatus:\n qosClass: Burstable", "completion": "This text code Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "language": "text", "type": "code-example_explanation", "context": "Configure Quality of Service for Pods | Rather than see all the fields, you can view just the field you need:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21887"} {"prompt": "Explain this bash code:\n\nkubectl --namespace=qos-example get pod qos-demo-4 -o jsonpath='{ .status.qosClass}{\"\\n\"}'", "completion": "This bash code Configure Quality of Service for Pods | Delete your namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Quality of Service for Pods | Delete your namespace:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/", "id": "k8s_00697_code_explain_21888"} {"prompt": "Complete this text code:\n\n# Add kubernetes-dashboard repository\nhelm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/", "completion": "# Deploy a Helm Release named \"kubernetes-dashboard\" using the kubernetes-dashboard chart\nhelm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard", "language": "text", "type": "code-example", "context": "Deploy and Access the Kubernetes Dashboard | To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. 
To create a token for this demo, you ca", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_completion_21889"} {"prompt": "Explain this text code:\n\n# Add kubernetes-dashboard repository\nhelm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/\n# Deploy a Helm Release named \"kubernetes-dashboard\" using the kubernetes-dashboard chart\nhelm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard", "completion": "This text code Deploy and Access the Kubernetes Dashboard | To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you ca", "language": "text", "type": "code-example_explanation", "context": "Deploy and Access the Kubernetes Dashboard | To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. 
To create a token for this demo, you ca", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_explain_21890"} {"prompt": "Explain this bash code:\n\nkubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443", "completion": "This bash code Deploy and Access the Kubernetes Dashboard | Kubectl will make Dashboard available at https://localhost:8443.", "language": "bash", "type": "kubectl-command_explanation", "context": "Deploy and Access the Kubernetes Dashboard | Kubectl will make Dashboard available at https://localhost:8443.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_explain_21891"} {"prompt": "Complete this text code:\n\nrelease=1.0\ntier=frontend", "completion": "environment=pod\ntrack=stable", "language": "text", "type": "code-example", "context": "Deploy and Access the Kubernetes Dashboard | Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. They let you partition resources into logically named groups", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_completion_21892"} {"prompt": "Explain this text code:\n\nrelease=1.0\ntier=frontend\nenvironment=pod\ntrack=stable", "completion": "This text code Deploy and Access the Kubernetes Dashboard | Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. They let you partition resources into logically named groups", "language": "text", "type": "code-example_explanation", "context": "Deploy and Access the Kubernetes Dashboard | Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. 
They let you partition resources into logically named groups", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_explain_21893"} {"prompt": "Complete this shell code:\n\n# Add kubernetes-dashboard repository\nhelm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/", "completion": "# Deploy a Helm Release named \"kubernetes-dashboard\" using the kubernetes-dashboard chart\nhelm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard", "language": "shell", "type": "code-example", "context": "Deploy and Access the Kubernetes Dashboard | To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you ca", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_completion_21894"} {"prompt": "Explain this shell code:\n\n# Add kubernetes-dashboard repository\nhelm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/\n# Deploy a Helm Release named \"kubernetes-dashboard\" using the kubernetes-dashboard chart\nhelm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard", "completion": "This shell code Deploy and Access the Kubernetes Dashboard | To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you ca", "language": "shell", "type": "code-example_explanation", "context": "Deploy and Access the Kubernetes Dashboard | To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. 
To create a token for this demo, you ca", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_explain_21895"} {"prompt": "Complete this conf code:\n\nrelease=1.0\ntier=frontend", "completion": "environment=pod\ntrack=stable", "language": "conf", "type": "code-example", "context": "Deploy and Access the Kubernetes Dashboard | Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. They let you partition resources into logically named groups", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_completion_21897"} {"prompt": "Explain this conf code:\n\nrelease=1.0\ntier=frontend\nenvironment=pod\ntrack=stable", "completion": "This conf code Deploy and Access the Kubernetes Dashboard | Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. They let you partition resources into logically named groups", "language": "conf", "type": "code-example_explanation", "context": "Deploy and Access the Kubernetes Dashboard | Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. 
They let you partition resources into logically named groups", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/", "id": "k8s_00698_code_explain_21898"} {"prompt": "Explain this text code:\n\nnameserver 10.32.0.10\nsearch .svc.cluster.local svc.cluster.local cluster.local\noptions ndots:5", "completion": "This text code DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21901"} {"prompt": "Explain this text code:\n\n..pod.", "completion": "This text code DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21902"} {"prompt": "Explain this text code:\n\n...svc.", "completion": "This text code DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, 
and the domain name for your cluster is cluster.local, then the Pod would have thi", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21903"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain", "completion": " containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21904"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: busybox-subdomain\nspec:\n selector:\n name: busybox\n clusterIP: None\n ports:\n - name: foo # name is not required for single-port Services\n port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox1\n labels:\n name: busybox\nspec:\n hostname: busybox-1\n subdomain: busybox-subdomain\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox2\n labels:\n name: busybox\nspec:\n hostname: busybox-2\n subdomain: busybox-subdomain\n containers:\n 
- image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n name: busybox", "completion": "This yaml code DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21905"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28", "completion": " command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21906"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: busybox\n namespace: default\nspec:\n containers:\n - image: busybox:1.28\n command:\n - sleep\n - \"3600\"\n imagePullPolicy: IfNotPresent\n name: busybox\n restartPolicy: Always\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | Pod's DNS Config allows users more control on 
the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21907"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"", "completion": " dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21908"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n namespace: default\n name: dns-example\nspec:\n containers:\n - name: test\n image: nginx\n dnsPolicy: \"None\"\n dnsConfig:\n nameservers:\n - 192.0.2.1 # this is an example\n searches:\n - ns1.svc.cluster-domain.example\n - my.dns.search.suffix\n options:\n - name: ndots\n value: \"2\"\n - name: edns0", "completion": "This yaml code DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21909"} {"prompt": "Explain this text code:\n\nnameserver 192.0.2.1\nsearch ns1.svc.cluster-domain.example my.dns.search.suffix\noptions ndots:2 edns0", "completion": "This text code DNS for Services and Pods | For IPv6 setup, search path and 
name server should be set up like this:", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21910"} {"prompt": "Explain this bash code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This bash code DNS for Services and Pods | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21911"} {"prompt": "Explain this text code:\n\nnameserver 2001:db8:30::a\nsearch default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example\noptions ndots:5", "completion": "This text code DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. 
This limit applies to the node's resolver", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21912"} {"prompt": "Explain this text code:\n\nnameserver 10.32.0.10\nsearch .svc.cluster.local svc.cluster.local cluster.local\noptions ndots:5", "completion": "This text code DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | In summary, a Pod in the test namespace can successfully resolve either data.prod or data.prod.svc.cluster.local.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21913"} {"prompt": "Explain this text code:\n\n_port-name._port-protocol.my-svc.my-namespace.svc.cluster-domain.example", "completion": "This text code DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21914"} {"prompt": "Explain this text code:\n\nhostname.my-svc.my-namespace.svc.cluster-domain.example", "completion": "This text code DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | Kube-DNS versions, prior to the implementation of the DNS specification, had the following DNS resolution:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": 
"k8s_00699_code_explain_21915"}
{"prompt": "Explain this text code:\n\n..pod.", "completion": "This text code DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21916"}
{"prompt": "Explain this text code:\n\n...svc.", "completion": "This text code DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | For example, if a Pod in the cafe namespace has the IP address 172.17.0.3, is an endpoint of a Service named barista, and the domain name for your cluster is cluster.local, then the Pod would have thi", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21917"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: busybox-subdomain\nspec:\n  selector:\n    name: busybox\n  clusterIP: None\n  ports:\n    - name: foo # name is not required for single-port Services\n      port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox1\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-1\n  subdomain: busybox-subdomain", "completion": "  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox2\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-2\n  subdomain: busybox-subdomain\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21918"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: busybox-subdomain\nspec:\n  selector:\n    name: busybox\n  clusterIP: None\n  ports:\n    - name: foo # name is not required for single-port Services\n      port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox1\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-1\n  subdomain: busybox-subdomain\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox2\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-2\n  subdomain: busybox-subdomain\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox", "completion": "This yaml code DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21919"}
{"prompt":
"Explain this text code:\n\n\"busybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example\"", "completion": "This text code DNS for Services and Pods | An EndpointSlice can specify the DNS hostname for any endpoint addresses, along with its IP.", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | An EndpointSlice can specify the DNS hostname for any endpoint addresses, along with its IP.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21920"} {"prompt": "Explain this text code:\n\nbusybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code DNS for Services and Pods | When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name busybox-1.busybox-subdomain", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name busybox-1.busybox-subdomain", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21921"} {"prompt": "Explain this text code:\n\nbusybox-1.busybox-subdomain.my-namespace.svc.cluster-domain.example", "completion": "This text code DNS for Services and Pods | When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname --fqdn return the Pod's FQD", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. 
In this case, both hostname and hostname --fqdn return the Pod's FQD", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21922"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox\n  namespace: default\nspec:\n  containers:\n    - image: busybox:1.28", "completion": "      command:\n        - sleep\n        - \"3600\"\n      imagePullPolicy: IfNotPresent\n      name: busybox\n  restartPolicy: Always\n  hostNetwork: true\n  dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21923"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox\n  namespace: default\nspec:\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      imagePullPolicy: IfNotPresent\n      name: busybox\n  restartPolicy: Always\n  hostNetwork: true\n  dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21924"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  namespace: default\n  name: dns-example\nspec:\n  containers:\n    - name: test\n      image: nginx\n  dnsPolicy: \"None\"", "completion": "  dnsConfig:\n    nameservers:\n      - 192.0.2.1 # this is an example\n    searches:\n      - ns1.svc.cluster-domain.example\n      - my.dns.search.suffix\n    options:\n      - name: ndots\n        value: \"2\"\n      - name: edns0", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21925"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  namespace: default\n  name: dns-example\nspec:\n  containers:\n    - name: test\n      image: nginx\n  dnsPolicy: \"None\"\n  dnsConfig:\n    nameservers:\n      - 192.0.2.1 # this is an example\n    searches:\n      - ns1.svc.cluster-domain.example\n      - my.dns.search.suffix\n    options:\n      - name: ndots\n        value: \"2\"\n      - name: edns0", "completion": "This yaml code DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21926"}
{"prompt": "Explain this text code:\n\nnameserver 192.0.2.1\nsearch ns1.svc.cluster-domain.example my.dns.search.suffix\noptions ndots:2 edns0", "completion": "This text code DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | For IPv6 setup, search path and name server should be set up like this:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21927"}
{"prompt": "Explain this shell code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This shell code DNS for Services and Pods | The output is similar to this:", "language": "shell", "type":
"code-example_explanation", "context": "DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21928"}
{"prompt": "Explain this text code:\n\nnameserver 2001:db8:30::a\nsearch default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example\noptions ndots:5", "completion": "This text code DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "language": "text", "type": "code-example_explanation", "context": "DNS for Services and Pods | Kubernetes itself does not limit the DNS Config until the length of the search domain list exceeds 32 or the total length of all search domains exceeds 2048. This limit applies to the node's resolver", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21929"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: busybox-subdomain\nspec:\n  selector:\n    name: busybox\n  clusterIP: None\n  ports:\n    - name: foo # name is not required for single-port Services\n      port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox1\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-1\n  subdomain: busybox-subdomain", "completion": "  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox2\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-2\n  subdomain: busybox-subdomain\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21930"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: busybox-subdomain\nspec:\n  selector:\n    name: busybox\n  clusterIP: None\n  ports:\n    - name: foo # name is not required for single-port Services\n      port: 1234\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox1\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-1\n  subdomain: busybox-subdomain\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox2\n  labels:\n    name: busybox\nspec:\n  hostname: busybox-2\n  subdomain: busybox-subdomain\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      name: busybox", "completion": "This yaml code DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | Given the above Service \"busybox-subdomain\" and the Pods which set spec.subdomain to \"busybox-subdomain\", the first Pod will see its own FQDN as \"busybox-1.busybox-subdomain.my-namespace.svc.cluster-d", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21931"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox\n  namespace: default\nspec:\n  containers:\n    - image: busybox:1.28", "completion": "      command:\n        - sleep\n        - \"3600\"\n      imagePullPolicy: IfNotPresent\n      name: busybox\n  restartPolicy: Always\n  hostNetwork: true\n  dnsPolicy: ClusterFirstWithHostNet", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21932"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox\n  namespace: default\nspec:\n  containers:\n    - image: busybox:1.28\n      command:\n        - sleep\n        - \"3600\"\n      imagePullPolicy: IfNotPresent\n      name: busybox\n  restartPolicy: Always\n  hostNetwork: true\n  dnsPolicy: ClusterFirstWithHostNet", "completion": "This yaml code DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | Pod's DNS Config allows users more control on the DNS settings for a Pod.", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21933"}
{"prompt": "Complete this yaml code:\n\nservice/networking/custom-dns.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  namespace: default\n  name: dns-example\nspec:\n  containers:\n    - name: test\n      image: nginx", "completion": "  dnsPolicy: \"None\"\n  dnsConfig:\n    nameservers:\n      - 192.0.2.1 # this is an example\n    searches:\n      - ns1.svc.cluster-domain.example\n      - my.dns.search.suffix\n    options:\n      - name: ndots\n        value: \"2\"\n      - name: edns0", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21934"}
{"prompt": "Explain this yaml code:\n\nservice/networking/custom-dns.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  namespace: default\n  name: dns-example\nspec:\n  containers:\n    - name: test\n      image: nginx\n  dnsPolicy: \"None\"\n  dnsConfig:\n    nameservers:\n      - 192.0.2.1 # this is an example\n    searches:\n      - ns1.svc.cluster-domain.example\n      - my.dns.search.suffix\n    options:\n      - name: ndots\n        value: \"2\"\n      - name: edns0", "completion": "This yaml code DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21935"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  namespace: default\n  name: dns-example\nspec:\n  containers:\n    - name: test\n      image: nginx\n  dnsPolicy: \"None\"", "completion": "  dnsConfig:\n    nameservers:\n      - 192.0.2.1 # this is an example\n    searches:\n      - ns1.svc.cluster-domain.example\n      - my.dns.search.suffix\n    options:\n      - name: ndots\n        value: \"2\"\n      - name: edns0", "language": "yaml", "type": "pod-definition", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_completion_21936"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  namespace: default\n  name: dns-example\nspec:\n  containers:\n    - name: test\n      image: nginx\n  dnsPolicy: \"None\"\n  dnsConfig:\n    nameservers:\n      - 192.0.2.1 # this is an example\n    searches:\n      - ns1.svc.cluster-domain.example\n      - my.dns.search.suffix\n    options:\n      - name: ndots\n        value: \"2\"\n      - name: edns0", "completion": "This yaml code DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "language": "yaml", "type": "pod-definition_explanation", "context": "DNS for Services and Pods | When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21937"}
{"prompt": "Explain this bash code:\n\nkubectl exec -it dns-example -- cat /etc/resolv.conf", "completion": "This bash code DNS for Services and Pods | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "DNS for Services and Pods | The output is similar to this:", "source": "https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/", "id": "k8s_00699_code_explain_21938"}
{"prompt": "Explain this bash code:\n\nkubectl get --raw /metrics | grep kubernetes_feature_enabled | grep SidecarContainers", "completion": "This bash code Adopting Sidecar Containers | For the individual node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Adopting Sidecar Containers | For the individual node:", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21939"}
{"prompt": "Explain this bash code:\n\nkubectl get --raw /api/v1/nodes//proxy/metrics | grep kubernetes_feature_enabled | grep SidecarContainers", "completion": "This bash code Adopting Sidecar Containers | If you see something like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Adopting Sidecar Containers | If you see something like this:", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21940"}
{"prompt": "Explain this text code:\n\nkubernetes_feature_enabled{name=\"SidecarContainers\",stage=\"BETA\"} 1", "completion": "This text code Adopting Sidecar Containers | it means that the feature is enabled.", "language": "text", "type":
"code-example_explanation", "context": "Adopting Sidecar Containers | it means that the feature is enabled.", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21941"} {"prompt": "Explain this shell code:\n\nkubectl get --raw /metrics | grep kubernetes_feature_enabled | grep SidecarContainers", "completion": "This shell code Adopting Sidecar Containers | For the individual node:", "language": "shell", "type": "code-example_explanation", "context": "Adopting Sidecar Containers | For the individual node:", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21942"} {"prompt": "Explain this shell code:\n\nkubectl get --raw /api/v1/nodes//proxy/metrics | grep kubernetes_feature_enabled | grep SidecarContainers", "completion": "This shell code Adopting Sidecar Containers | If you see something like this:", "language": "shell", "type": "code-example_explanation", "context": "Adopting Sidecar Containers | If you see something like this:", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21943"} {"prompt": "Explain this text code:\n\nkubernetes_feature_enabled{name=\"SidecarContainers\",stage=\"BETA\"} 1", "completion": "This text code Adopting Sidecar Containers | it means that the feature is enabled.", "language": "text", "type": "code-example_explanation", "context": "Adopting Sidecar Containers | it means that the feature is enabled.", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21944"} {"prompt": "Explain this bash code:\n\nkubectl get --raw /metrics | grep kubernetes_feature_enabled | grep SidecarContainers", "completion": "This bash code Adopting Sidecar Containers | For the individual node:", "language": "bash", "type": "kubectl-command_explanation", "context": "Adopting Sidecar Containers | 
For the individual node:", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21945"}
{"prompt": "Explain this bash code:\n\nkubectl get --raw /api/v1/nodes//proxy/metrics | grep kubernetes_feature_enabled | grep SidecarContainers", "completion": "This bash code Adopting Sidecar Containers | If you see something like this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Adopting Sidecar Containers | If you see something like this:", "source": "https://kubernetes.io/docs/tutorials/configuration/pod-sidecar-containers/", "id": "k8s_00700_code_explain_21946"}
{"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  labels:\n    app: nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:", "completion": "      terminationGracePeriodSeconds: 120 # extra long grace period\n      containers:\n        - name: nginx\n          image: nginx:latest\n          ports:\n            - containerPort: 80\n          lifecycle:\n            preStop:\n              exec:\n                # Real life termination may take any time up to terminationGracePeriodSeconds.\n                # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n                # at 120 seconds container will be forcibly terminated.\n                # Note, all this time nginx will keep processing requests.\n                command: [\n                  \"/bin/sh\", \"-c\", \"sleep 180\"\n                ]", "language": "yaml", "type": "deployment-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21947"}
{"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  labels:\n    app: nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      terminationGracePeriodSeconds: 120 # extra long grace period\n      containers:\n        - name: nginx\n          image: nginx:latest\n          ports:\n            - containerPort: 80\n          lifecycle:\n            preStop:\n              exec:\n                # Real life termination may take any time up to terminationGracePeriodSeconds.\n                # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n                # at 120 seconds container will be forcibly terminated.\n                # Note, all this time nginx will keep processing requests.\n                command: [\n                  \"/bin/sh\", \"-c\", \"sleep 180\"\n                ]", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21948"}
{"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-service\nspec:", "completion": "  selector:\n    app: nginx\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80", "language": "yaml", "type": "service-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21949"}
{"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-service\nspec:\n  selector:\n    app: nginx\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type":
"service-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21950"} {"prompt": "Explain this bash code:\n\nkubectl apply -f pod-with-graceful-termination.yaml\nkubectl apply -f explore-graceful-termination-nginx.yaml", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | Once the Pod and Service are running, you can get the name of any associated EndpointSlices:", "language": "bash", "type": "kubectl-command_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Once the Pod and Service are running, you can get the name of any associated EndpointSlices:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21951"} {"prompt": "Explain this text code:\n\nNAME ADDRESSTYPE PORTS ENDPOINTS AGE\nnginx-service-6tjbr IPv4 80 10.12.1.199,10.12.1.201 22m", "completion": "This text code Explore Termination Behavior for Pods And Their Endpoints | You can see its status, and validate that there is one endpoint registered:", "language": "text", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | You can see its status, and validate that there is one endpoint registered:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21952"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -o json -l kubernetes.io/service-name=nginx-service", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Explore Termination Behavior for Pods And 
Their Endpoints | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21953"} {"prompt": "Complete this json code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [", "completion": " \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": true,\n \"serving\": true,\n \"terminating\": false", "language": "json", "type": "api-response", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now let's terminate the Pod and validate that the Pod is being terminated respecting the graceful termination period configuration:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21954"} {"prompt": "Explain this json code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [\n \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": true,\n \"serving\": true,\n \"terminating\": false", "completion": "This json code Explore Termination Behavior for Pods And Their Endpoints | Now let's terminate the Pod and validate that the Pod is being terminated respecting the graceful termination period configuration:", "language": "json", "type": "api-response_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now let's terminate the Pod and validate that the Pod is being terminated respecting the graceful termination period configuration:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21955"} {"prompt": "Explain this bash code:\n\nkubectl delete pod nginx-deployment-7768647bf9-b4b9s", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | All pods:", "language": "bash", "type": "kubectl-command_explanation", 
"context": "Explore Termination Behavior for Pods And Their Endpoints | All pods:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21956"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7768647bf9-b4b9s 1/1 Terminating 0 4m1s\nnginx-deployment-7768647bf9-rkxlw 1/1 Running 0 8s", "completion": "This text code Explore Termination Behavior for Pods And Their Endpoints | You can see that the new pod got scheduled.", "language": "text", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | You can see that the new pod got scheduled.", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21957"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslice -o json nginx-service-6tjbr", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21958"} {"prompt": "Complete this json code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [\n \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": false,\n \"serving\": true,\n \"terminating\": true\n },\n \"nodeName\": \"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-b4b9s\",\n \"namespace\": \"default\",\n \"uid\": \"66fa831c-7eb2-407f-bd2c-f96dfe841478\"", "completion": " },\n \"zone\": \"us-central1-c\"\n },\n {\n \"addresses\": [\n \"10.12.1.202\"\n ],\n \"conditions\": {\n \"ready\": true,\n 
\"serving\": true,\n \"terminating\": false\n },\n \"nodeName\": \"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-rkxlw\",\n \"namespace\": \"default\",\n \"uid\": \"722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35\"\n },\n \"zone\": \"us-central1-c\"", "language": "json", "type": "api-response", "context": "Explore Termination Behavior for Pods And Their Endpoints | This allows applications to communicate their state during termination and clients (such as load balancers) to implement connection draining functionality. These clients may detect terminating endpoin", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21959"} {"prompt": "Explain this json code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [\n \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": false,\n \"serving\": true,\n \"terminating\": true\n },\n \"nodeName\": \"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-b4b9s\",\n \"namespace\": \"default\",\n \"uid\": \"66fa831c-7eb2-407f-bd2c-f96dfe841478\"\n },\n \"zone\": \"us-central1-c\"\n },\n {\n \"addresses\": [\n \"10.12.1.202\"\n ],\n \"conditions\": {\n \"ready\": true,\n \"serving\": true,\n \"terminating\": false\n },\n \"nodeName\": \"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-rkxlw\",\n \"namespace\": \"default\",\n \"uid\": \"722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35\"\n },\n \"zone\": \"us-central1-c\"", "completion": "This json code Explore Termination Behavior for Pods And Their Endpoints | This allows applications to communicate their state during termination and clients (such as load balancers) to implement connection draining functionality. 
These clients may detect terminating endpoin", "language": "json", "type": "api-response_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | This allows applications to communicate their state during termination and clients (such as load balancers) to implement connection draining functionality. These clients may detect terminating endpoin", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21960"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:", "completion": " terminationGracePeriodSeconds: 120 # extra long grace period\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n lifecycle:\n preStop:\n exec:\n # Real life termination may take any time up to terminationGracePeriodSeconds.\n # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n # at 120 seconds container will be forcibly terminated.\n # Note, all this time nginx will keep processing requests.\n command: [\n \"/bin/sh\", \"-c\", \"sleep 180\"\n ]", "language": "yaml", "type": "deployment-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21961"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n terminationGracePeriodSeconds: 120 # extra long grace period\n containers:\n - name: nginx\n image: 
nginx:latest\n ports:\n - containerPort: 80\n lifecycle:\n preStop:\n exec:\n # Real life termination may take any time up to terminationGracePeriodSeconds.\n # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n # at 120 seconds container will be forcibly terminated.\n # Note, all this time nginx will keep processing requests.\n command: [\n \"/bin/sh\", \"-c\", \"sleep 180\"\n ]", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21962"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:", "completion": " selector:\n app: nginx\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80", "language": "yaml", "type": "service-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21963"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app: nginx\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type": "service-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above 
files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21964"} {"prompt": "Explain this shell code:\n\nkubectl apply -f pod-with-graceful-termination.yaml\nkubectl apply -f explore-graceful-termination-nginx.yaml", "completion": "This shell code Explore Termination Behavior for Pods And Their Endpoints | Once the Pod and Service are running, you can get the name of any associated EndpointSlices:", "language": "shell", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Once the Pod and Service are running, you can get the name of any associated EndpointSlices:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21965"} {"prompt": "Explain this none code:\n\nNAME ADDRESSTYPE PORTS ENDPOINTS AGE\nnginx-service-6tjbr IPv4 80 10.12.1.199,10.12.1.201 22m", "completion": "This none code Explore Termination Behavior for Pods And Their Endpoints | You can see its status, and validate that there is one endpoint registered:", "language": "none", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | You can see its status, and validate that there is one endpoint registered:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21966"} {"prompt": "Explain this shell code:\n\nkubectl get endpointslices -o json -l kubernetes.io/service-name=nginx-service", "completion": "This shell code Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": 
"k8s_00702_code_explain_21967"} {"prompt": "Complete this none code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [", "completion": " \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": true,\n \"serving\": true,\n \"terminating\": false", "language": "none", "type": "code-example", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now let's terminate the Pod and validate that the Pod is being terminated respecting the graceful termination period configuration:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21968"} {"prompt": "Explain this none code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [\n \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": true,\n \"serving\": true,\n \"terminating\": false", "completion": "This none code Explore Termination Behavior for Pods And Their Endpoints | Now let's terminate the Pod and validate that the Pod is being terminated respecting the graceful termination period configuration:", "language": "none", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now let's terminate the Pod and validate that the Pod is being terminated respecting the graceful termination period configuration:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21969"} {"prompt": "Explain this shell code:\n\nkubectl delete pod nginx-deployment-7768647bf9-b4b9s", "completion": "This shell code Explore Termination Behavior for Pods And Their Endpoints | All pods:", "language": "shell", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | All pods:", "source": 
"https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21970"} {"prompt": "Explain this none code:\n\nNAME READY STATUS RESTARTS AGE\nnginx-deployment-7768647bf9-b4b9s 1/1 Terminating 0 4m1s\nnginx-deployment-7768647bf9-rkxlw 1/1 Running 0 8s", "completion": "This none code Explore Termination Behavior for Pods And Their Endpoints | You can see that the new pod got scheduled.", "language": "none", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | You can see that the new pod got scheduled.", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21971"} {"prompt": "Explain this shell code:\n\nkubectl get endpointslice -o json nginx-service-6tjbr", "completion": "This shell code Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21972"} {"prompt": "Complete this none code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [\n \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": false,\n \"serving\": true,\n \"terminating\": true\n },\n \"nodeName\": \"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-b4b9s\",\n \"namespace\": \"default\",\n \"uid\": \"66fa831c-7eb2-407f-bd2c-f96dfe841478\"", "completion": " },\n \"zone\": \"us-central1-c\"\n },\n {\n \"addresses\": [\n \"10.12.1.202\"\n ],\n \"conditions\": {\n \"ready\": true,\n \"serving\": true,\n \"terminating\": false\n },\n \"nodeName\": 
\"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-rkxlw\",\n \"namespace\": \"default\",\n \"uid\": \"722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35\"\n },\n \"zone\": \"us-central1-c\"", "language": "none", "type": "code-example", "context": "Explore Termination Behavior for Pods And Their Endpoints | This allows applications to communicate their state during termination and clients (such as load balancers) to implement connection draining functionality. These clients may detect terminating endpoin", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21973"} {"prompt": "Explain this none code:\n\n{\n \"addressType\": \"IPv4\",\n \"apiVersion\": \"discovery.k8s.io/v1\",\n \"endpoints\": [\n {\n \"addresses\": [\n \"10.12.1.201\"\n ],\n \"conditions\": {\n \"ready\": false,\n \"serving\": true,\n \"terminating\": true\n },\n \"nodeName\": \"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-b4b9s\",\n \"namespace\": \"default\",\n \"uid\": \"66fa831c-7eb2-407f-bd2c-f96dfe841478\"\n },\n \"zone\": \"us-central1-c\"\n },\n {\n \"addresses\": [\n \"10.12.1.202\"\n ],\n \"conditions\": {\n \"ready\": true,\n \"serving\": true,\n \"terminating\": false\n },\n \"nodeName\": \"gke-main-default-pool-dca1511c-d17b\",\n \"targetRef\": {\n \"kind\": \"Pod\",\n \"name\": \"nginx-deployment-7768647bf9-rkxlw\",\n \"namespace\": \"default\",\n \"uid\": \"722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35\"\n },\n \"zone\": \"us-central1-c\"", "completion": "This none code Explore Termination Behavior for Pods And Their Endpoints | This allows applications to communicate their state during termination and clients (such as load balancers) to implement connection draining functionality. 
These clients may detect terminating endpoin", "language": "none", "type": "code-example_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | This allows applications to communicate their state during termination and clients (such as load balancers) to implement connection draining functionality. These clients may detect terminating endpoin", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21974"} {"prompt": "Complete this yaml code:\n\nservice/pod-with-graceful-termination.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx", "completion": " spec:\n terminationGracePeriodSeconds: 120 # extra long grace period\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n lifecycle:\n preStop:\n exec:\n # Real life termination may take any time up to terminationGracePeriodSeconds.\n # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n # at 120 seconds container will be forcibly terminated.\n # Note, all this time nginx will keep processing requests.\n command: [\n \"/bin/sh\", \"-c\", \"sleep 180\"\n ]", "language": "yaml", "type": "deployment-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21975"} {"prompt": "Explain this yaml code:\n\nservice/pod-with-graceful-termination.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n 
terminationGracePeriodSeconds: 120 # extra long grace period\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n lifecycle:\n preStop:\n exec:\n # Real life termination may take any time up to terminationGracePeriodSeconds.\n # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n # at 120 seconds container will be forcibly terminated.\n # Note, all this time nginx will keep processing requests.\n command: [\n \"/bin/sh\", \"-c\", \"sleep 180\"\n ]", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21976"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:", "completion": " terminationGracePeriodSeconds: 120 # extra long grace period\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n lifecycle:\n preStop:\n exec:\n # Real life termination may take any time up to terminationGracePeriodSeconds.\n # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n # at 120 seconds container will be forcibly terminated.\n # Note, all this time nginx will keep processing requests.\n command: [\n \"/bin/sh\", \"-c\", \"sleep 180\"\n ]", "language": "yaml", "type": "deployment-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using 
the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21977"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n terminationGracePeriodSeconds: 120 # extra long grace period\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n lifecycle:\n preStop:\n exec:\n # Real life termination may take any time up to terminationGracePeriodSeconds.\n # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,\n # at 120 seconds container will be forcibly terminated.\n # Note, all this time nginx will keep processing requests.\n command: [\n \"/bin/sh\", \"-c\", \"sleep 180\"\n ]", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21978"} {"prompt": "Complete this yaml code:\n\nservice/explore-graceful-termination-nginx.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:", "completion": " selector:\n app: nginx\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80", "language": "yaml", "type": "service-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": 
"k8s_00702_code_completion_21979"} {"prompt": "Explain this yaml code:\n\nservice/explore-graceful-termination-nginx.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app: nginx\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type": "service-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21980"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:", "completion": " selector:\n app: nginx\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80", "language": "yaml", "type": "service-definition", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_completion_21981"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n selector:\n app: nginx\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80", "completion": "This yaml code Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "language": "yaml", "type": "service-definition_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Now create the Deployment Pod and Service using the above files:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21982"} {"prompt": 
"Explain this bash code:\n\nkubectl apply -f pod-with-graceful-termination.yaml\nkubectl apply -f explore-graceful-termination-nginx.yaml", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | Once the Pod and Service are running, you can get the name of any associated EndpointSlices:", "language": "bash", "type": "kubectl-command_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | Once the Pod and Service are running, you can get the name of any associated EndpointSlices:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21983"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslices -o json -l kubernetes.io/service-name=nginx-service", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21984"} {"prompt": "Explain this bash code:\n\nkubectl delete pod nginx-deployment-7768647bf9-b4b9s", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | All pods:", "language": "bash", "type": "kubectl-command_explanation", "context": "Explore Termination Behavior for Pods And Their Endpoints | All pods:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21985"} {"prompt": "Explain this bash code:\n\nkubectl get endpointslice -o json nginx-service-6tjbr", "completion": "This bash code Explore Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Explore 
Termination Behavior for Pods And Their Endpoints | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/", "id": "k8s_00702_code_explain_21986"} {"prompt": "Explain this bash code:\n\nkubectl get pod pod1\nkubectl get pods pod1\nkubectl get po pod1", "completion": "This bash code Command line tool (kubectl) | NAME: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | NAME: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21987"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=<namespace>", "completion": "This bash code Command line tool (kubectl) | The following table includes short descriptions and the general syntax for all of the kubectl operations:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | The following table includes short descriptions and the general syntax for all of the kubectl operations:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21988"} {"prompt": "Explain this bash code:\n\nkubectl get pods -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion", "completion": "This bash code Command line tool (kubectl) | Template file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | Template file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21989"} {"prompt": "Explain this bash code:\n\nkubectl get pods -o custom-columns-file=template.txt", 
"completion": "This bash code Command line tool (kubectl) | where the template.txt file contains:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | where the template.txt file contains:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21990"} {"prompt": "Explain this text code:\n\nNAME RSRC\nmetadata.name metadata.resourceVersion", "completion": "This text code Command line tool (kubectl) | The result of running either command is similar to:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | The result of running either command is similar to:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21991"} {"prompt": "Explain this bash code:\n\nkubectl [command] [TYPE] [NAME] --sort-by=", "completion": "This bash code Command line tool (kubectl) | To print a list of pods sorted by name, you run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To print a list of pods sorted by name, you run:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21992"} {"prompt": "Complete this text code:\n\n# Create a service using the definition in example-service.yaml.\nkubectl apply -f example-service.yaml\n\n# Create a replication controller using the definition in example-controller.yaml.", "completion": "kubectl apply -f example-controller.yaml\n\n# Create the objects that are defined in any .yaml, .yml, or .json file within the directory.\nkubectl apply -f ", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl get - List one or more resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_21993"} {"prompt": "Explain this text code:\n\n# Create a service using the definition in example-service.yaml.\nkubectl apply -f 
example-service.yaml\n\n# Create a replication controller using the definition in example-controller.yaml.\nkubectl apply -f example-controller.yaml\n\n# Create the objects that are defined in any .yaml, .yml, or .json file within the directory.\nkubectl apply -f <directory>", "completion": "This text code Command line tool (kubectl) | kubectl get - List one or more resources.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl get - List one or more resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21994"} {"prompt": "Complete this text code:\n\n# List all pods in plain-text output format.\nkubectl get pods\n\n# List all pods in plain-text output format and include additional information (such as node name).\nkubectl get pods -o wide\n\n# List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.\nkubectl get replicationcontroller <rc-name>", "completion": "\n# List all replication controllers and services together in plain-text output format.\nkubectl get rc,services\n\n# List all daemon sets in plain-text output format.\nkubectl get ds\n\n# List all pods running on node server01\nkubectl get pods --field-selector=spec.nodeName=server01", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_21995"} {"prompt": "Explain this text code:\n\n# List all pods in plain-text output format.\nkubectl get pods\n\n# List all pods in plain-text output format and include additional information (such as node name).\nkubectl get pods -o wide\n\n# List the replication controller with the specified name in plain-text output format. 
Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.\nkubectl get replicationcontroller <rc-name>\n\n# List all replication controllers and services together in plain-text output format.\nkubectl get rc,services\n\n# List all daemon sets in plain-text output format.\nkubectl get ds\n\n# List all pods running on node server01\nkubectl get pods --field-selector=spec.nodeName=server01", "completion": "This text code Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21996"} {"prompt": "Complete this text code:\n\n# Display the details of the node with name <node-name>.\nkubectl describe nodes <node-name>\n\n# Display the details of the pod with name <pod-name>.\nkubectl describe pods/<pod-name>\n", "completion": "# Display the details of all the pods that are managed by the replication controller named <rc-name>
.\n# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.\nkubectl describe pods \n\n# Describe all pods\nkubectl describe pods", "completion": "This text code Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_21998"} {"prompt": "Complete this text code:\n\n# Delete a pod using the type and name specified in the pod.yaml file.\nkubectl delete -f pod.yaml\n\n# Delete all the pods and services that have the label '='.", "completion": "kubectl delete pods,services -l =\n\n# Delete all pods, including uninitialized ones.\nkubectl delete pods --all", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_21999"} {"prompt": "Explain this text code:\n\n# Delete a pod using the type and name specified in the pod.yaml file.\nkubectl delete -f pod.yaml\n\n# Delete all the pods and services that have the label '='.\nkubectl delete pods,services -l =\n\n# Delete all pods, including uninitialized ones.\nkubectl delete pods --all", "completion": "This text code Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": 
"k8s_00703_code_explain_22000"} {"prompt": "Complete this text code:\n\n# Get output from running 'date' from pod . By default, output is from the first container.\nkubectl exec -- date\n\n# Get output from running 'date' in container of pod .", "completion": "kubectl exec -c -- date\n\n# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container.\nkubectl exec -ti -- /bin/bash", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22001"} {"prompt": "Explain this text code:\n\n# Get output from running 'date' from pod . By default, output is from the first container.\nkubectl exec -- date\n\n# Get output from running 'date' in container of pod .\nkubectl exec -c -- date\n\n# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container.\nkubectl exec -ti -- /bin/bash", "completion": "This text code Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22002"} {"prompt": "Complete this text code:\n\n# Return a snapshot of the logs from pod .\nkubectl logs ", "completion": "\n# Start streaming the logs from pod . 
This is similar to the 'tail -f' Linux command.\nkubectl logs -f ", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22003"} {"prompt": "Explain this text code:\n\n# Return a snapshot of the logs from pod .\nkubectl logs \n\n# Start streaming the logs from pod . This is similar to the 'tail -f' Linux command.\nkubectl logs -f ", "completion": "This text code Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22004"} {"prompt": "Complete this text code:\n\n# Diff resources included in \"pod.json\".\nkubectl diff -f pod.json", "completion": "\n# Diff file read from stdin.\ncat service.yaml | kubectl diff -f -", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22005"} {"prompt": "Explain this text code:\n\n# Diff resources included in \"pod.json\".\nkubectl diff -f pod.json\n\n# Diff file read from stdin.\ncat service.yaml | kubectl diff -f -", "completion": "This text code Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "source": 
"https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22006"} {"prompt": "Explain this text code:\n\n# create a simple plugin in any language and name the resulting executable file\n# so that it begins with the prefix \"kubectl-\"\ncat ./kubectl-hello", "completion": "This text code Command line tool (kubectl) | With a plugin written, let's make it executable:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22007"} {"prompt": "Complete this text code:\n\n#!/bin/sh\n", "completion": "# this plugin prints the words \"hello world\"\necho \"hello world\"", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22008"} {"prompt": "Explain this text code:\n\n#!/bin/sh\n\n# this plugin prints the words \"hello world\"\necho \"hello world\"", "completion": "This text code Command line tool (kubectl) | With a plugin written, let's make it executable:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22009"} {"prompt": "Complete this text code:\n\nchmod a+x ./kubectl-hello\n\n# and move it to a location in our PATH\nsudo mv ./kubectl-hello /usr/local/bin", "completion": "sudo chown root:root /usr/local/bin\n\n# You have now created and \"installed\" a kubectl plugin.\n# You can begin using this plugin by invoking it from kubectl as if it were a regular command\nkubectl hello", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | In order to view all of the plugins that are 
available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22010"} {"prompt": "Explain this text code:\n\nchmod a+x ./kubectl-hello\n\n# and move it to a location in our PATH\nsudo mv ./kubectl-hello /usr/local/bin\nsudo chown root:root /usr/local/bin\n\n# You have now created and \"installed\" a kubectl plugin.\n# You can begin using this plugin by invoking it from kubectl as if it were a regular command\nkubectl hello", "completion": "This text code Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22011"} {"prompt": "Explain this text code:\n\n# You can \"uninstall\" a plugin, by removing it from the folder in your\n# $PATH where you placed it\nsudo rm /usr/local/bin/kubectl-hello", "completion": "This text code Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22012"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n", "completion": "/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo\n/usr/local/bin/kubectl-bar", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl plugin list also warns you about plugins that are not 
executable, or that are shadowed by other plugins; for example:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22013"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo\n/usr/local/bin/kubectl-bar", "completion": "This text code Command line tool (kubectl) | kubectl plugin list also warns you about plugins that are not executable, or that are shadowed by other plugins; for example:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl plugin list also warns you about plugins that are not executable, or that are shadowed by other plugins; for example:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22014"} {"prompt": "Explain this text code:\n\nsudo chmod -x /usr/local/bin/kubectl-foo # remove execute permission\nkubectl plugin list", "completion": "This text code Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22015"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo", "completion": " - warning: /usr/local/bin/kubectl-foo identified as a plugin, but it is not executable\n/usr/local/bin/kubectl-bar\n\nerror: one plugin warning was found", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl 
commands:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22016"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo\n - warning: /usr/local/bin/kubectl-foo identified as a plugin, but it is not executable\n/usr/local/bin/kubectl-bar\n\nerror: one plugin warning was found", "completion": "This text code Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22017"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n", "completion": "# this plugin makes use of the `kubectl config` command in order to output\n# information about the current user, based on the currently selected context\nkubectl config view --template='{{ range .contexts }}{{ if eq .name \"'$(kubectl config current-context)'\" }}Current user: {{ printf \"%s\\n\" .context.user }}{{ end }}{{ end }}'", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22018"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# this plugin makes use of the `kubectl config` command in order to output\n# information about the current user, based on the currently selected context\nkubectl config view --template='{{ range .contexts }}{{ if eq .name \"'$(kubectl config current-context)'\" }}Current user: {{ printf \"%s\\n\" .context.user }}{{ end 
}}{{ end }}'", "completion": "This text code Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22019"} {"prompt": "Complete this text code:\n\n# make the file executable\nsudo chmod +x ./kubectl-whoami\n\n# and move it into your PATH", "completion": "sudo mv ./kubectl-whoami /usr/local/bin\n\nkubectl whoami\nCurrent user: plugins-user", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) |", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22020"} {"prompt": "Explain this text code:\n\n# make the file executable\nsudo chmod +x ./kubectl-whoami\n\n# and move it into your PATH\nsudo mv ./kubectl-whoami /usr/local/bin\n\nkubectl whoami\nCurrent user: plugins-user", "completion": "This text code Command line tool (kubectl) |", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) |", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22021"} {"prompt": "Explain this shell code:\n\nkubectl get pod pod1\nkubectl get pods pod1\nkubectl get po pod1", "completion": "This shell code Command line tool (kubectl) | NAME: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods.", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | NAME: Specifies the name of the resource. Names are case-sensitive. 
If the name is omitted, details for all resources are displayed, for example kubectl get pods.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22022"} {"prompt": "Explain this text code:\n\nTYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>", "completion": "This text code Command line tool (kubectl) | To specify resources with one or more files: -f file1 -f file2 -f file<#>", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | To specify resources with one or more files: -f file1 -f file2 -f file<#>", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22023"} {"prompt": "Explain this bash code:\n\nkubectl get pod/example-pod1 replicationcontroller/example-rc1", "completion": "This bash code Command line tool (kubectl) | To specify resources with one or more files: -f file1 -f file2 -f file<#>", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To specify resources with one or more files: -f file1 -f file2 -f file<#>", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22024"} {"prompt": "Explain this text code:\n\n/var/run/secrets/kubernetes.io/serviceaccount/token", "completion": "This text code Command line tool (kubectl) | To maintain backwards compatibility, if the POD_NAMESPACE environment variable is set during in-cluster authentication it will override the default namespace from the service account token. Any manife", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | To maintain backwards compatibility, if the POD_NAMESPACE environment variable is set during in-cluster authentication it will override the default namespace from the service account token. 
Any manife", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22025"} {"prompt": "Explain this text code:\n\n/var/run/secrets/kubernetes.io/serviceaccount/token", "completion": "This text code Command line tool (kubectl) | then kubectl assumes it is running in your cluster. The kubectl tool looks up the namespace of that ServiceAccount (this is the same as the namespace of the Pod) and acts against that namespace. This", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | then kubectl assumes it is running in your cluster. The kubectl tool looks up the namespace of that ServiceAccount (this is the same as the namespace of the Pod) and acts against that namespace. This", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22026"} {"prompt": "Explain this shell code:\n\nkubectl config set-context --current --namespace=", "completion": "This shell code Command line tool (kubectl) | The following table includes short descriptions and the general syntax for all of the kubectl operations:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | The following table includes short descriptions and the general syntax for all of the kubectl operations:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22027"} {"prompt": "Explain this bash code:\n\nkubectl annotate (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22028"} {"prompt": "Explain this bash code:\n\nkubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22029"} {"prompt": "Explain this bash code:\n\nkubectl cp [options]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22030"} {"prompt": "Explain this bash code:\n\nkubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference 
documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22031"} {"prompt": "Explain this bash code:\n\nkubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22032"} {"prompt": "Explain this bash code:\n\nkubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22033"} {"prompt": "Explain this bash code:\n\nkubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22034"} {"prompt": "Explain this bash code:\n\nkubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags]", "completion": "This bash code Command line 
tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22035"} {"prompt": "Explain this bash code:\n\nkubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22036"} {"prompt": "Explain this bash code:\n\nkubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22037"} {"prompt": "Explain this bash code:\n\nkubectl patch (-f FILENAME | TYPE NAME | TYPE/NAME) --patch PATCH [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22038"} {"prompt": "Explain this bash code:\n\nkubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22039"} {"prompt": "Explain this bash code:\n\nkubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about 
command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22040"} {"prompt": "Explain this bash code:\n\nkubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--dry-run=server|client|none] [--overrides=inline-json] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22041"} {"prompt": "Explain this bash code:\n\nkubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22042"} {"prompt": "Explain this bash code:\n\nkubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... 
KEY_N=VAL_N:TAINT_EFFECT_N [options]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22043"} {"prompt": "Explain this bash code:\n\nkubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available] [options]", "completion": "This bash code Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To learn more about command operations, see the kubectl reference documentation.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22044"} {"prompt": "Explain this shell code:\n\nkubectl get pods -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion", "completion": "This shell code Command line tool (kubectl) | Template file:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | Template file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22045"} {"prompt": "Explain this shell code:\n\nkubectl get pods -o custom-columns-file=template.txt", "completion": "This shell code Command line tool (kubectl) | where the template.txt file contains:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | where the template.txt file contains:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22046"} {"prompt": "Explain this text code:\n\nNAME RSRC\nmetadata.name 
metadata.resourceVersion", "completion": "This text code Command line tool (kubectl) | The result of running either command is similar to:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | The result of running either command is similar to:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22047"} {"prompt": "Explain this shell code:\n\nkubectl [command] [TYPE] [NAME] --sort-by=", "completion": "This shell code Command line tool (kubectl) | To print a list of pods sorted by name, you run:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | To print a list of pods sorted by name, you run:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22048"} {"prompt": "Complete this shell code:\n\n# Create a service using the definition in example-service.yaml.\nkubectl apply -f example-service.yaml\n\n# Create a replication controller using the definition in example-controller.yaml.", "completion": "kubectl apply -f example-controller.yaml\n\n# Create the objects that are defined in any .yaml, .yml, or .json file within the directory.\nkubectl apply -f ", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | kubectl get - List one or more resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22049"} {"prompt": "Explain this shell code:\n\n# Create a service using the definition in example-service.yaml.\nkubectl apply -f example-service.yaml\n\n# Create a replication controller using the definition in example-controller.yaml.\nkubectl apply -f example-controller.yaml\n\n# Create the objects that are defined in any .yaml, .yml, or .json file within the directory.\nkubectl apply -f ", "completion": "This shell code Command line tool (kubectl) | kubectl get - List one or more resources.", "language": "shell", "type": 
"code-example_explanation", "context": "Command line tool (kubectl) | kubectl get - List one or more resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22050"} {"prompt": "Complete this shell code:\n\n# List all pods in plain-text output format.\nkubectl get pods\n\n# List all pods in plain-text output format and include additional information (such as node name).\nkubectl get pods -o wide\n\n# List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.\nkubectl get replicationcontroller ", "completion": "\n# List all replication controllers and services together in plain-text output format.\nkubectl get rc,services\n\n# List all daemon sets in plain-text output format.\nkubectl get ds\n\n# List all pods running on node server01\nkubectl get pods --field-selector=spec.nodeName=server01", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22051"} {"prompt": "Explain this shell code:\n\n# List all pods in plain-text output format.\nkubectl get pods\n\n# List all pods in plain-text output format and include additional information (such as node name).\nkubectl get pods -o wide\n\n# List the replication controller with the specified name in plain-text output format. 
Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.\nkubectl get replicationcontroller \n\n# List all replication controllers and services together in plain-text output format.\nkubectl get rc,services\n\n# List all daemon sets in plain-text output format.\nkubectl get ds\n\n# List all pods running on node server01\nkubectl get pods --field-selector=spec.nodeName=server01", "completion": "This shell code Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22052"} {"prompt": "Complete this shell code:\n\n# Display the details of the node with name .\nkubectl describe nodes \n\n# Display the details of the pod with name .\nkubectl describe pods/\n", "completion": "# Display the details of all the pods that are managed by the replication controller named .\n# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.\nkubectl describe pods \n\n# Describe all pods\nkubectl describe pods", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22053"} {"prompt": "Explain this shell code:\n\n# Display the details of the node with name .\nkubectl describe nodes \n\n# Display the details of the pod with name .\nkubectl describe pods/\n\n# Display the details of all the pods that are managed by the replication controller 
named .\n# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.\nkubectl describe pods \n\n# Describe all pods\nkubectl describe pods", "completion": "This shell code Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22054"} {"prompt": "Complete this shell code:\n\n# Delete a pod using the type and name specified in the pod.yaml file.\nkubectl delete -f pod.yaml\n\n# Delete all the pods and services that have the label '='.", "completion": "kubectl delete pods,services -l =\n\n# Delete all pods, including uninitialized ones.\nkubectl delete pods --all", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22055"} {"prompt": "Explain this shell code:\n\n# Delete a pod using the type and name specified in the pod.yaml file.\nkubectl delete -f pod.yaml\n\n# Delete all the pods and services that have the label '='.\nkubectl delete pods,services -l =\n\n# Delete all pods, including uninitialized ones.\nkubectl delete pods --all", "completion": "This shell code Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": 
"k8s_00703_code_explain_22056"} {"prompt": "Complete this shell code:\n\n# Get output from running 'date' from pod . By default, output is from the first container.\nkubectl exec -- date\n\n# Get output from running 'date' in container of pod .", "completion": "kubectl exec -c -- date\n\n# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container.\nkubectl exec -ti -- /bin/bash", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22057"} {"prompt": "Explain this shell code:\n\n# Get output from running 'date' from pod . By default, output is from the first container.\nkubectl exec -- date\n\n# Get output from running 'date' in container of pod .\nkubectl exec -c -- date\n\n# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container.\nkubectl exec -ti -- /bin/bash", "completion": "This shell code Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22058"} {"prompt": "Complete this shell code:\n\n# Return a snapshot of the logs from pod .\nkubectl logs ", "completion": "\n# Start streaming the logs from pod . 
This is similar to the 'tail -f' Linux command.\nkubectl logs -f ", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22059"} {"prompt": "Explain this shell code:\n\n# Return a snapshot of the logs from pod .\nkubectl logs \n\n# Start streaming the logs from pod . This is similar to the 'tail -f' Linux command.\nkubectl logs -f ", "completion": "This shell code Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22060"} {"prompt": "Complete this shell code:\n\n# Diff resources included in \"pod.json\".\nkubectl diff -f pod.json", "completion": "\n# Diff file read from stdin.\ncat service.yaml | kubectl diff -f -", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22061"} {"prompt": "Explain this shell code:\n\n# Diff resources included in \"pod.json\".\nkubectl diff -f pod.json\n\n# Diff file read from stdin.\ncat service.yaml | kubectl diff -f -", "completion": "This shell code Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "source": 
"https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22062"} {"prompt": "Explain this shell code:\n\n# create a simple plugin in any language and name the resulting executable file\n# so that it begins with the prefix \"kubectl-\"\ncat ./kubectl-hello", "completion": "This shell code Command line tool (kubectl) | With a plugin written, let's make it executable:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22063"} {"prompt": "Complete this shell code:\n\n#!/bin/sh\n", "completion": "# this plugin prints the words \"hello world\"\necho \"hello world\"", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22064"} {"prompt": "Explain this shell code:\n\n#!/bin/sh\n\n# this plugin prints the words \"hello world\"\necho \"hello world\"", "completion": "This shell code Command line tool (kubectl) | With a plugin written, let's make it executable:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22065"} {"prompt": "Complete this bash code:\n\nchmod a+x ./kubectl-hello\n\n# and move it to a location in our PATH\nsudo mv ./kubectl-hello /usr/local/bin", "completion": "sudo chown root:root /usr/local/bin\n\n# You have now created and \"installed\" a kubectl plugin.\n# You can begin using this plugin by invoking it from kubectl as if it were a regular command\nkubectl hello", "language": "bash", "type": "shell-command", "context": "Command line tool (kubectl) | In order to view all of the plugins 
that are available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22066"} {"prompt": "Explain this bash code:\n\nchmod a+x ./kubectl-hello\n\n# and move it to a location in our PATH\nsudo mv ./kubectl-hello /usr/local/bin\nsudo chown root:root /usr/local/bin\n\n# You have now created and \"installed\" a kubectl plugin.\n# You can begin using this plugin by invoking it from kubectl as if it were a regular command\nkubectl hello", "completion": "This bash code Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "language": "bash", "type": "shell-command_explanation", "context": "Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22067"} {"prompt": "Explain this shell code:\n\n# You can \"uninstall\" a plugin, by removing it from the folder in your\n# $PATH where you placed it\nsudo rm /usr/local/bin/kubectl-hello", "completion": "This shell code Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22068"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n", "completion": "/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo\n/usr/local/bin/kubectl-bar", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl plugin list also warns you about plugins 
that are not executable, or that are shadowed by other plugins; for example:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22069"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo\n/usr/local/bin/kubectl-bar", "completion": "This text code Command line tool (kubectl) | kubectl plugin list also warns you about plugins that are not executable, or that are shadowed by other plugins; for example:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl plugin list also warns you about plugins that are not executable, or that are shadowed by other plugins; for example:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22070"} {"prompt": "Explain this shell code:\n\nsudo chmod -x /usr/local/bin/kubectl-foo # remove execute permission\nkubectl plugin list", "completion": "This shell code Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22071"} {"prompt": "Complete this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo", "completion": " - warning: /usr/local/bin/kubectl-foo identified as a plugin, but it is not executable\n/usr/local/bin/kubectl-bar\n\nerror: one plugin warning was found", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the 
existing kubectl commands:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22072"} {"prompt": "Explain this text code:\n\nThe following kubectl-compatible plugins are available:\n\n/usr/local/bin/kubectl-hello\n/usr/local/bin/kubectl-foo\n - warning: /usr/local/bin/kubectl-foo identified as a plugin, but it is not executable\n/usr/local/bin/kubectl-bar\n\nerror: one plugin warning was found", "completion": "This text code Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22073"} {"prompt": "Complete this shell code:\n\n#!/bin/bash\n", "completion": "# this plugin makes use of the `kubectl config` command in order to output\n# information about the current user, based on the currently selected context\nkubectl config view --template='{{ range .contexts }}{{ if eq .name \"'$(kubectl config current-context)'\" }}Current user: {{ printf \"%s\\n\" .context.user }}{{ end }}{{ end }}'", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22074"} {"prompt": "Explain this shell code:\n\n#!/bin/bash\n\n# this plugin makes use of the `kubectl config` command in order to output\n# information about the current user, based on the currently selected context\nkubectl config view --template='{{ range .contexts }}{{ if eq .name \"'$(kubectl config current-context)'\" }}Current user: {{ printf \"%s\\n\" 
.context.user }}{{ end }}{{ end }}'", "completion": "This shell code Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22075"} {"prompt": "Complete this shell code:\n\n# make the file executable\nsudo chmod +x ./kubectl-whoami\n\n# and move it into your PATH", "completion": "sudo mv ./kubectl-whoami /usr/local/bin\n\nkubectl whoami\nCurrent user: plugins-user", "language": "shell", "type": "code-example", "context": "Command line tool (kubectl) |", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22076"} {"prompt": "Explain this shell code:\n\n# make the file executable\nsudo chmod +x ./kubectl-whoami\n\n# and move it into your PATH\nsudo mv ./kubectl-whoami /usr/local/bin\n\nkubectl whoami\nCurrent user: plugins-user", "completion": "This shell code Command line tool (kubectl) |", "language": "shell", "type": "code-example_explanation", "context": "Command line tool (kubectl) |", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22077"} {"prompt": "Explain this bash code:\n\nkubectl get pod pod1\nkubectl get pods pod1\nkubectl get po pod1", "completion": "This bash code Command line tool (kubectl) | NAME: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods.", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | NAME: Specifies the name of the resource. Names are case-sensitive. 
If the name is omitted, details for all resources are displayed, for example kubectl get pods.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22078"} {"prompt": "Explain this bash code:\n\nkubectl config set-context --current --namespace=", "completion": "This bash code Command line tool (kubectl) | The following table includes short descriptions and the general syntax for all of the kubectl operations:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | The following table includes short descriptions and the general syntax for all of the kubectl operations:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22079"} {"prompt": "Explain this bash code:\n\nkubectl get pods -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion", "completion": "This bash code Command line tool (kubectl) | Template file:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | Template file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22080"} {"prompt": "Explain this bash code:\n\nkubectl get pods -o custom-columns-file=template.txt", "completion": "This bash code Command line tool (kubectl) | where the template.txt file contains:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | where the template.txt file contains:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22081"} {"prompt": "Explain this bash code:\n\nkubectl [command] [TYPE] [NAME] --sort-by=", "completion": "This bash code Command line tool (kubectl) | To print a list of pods sorted by name, you run:", "language": "bash", "type": "kubectl-command_explanation", "context": "Command line tool (kubectl) | To print a list of pods sorted by name, you run:", "source": 
"https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22082"} {"prompt": "Complete this text code:\n\n# Create a service using the definition in example-service.yaml.\nkubectl apply -f example-service.yaml\n\n# Create a replication controller using the definition in example-controller.yaml.", "completion": "kubectl apply -f example-controller.yaml\n\n# Create the objects that are defined in any .yaml, .yml, or .json file within the directory.\nkubectl apply -f ", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl get - List one or more resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22083"} {"prompt": "Explain this text code:\n\n# Create a service using the definition in example-service.yaml.\nkubectl apply -f example-service.yaml\n\n# Create a replication controller using the definition in example-controller.yaml.\nkubectl apply -f example-controller.yaml\n\n# Create the objects that are defined in any .yaml, .yml, or .json file within the directory.\nkubectl apply -f ", "completion": "This text code Command line tool (kubectl) | kubectl get - List one or more resources.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl get - List one or more resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22084"} {"prompt": "Complete this text code:\n\n# List all pods in plain-text output format.\nkubectl get pods\n\n# List all pods in plain-text output format and include additional information (such as node name).\nkubectl get pods -o wide\n\n# List the replication controller with the specified name in plain-text output format. 
Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.\nkubectl get replicationcontroller ", "completion": "\n# List all replication controllers and services together in plain-text output format.\nkubectl get rc,services\n\n# List all daemon sets in plain-text output format.\nkubectl get ds\n\n# List all pods running on node server01\nkubectl get pods --field-selector=spec.nodeName=server01", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22085"} {"prompt": "Explain this text code:\n\n# List all pods in plain-text output format.\nkubectl get pods\n\n# List all pods in plain-text output format and include additional information (such as node name).\nkubectl get pods -o wide\n\n# List the replication controller with the specified name in plain-text output format. 
Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.\nkubectl get replicationcontroller \n\n# List all replication controllers and services together in plain-text output format.\nkubectl get rc,services\n\n# List all daemon sets in plain-text output format.\nkubectl get ds\n\n# List all pods running on node server01\nkubectl get pods --field-selector=spec.nodeName=server01", "completion": "This text code Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl describe - Display detailed state of one or more resources, including the uninitialized ones by default.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22086"} {"prompt": "Complete this text code:\n\n# Display the details of the node with name .\nkubectl describe nodes \n\n# Display the details of the pod with name .\nkubectl describe pods/\n", "completion": "# Display the details of all the pods that are managed by the replication controller named .\n# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.\nkubectl describe pods \n\n# Describe all pods\nkubectl describe pods", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22087"} {"prompt": "Explain this text code:\n\n# Display the details of the node with name .\nkubectl describe nodes \n\n# Display the details of the pod with name .\nkubectl describe pods/\n\n# Display the details of all the pods that are managed by the replication controller named 
.\n# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.\nkubectl describe pods \n\n# Describe all pods\nkubectl describe pods", "completion": "This text code Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22088"} {"prompt": "Complete this text code:\n\n# Delete a pod using the type and name specified in the pod.yaml file.\nkubectl delete -f pod.yaml\n\n# Delete all the pods and services that have the label '='.", "completion": "kubectl delete pods,services -l =\n\n# Delete all pods, including uninitialized ones.\nkubectl delete pods --all", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22089"} {"prompt": "Explain this text code:\n\n# Delete a pod using the type and name specified in the pod.yaml file.\nkubectl delete -f pod.yaml\n\n# Delete all the pods and services that have the label '='.\nkubectl delete pods,services -l =\n\n# Delete all pods, including uninitialized ones.\nkubectl delete pods --all", "completion": "This text code Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl exec - Execute a command against a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": 
"k8s_00703_code_explain_22090"} {"prompt": "Complete this text code:\n\n# Get output from running 'date' from pod . By default, output is from the first container.\nkubectl exec -- date\n\n# Get output from running 'date' in container of pod .", "completion": "kubectl exec -c -- date\n\n# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container.\nkubectl exec -ti -- /bin/bash", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22091"} {"prompt": "Explain this text code:\n\n# Get output from running 'date' from pod . By default, output is from the first container.\nkubectl exec -- date\n\n# Get output from running 'date' in container of pod .\nkubectl exec -c -- date\n\n# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container.\nkubectl exec -ti -- /bin/bash", "completion": "This text code Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl logs - Print the logs for a container in a pod.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22092"} {"prompt": "Complete this text code:\n\n# Return a snapshot of the logs from pod .\nkubectl logs ", "completion": "\n# Start streaming the logs from pod . 
This is similar to the 'tail -f' Linux command.\nkubectl logs -f ", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22093"} {"prompt": "Explain this text code:\n\n# Return a snapshot of the logs from pod .\nkubectl logs \n\n# Start streaming the logs from pod . This is similar to the 'tail -f' Linux command.\nkubectl logs -f ", "completion": "This text code Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | kubectl diff - View a diff of the proposed updates to a cluster.", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22094"} {"prompt": "Complete this text code:\n\n# Diff resources included in \"pod.json\".\nkubectl diff -f pod.json", "completion": "\n# Diff file read from stdin.\ncat service.yaml | kubectl diff -f -", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22095"} {"prompt": "Explain this text code:\n\n# Diff resources included in \"pod.json\".\nkubectl diff -f pod.json\n\n# Diff file read from stdin.\ncat service.yaml | kubectl diff -f -", "completion": "This text code Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | Use the following set of examples to help you familiarize yourself with writing and using kubectl plugins:", "source": 
"https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22096"} {"prompt": "Explain this text code:\n\n# create a simple plugin in any language and name the resulting executable file\n# so that it begins with the prefix \"kubectl-\"\ncat ./kubectl-hello", "completion": "This text code Command line tool (kubectl) | With a plugin written, let's make it executable:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22097"} {"prompt": "Complete this text code:\n\n#!/bin/sh\n", "completion": "# this plugin prints the words \"hello world\"\necho \"hello world\"", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22098"} {"prompt": "Explain this text code:\n\n#!/bin/sh\n\n# this plugin prints the words \"hello world\"\necho \"hello world\"", "completion": "This text code Command line tool (kubectl) | With a plugin written, let's make it executable:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | With a plugin written, let's make it executable:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22099"} {"prompt": "Complete this text code:\n\nchmod a+x ./kubectl-hello\n\n# and move it to a location in our PATH\nsudo mv ./kubectl-hello /usr/local/bin", "completion": "sudo chown root:root /usr/local/bin\n\n# You have now created and \"installed\" a kubectl plugin.\n# You can begin using this plugin by invoking it from kubectl as if it were a regular command\nkubectl hello", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | In order to view all of the plugins that are 
available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22100"} {"prompt": "Explain this text code:\n\nchmod a+x ./kubectl-hello\n\n# and move it to a location in our PATH\nsudo mv ./kubectl-hello /usr/local/bin\nsudo chown root:root /usr/local/bin\n\n# You have now created and \"installed\" a kubectl plugin.\n# You can begin using this plugin by invoking it from kubectl as if it were a regular command\nkubectl hello", "completion": "This text code Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22101"} {"prompt": "Explain this text code:\n\n# You can \"uninstall\" a plugin, by removing it from the folder in your\n# $PATH where you placed it\nsudo rm /usr/local/bin/kubectl-hello", "completion": "This text code Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | In order to view all of the plugins that are available to kubectl, use the kubectl plugin list subcommand:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22102"} {"prompt": "Explain this text code:\n\nsudo chmod -x /usr/local/bin/kubectl-foo # remove execute permission\nkubectl plugin list", "completion": "This text code Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "language": "text", "type": 
"code-example_explanation", "context": "Command line tool (kubectl) | You can think of plugins as a means to build more complex functionality on top of the existing kubectl commands:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22103"} {"prompt": "Complete this text code:\n\n#!/bin/bash\n", "completion": "# this plugin makes use of the `kubectl config` command in order to output\n# information about the current user, based on the currently selected context\nkubectl config view --template='{{ range .contexts }}{{ if eq .name \"'$(kubectl config current-context)'\" }}Current user: {{ printf \"%s\\n\" .context.user }}{{ end }}{{ end }}'", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22104"} {"prompt": "Explain this text code:\n\n#!/bin/bash\n\n# this plugin makes use of the `kubectl config` command in order to output\n# information about the current user, based on the currently selected context\nkubectl config view --template='{{ range .contexts }}{{ if eq .name \"'$(kubectl config current-context)'\" }}Current user: {{ printf \"%s\\n\" .context.user }}{{ end }}{{ end }}'", "completion": "This text code Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) | Running the above command gives you an output containing the user for the current context in your KUBECONFIG file:", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22105"} {"prompt": "Complete this text code:\n\n# make the file executable\nsudo chmod +x ./kubectl-whoami\n\n# and move it into your 
PATH", "completion": "sudo mv ./kubectl-whoami /usr/local/bin\n\nkubectl whoami\nCurrent user: plugins-user", "language": "text", "type": "code-example", "context": "Command line tool (kubectl) |", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_completion_22106"} {"prompt": "Explain this text code:\n\n# make the file executable\nsudo chmod +x ./kubectl-whoami\n\n# and move it into your PATH\nsudo mv ./kubectl-whoami /usr/local/bin\n\nkubectl whoami\nCurrent user: plugins-user", "completion": "This text code Command line tool (kubectl) |", "language": "text", "type": "code-example_explanation", "context": "Command line tool (kubectl) |", "source": "https://kubernetes.io/docs/reference/kubectl/", "id": "k8s_00703_code_explain_22107"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:", "completion": " type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "language": "yaml", "type": "pod-definition", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. 
If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_completion_22108"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "completion": "This yaml code Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "language": "yaml", "type": "pod-definition_explanation", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22109"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"defaultErrnoRet\": 38,\n \"syscalls\": [\n {\n \"names\": [\n \"adjtimex\",\n \"alarm\",\n \"bind\",", "completion": " \"waitid\",\n \"waitpid\",\n \"write\",\n \"writev\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. 
The error is defined as code 38 via the defaultErrnoRet field.", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_completion_22110"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"defaultErrnoRet\": 38,\n \"syscalls\": [\n {\n \"names\": [\n \"adjtimex\",\n \"alarm\",\n \"bind\",\n \"waitid\",\n \"waitpid\",\n \"write\",\n \"writev\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "completion": "This json code Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. The error is defined as code 38 via the defaultErrnoRet field.", "language": "json", "type": "api-response_explanation", "context": "Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. The error is defined as code 38 via the defaultErrnoRet field.", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22111"} {"prompt": "Explain this text code:\n\nspec.initContainers[*].securityContext.seccompProfile", "completion": "This text code Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "language": "text", "type": "code-example_explanation", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. 
If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22112"} {"prompt": "Explain this text code:\n\nspec.ephemeralContainers[*].securityContext.seccompProfile", "completion": "This text code Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "language": "text", "type": "code-example_explanation", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22113"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:", "completion": " type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "language": "yaml", "type": "pod-definition", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. 
If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_completion_22114"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "completion": "This yaml code Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "language": "yaml", "type": "pod-definition_explanation", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22115"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"defaultErrnoRet\": 38,\n \"syscalls\": [\n {\n \"names\": [\n \"adjtimex\",\n \"alarm\",\n \"bind\",", "completion": " \"waitid\",\n \"waitpid\",\n \"write\",\n \"writev\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. 
The error is defined as code 38 via the defaultErrnoRet field.", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_completion_22116"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"defaultErrnoRet\": 38,\n \"syscalls\": [\n {\n \"names\": [\n \"adjtimex\",\n \"alarm\",\n \"bind\",\n \"waitid\",\n \"waitpid\",\n \"write\",\n \"writev\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "completion": "This json code Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. The error is defined as code 38 via the defaultErrnoRet field.", "language": "json", "type": "api-response_explanation", "context": "Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. The error is defined as code 38 via the defaultErrnoRet field.", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22117"} {"prompt": "Complete this yaml code:\n\npods/security/seccomp/fields.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:", "completion": " type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "language": "yaml", "type": "pod-definition", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. 
If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_completion_22118"} {"prompt": "Explain this yaml code:\n\npods/security/seccomp/fields.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "completion": "This yaml code Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "language": "yaml", "type": "pod-definition_explanation", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. 
If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22119"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:", "completion": " type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "language": "yaml", "type": "pod-definition", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_completion_22120"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod\nspec:\n securityContext:\n seccompProfile:\n type: Unconfined\n ephemeralContainers:\n - name: ephemeral-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n initContainers:\n - name: init-container\n image: debian\n securityContext:\n seccompProfile:\n type: RuntimeDefault\n containers:\n - name: container\n image: docker.io/library/debian:stable\n securityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profile.json", "completion": "This yaml code Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. 
If the ephemeral or init container would not have set the securit", "language": "yaml", "type": "pod-definition_explanation", "context": "Seccomp and Kubernetes | The Pod in the example above runs as Unconfined, while the ephemeral-container and init-container specifically defines RuntimeDefault. If the ephemeral or init container would not have set the securit", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22121"} {"prompt": "Complete this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"defaultErrnoRet\": 38,\n \"syscalls\": [\n {\n \"names\": [\n \"adjtimex\",\n \"alarm\",\n \"bind\",", "completion": " \"waitid\",\n \"waitpid\",\n \"write\",\n \"writev\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. The error is defined as code 38 via the defaultErrnoRet field.", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_completion_22122"} {"prompt": "Explain this json code:\n\n{\n \"defaultAction\": \"SCMP_ACT_ERRNO\",\n \"defaultErrnoRet\": 38,\n \"syscalls\": [\n {\n \"names\": [\n \"adjtimex\",\n \"alarm\",\n \"bind\",\n \"waitid\",\n \"waitpid\",\n \"write\",\n \"writev\"\n ],\n \"action\": \"SCMP_ACT_ALLOW\"\n }\n ]\n}", "completion": "This json code Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. The error is defined as code 38 via the defaultErrnoRet field.", "language": "json", "type": "api-response_explanation", "context": "Seccomp and Kubernetes | The defaultAction in the profile above is defined as SCMP_ACT_ERRNO and will return as fallback to the actions defined in syscalls. 
The error is defined as code 38 via the defaultErrnoRet field.", "source": "https://kubernetes.io/docs/reference/node/seccomp/", "id": "k8s_00705_code_explain_22123"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/json", "completion": "\n… JSON encoded collection of Pods (PodList object)", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Create a pod by sending JSON to the server, requesting a JSON response.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22124"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/json\n\n… JSON encoded collection of Pods (PodList object)", "completion": "This text code Kubernetes API Concepts | Create a pod by sending JSON to the server, requesting a JSON response.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Create a pod by sending JSON to the server, requesting a JSON response.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22125"} {"prompt": "Complete this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/json", "completion": "Accept: application/json\n… JSON encoded Pod object", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22126"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/json\nAccept: application/json\n… JSON encoded Pod object", "completion": "This text code Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. 
YAML can be used for defining Kubernetes manifests and API interactions.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22127"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/json\n\n{", "completion": " \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n …\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22128"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n …\n}", "completion": "This text code Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. 
YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22129"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/yaml", "completion": "\n… YAML encoded collection of Pods (PodList object)", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Create a pod by sending YAML-encoded data to the server, requesting a YAML response:", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22130"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/yaml\n\n… YAML encoded collection of Pods (PodList object)", "completion": "This text code Kubernetes API Concepts | Create a pod by sending YAML-encoded data to the server, requesting a YAML response:", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Create a pod by sending YAML-encoded data to the server, requesting a YAML response:", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22131"} {"prompt": "Complete this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/yaml", "completion": "Accept: application/yaml\n… YAML encoded Pod object", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). 
The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22132"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/yaml\nAccept: application/yaml\n… YAML encoded Pod object", "completion": "This text code Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22133"} {"prompt": "Complete this yaml code:\n\n200 OK\nContent-Type: application/yaml\n\napiVersion: v1", "completion": "kind: Pod\nmetadata:\n name: my-pod\n …", "language": "yaml", "type": "pod-definition", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22134"} {"prompt": "Explain this yaml code:\n\n200 OK\nContent-Type: application/yaml\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\n …", "completion": "This yaml code Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). 
The 4 byt", "language": "yaml", "type": "pod-definition_explanation", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22135"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods\nAccept: application/vnd.kubernetes.protobuf", "completion": "This text code Kubernetes API Concepts | Create a pod by sending Protobuf encoded data to the server, but request a response in JSON.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Create a pod by sending Protobuf encoded data to the server, but request a response in JSON.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22136"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/vnd.kubernetes.protobuf", "completion": "\n… binary encoded collection of Pods (PodList object)", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Create a pod by sending Protobuf encoded data to the server, but request a response in JSON.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22137"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/vnd.kubernetes.protobuf\n\n… binary encoded collection of Pods (PodList object)", "completion": "This text code Kubernetes API Concepts | Create a pod by sending Protobuf encoded data to the server, but request a response in JSON.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Create a pod by sending Protobuf encoded data to the server, but request a response in JSON.", "source": 
"https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22138"} {"prompt": "Complete this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/vnd.kubernetes.protobuf", "completion": "Accept: application/json\n… binary encoded Pod object", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22139"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/vnd.kubernetes.protobuf\nAccept: application/json\n… binary encoded Pod object", "completion": "This text code Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22140"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/json\n\n{", "completion": " \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n ...\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. 
Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22141"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n ...\n}", "completion": "This text code Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22142"} {"prompt": "Complete this text code:\n\nA four byte magic number prefix:\n Bytes 0-3: \"k8s\\x00\" [0x6b, 0x38, 0x73, 0x00]\n\nAn encoded Protobuf message with the following IDL:\n message Unknown {\n // typeMeta should have the string values for \"kind\" and \"apiVersion\" as set on the JSON object\n optional TypeMeta typeMeta = 1;\n\n // raw will hold the complete serialized object in protobuf. See the protobuf definitions in the client libraries for a given kind.\n optional bytes raw = 2;\n\n // contentEncoding is encoding used for the raw data. Unspecified means no encoding.", "completion": " optional string contentEncoding = 3;\n\n // contentType is the serialization method used to serialize 'raw'. Unspecified means application/vnd.kubernetes.protobuf and is usually\n // omitted.\n optional string contentType = 4;\n }\n\n message TypeMeta {\n // apiVersion is the group/version for this type\n optional string apiVersion = 1;\n // kind is the name of the object schema. 
A protobuf definition should exist for this object.\n optional string kind = 2;\n }", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Not all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't available for resources that are defined as CustomResourceDefinitions or are served via the aggregation l", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22143"} {"prompt": "Explain this text code:\n\nA four byte magic number prefix:\n Bytes 0-3: \"k8s\\x00\" [0x6b, 0x38, 0x73, 0x00]\n\nAn encoded Protobuf message with the following IDL:\n message Unknown {\n // typeMeta should have the string values for \"kind\" and \"apiVersion\" as set on the JSON object\n optional TypeMeta typeMeta = 1;\n\n // raw will hold the complete serialized object in protobuf. See the protobuf definitions in the client libraries for a given kind.\n optional bytes raw = 2;\n\n // contentEncoding is encoding used for the raw data. Unspecified means no encoding.\n optional string contentEncoding = 3;\n\n // contentType is the serialization method used to serialize 'raw'. Unspecified means application/vnd.kubernetes.protobuf and is usually\n // omitted.\n optional string contentType = 4;\n }\n\n message TypeMeta {\n // apiVersion is the group/version for this type\n optional string apiVersion = 1;\n // kind is the name of the object schema. 
A protobuf definition should exist for this object.\n optional string kind = 2;\n }", "completion": "This text code Kubernetes API Concepts | Not all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't available for resources that are defined as CustomResourceDefinitions or are served via the aggregation l", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Not all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't available for resources that are defined as CustomResourceDefinitions or are served via the aggregation l", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22144"} {"prompt": "Explain this text code:\n\nAccept: application/vnd.kubernetes.protobuf, application/json", "completion": "This text code Kubernetes API Concepts | With the CBORServingAndStorage feature gate enabled, request and response bodies for all built-in resource types and all resources defined by a CustomResourceDefinition may be encoded to the CBOR bina", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | With the CBORServingAndStorage feature gate enabled, request and response bodies for all built-in resource types and all resources defined by a CustomResourceDefinition may be encoded to the CBOR bina", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22145"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods\n---\n200 OK\nContent-Type: application/json\n", "completion": "{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\"resourceVersion\":\"10245\"},\n \"items\": [...]\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that 
affect Pods in the test namespace. Each change notification is a JSON d", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22146"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\"resourceVersion\":\"10245\"},\n \"items\": [...]\n}", "completion": "This text code Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. Each change notification is a JSON d", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. Each change notification is a JSON d", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22147"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{", "completion": " \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n{\n \"type\": \"MODIFIED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...}\n}\n...", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. 
When the requested watch operat", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22148"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n{\n \"type\": \"MODIFIED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...}\n}\n...", "completion": "This text code Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. When the requested watch operat", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. 
When the requested watch operat", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22149"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{", "completion": " \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n...\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} }\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22150"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n...\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} }\n}", "completion": "This text code Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query 
parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22151"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...}\n}", "completion": "{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...}\n}\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} }\n}\n...\n", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22152"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...}\n}\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...}\n}\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", 
\"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} }\n}\n...\n", "completion": "This text code Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22153"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods\nAccept-Encoding: gzip\n---", "completion": "200 OK\nContent-Type: application/json\ncontent-encoding: gzip\n...", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22154"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods\nAccept-Encoding: gzip\n---\n200 OK\nContent-Type: application/json\ncontent-encoding: gzip\n...", "completion": "This text code Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22155"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods?limit=500\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",", "completion": " \"metadata\": {\n 
\"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN\",\n \"remainingItemCount\": 753,\n ...\n },\n \"items\": [...] // returns pods 1-500\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22156"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods?limit=500\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN\",\n \"remainingItemCount\": 753,\n ...\n },\n \"items\": [...] // returns pods 1-500\n}", "completion": "This text code Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22157"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",", "completion": " \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN_2\",\n \"remainingItemCount\": 253,\n ...\n },\n \"items\": [...] 
// returns pods 501-1000\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22158"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN_2\",\n \"remainingItemCount\": 253,\n ...\n },\n \"items\": [...] // returns pods 501-1000\n}", "completion": "This text code Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22159"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",", "completion": " \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"\", // continue token is empty because we have reached the end of the list\n ...\n },\n \"items\": [...] // returns pods 1001-1253\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. 
Pods that are created, updated, or delet", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22160"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"\", // continue token is empty because we have reached the end of the list\n ...\n },\n \"items\": [...] // returns pods 1001-1253\n}", "completion": "This text code Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. Pods that are created, updated, or delet", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. Pods that are created, updated, or delet", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22161"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"2947301\"\n },\n \"items\": [\n {", "completion": " \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n...\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n...", "language": "json", "type": "api-response", "context": "Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. 
You can get more information about each collection type from the Kubernetes API documen", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22162"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"2947301\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n...\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n...", "completion": "This json code Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. You can get more information about each collection type from the Kubernetes API documen", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. 
You can get more information about each collection type from the Kubernetes API documen", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22163"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"\nitems:\n- apiVersion: v1\n kind: Service\n metadata:\n creationTimestamp: \"2021-06-03T14:54:12Z\"\n labels:\n component: apiserver\n provider: kubernetes\n name: kubernetes", "completion": " namespace: default\n...\n- apiVersion: v1\n kind: Service\n metadata:\n annotations:\n prometheus.io/port: \"9153\"\n prometheus.io/scrape: \"true\"\n creationTimestamp: \"2021-06-03T14:54:14Z\"\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system", "language": "yaml", "type": "service-definition", "context": "Kubernetes API Concepts | Keep in mind that the Kubernetes API does not have a kind named List.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22164"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"\nitems:\n- apiVersion: v1\n kind: Service\n metadata:\n creationTimestamp: \"2021-06-03T14:54:12Z\"\n labels:\n component: apiserver\n provider: kubernetes\n name: kubernetes\n namespace: default\n...\n- apiVersion: v1\n kind: Service\n metadata:\n annotations:\n prometheus.io/port: \"9153\"\n prometheus.io/scrape: \"true\"\n creationTimestamp: \"2021-06-03T14:54:14Z\"\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system", "completion": "This yaml code Kubernetes API Concepts | Keep in mind that the Kubernetes API does not have a kind named List.", "language": "yaml", "type": "service-definition_explanation", "context": "Kubernetes API Concepts | Keep 
in mind that the Kubernetes API does not have a kind named List.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22165"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1\n---\n200 OK\nContent-Type: application/json\n\n{", "completion": " \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n ...\n ]\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22166"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n ...\n ]\n}", "completion": "This text code Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22167"} {"prompt": "Complete this text code:\n\nGET /apis/crd.example.com/v1alpha1/namespaces/default/resources\n---\n200 OK\nContent-Type: application/json\n...\n\n{\n \"kind\": \"Table\",\n 
\"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [", "completion": " {\n \"name\": \"Name\",\n \"type\": \"string\",\n ...\n },\n {\n \"name\": \"Created At\",\n \"type\": \"date\",\n ...\n }\n ]\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22168"} {"prompt": "Explain this text code:\n\nGET /apis/crd.example.com/v1alpha1/namespaces/default/resources\n---\n200 OK\nContent-Type: application/json\n...\n\n{\n \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n {\n \"name\": \"Name\",\n \"type\": \"string\",\n ...\n },\n {\n \"name\": \"Created At\",\n \"type\": \"date\",\n ...\n }\n ]\n}", "completion": "This text code Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22169"} {"prompt": "Explain this text code:\n\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1, application/json", "completion": "This text code Kubernetes API Concepts | When you delete a resource this takes place in two phases.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | 
When you delete a resource this takes place in two phases.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22170"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"ConfigMap\",\n \"apiVersion\": \"v1\",\n \"metadata\": {", "completion": " \"finalizers\": [\"url.io/neat-finalization\", \"other-url.io/my-finalizer\"],\n \"deletionTimestamp\": nil,\n }\n}", "language": "json", "type": "api-response", "context": "Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. Once the .metadata.deletionTimestamp is set, external controllers t", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22171"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"ConfigMap\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"finalizers\": [\"url.io/neat-finalization\", \"other-url.io/my-finalizer\"],\n \"deletionTimestamp\": nil,\n }\n}", "completion": "This json code Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. Once the .metadata.deletionTimestamp is set, external controllers t", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. 
Once the .metadata.deletionTimestamp is set, external controllers t", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22172"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods?dryRun=All\nContent-Type: application/json\nAccept: application/json", "completion": "This text code Kubernetes API Concepts | The response would look the same as for non-dry-run request, but the values of some generated fields may differ.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The response would look the same as for non-dry-run request, but the values of some generated fields may differ.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22173"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"apps\"]", "completion": " resources: [\"deployments\"]\n verbs: [\"patch\"]", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | See Authorization Overview.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22174"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"apps\"]\n resources: [\"deployments\"]\n verbs: [\"patch\"]", "completion": "This text code Kubernetes API Concepts | See Authorization Overview.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | See Authorization Overview.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22175"} {"prompt": "Explain this text code:\n\n/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment", "completion": "This text code Kubernetes API Concepts | You can also access collections of resources (for example: listing all Nodes). 
The following paths are used to retrieve collections and resources:", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | You can also access collections of resources (for example: listing all Nodes). The following paths are used to retrieve collections and resources:", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22176"} {"prompt": "Explain this text code:\n\nGET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE", "completion": "This text code Kubernetes API Concepts | Since a namespace is a cluster-scoped resource type, you can retrieve the list (“collection”) of all namespaces with GET /api/v1/namespaces and details about a particular namespace with GET /api/v1/na", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Since a namespace is a cluster-scoped resource type, you can retrieve the list (“collection”) of all namespaces with GET /api/v1/namespaces and details about a particular namespace with GET /api/v1/na", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22177"} {"prompt": "Explain this text code:\n\nGET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME", "completion": "This text code Kubernetes API Concepts | Since a namespace is a cluster-scoped resource type, you can retrieve the list (“collection”) of all namespaces with GET /api/v1/namespaces and details about a particular namespace with GET /api/v1/na", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Since a namespace is a cluster-scoped resource type, you can retrieve the list (“collection”) of all namespaces with GET /api/v1/namespaces and details about a particular namespace with GET /api/v1/na", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22178"} {"prompt": "Explain this text 
code:\n\nGET /apis/GROUP/VERSION/RESOURCETYPE/NAME/SUBRESOURCE", "completion": "This text code Kubernetes API Concepts | The verbs supported for each subresource will differ depending on the object - see the API reference for more information. It is not possible to access sub-resources across multiple resources - genera", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The verbs supported for each subresource will differ depending on the object - see the API reference for more information. It is not possible to access sub-resources across multiple resources - genera", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22179"} {"prompt": "Explain this text code:\n\nGET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME/SUBRESOURCE", "completion": "This text code Kubernetes API Concepts | The verbs supported for each subresource will differ depending on the object - see the API reference for more information. It is not possible to access sub-resources across multiple resources - genera", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The verbs supported for each subresource will differ depending on the object - see the API reference for more information. 
It is not possible to access sub-resources across multiple resources - genera", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22180"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/json", "completion": "\n… JSON encoded collection of Pods (PodList object)", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Create a pod by sending JSON to the server, requesting a JSON response.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22181"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/json\n\n… JSON encoded collection of Pods (PodList object)", "completion": "This text code Kubernetes API Concepts | Create a pod by sending JSON to the server, requesting a JSON response.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Create a pod by sending JSON to the server, requesting a JSON response.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22182"} {"prompt": "Complete this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/json", "completion": "Accept: application/json\n… JSON encoded Pod object", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. 
YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22183"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/json\nAccept: application/json\n… JSON encoded Pod object", "completion": "This text code Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22184"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/json\n\n{", "completion": " \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n …\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22185"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n …\n}", "completion": "This text code Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. 
YAML can be used for defining Kubernetes manifests and API interactions.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Kubernetes also supports the application/yaml media type for both requests and responses. YAML can be used for defining Kubernetes manifests and API interactions.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22186"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/yaml", "completion": "\n… YAML encoded collection of Pods (PodList object)", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Create a pod by sending YAML-encoded data to the server, requesting a YAML response:", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22187"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/yaml\n\n… YAML encoded collection of Pods (PodList object)", "completion": "This text code Kubernetes API Concepts | Create a pod by sending YAML-encoded data to the server, requesting a YAML response:", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Create a pod by sending YAML-encoded data to the server, requesting a YAML response:", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22188"} {"prompt": "Complete this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/yaml", "completion": "Accept: application/yaml\n… YAML encoded Pod object", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). 
The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22189"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/yaml\nAccept: application/yaml\n… YAML encoded Pod object", "completion": "This text code Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22190"} {"prompt": "Complete this yaml code:\n\n200 OK\nContent-Type: application/yaml\n\napiVersion: v1", "completion": "kind: Pod\nmetadata:\n name: my-pod\n …", "language": "yaml", "type": "pod-definition", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22191"} {"prompt": "Explain this yaml code:\n\n200 OK\nContent-Type: application/yaml\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: my-pod\n …", "completion": "This yaml code Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). 
The 4 byt", "language": "yaml", "type": "pod-definition_explanation", "context": "Kubernetes API Concepts | Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON). The 4 byt", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22192"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods\nAccept: application/vnd.kubernetes.protobuf", "completion": "This text code Kubernetes API Concepts | List all of the pods on a cluster in Protobuf format.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | List all of the pods on a cluster in Protobuf format.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22193"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/vnd.kubernetes.protobuf", "completion": "\n… binary encoded collection of Pods (PodList object)", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | List all of the pods on a cluster in Protobuf format.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22194"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/vnd.kubernetes.protobuf\n\n… binary encoded collection of Pods (PodList object)", "completion": "This text code Kubernetes API Concepts | List all of the pods on a cluster in Protobuf format.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | List all of the pods on a cluster in Protobuf format.", "source": 
"https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22195"} {"prompt": "Complete this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/vnd.kubernetes.protobuf", "completion": "Accept: application/json\n… binary encoded Pod object", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22196"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods\nContent-Type: application/vnd.kubernetes.protobuf\nAccept: application/json\n… binary encoded Pod object", "completion": "This text code Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22197"} {"prompt": "Complete this text code:\n\n200 OK\nContent-Type: application/json\n\n{", "completion": " \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n ...\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. 
Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22198"} {"prompt": "Explain this text code:\n\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"Pod\",\n \"apiVersion\": \"v1\",\n ...\n}", "completion": "This text code Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | You can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that supports it, for both reads and writes. Only some API resource types are compatible with Protob", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22199"} {"prompt": "Complete this text code:\n\nA four byte magic number prefix:\n Bytes 0-3: \"k8s\\x00\" [0x6b, 0x38, 0x73, 0x00]\n\nAn encoded Protobuf message with the following IDL:\n message Unknown {\n // typeMeta should have the string values for \"kind\" and \"apiVersion\" as set on the JSON object\n optional TypeMeta typeMeta = 1;\n\n // raw will hold the complete serialized object in protobuf. See the protobuf definitions in the client libraries for a given kind.\n optional bytes raw = 2;\n\n // contentEncoding is encoding used for the raw data. Unspecified means no encoding.", "completion": " optional string contentEncoding = 3;\n\n // contentType is the serialization method used to serialize 'raw'. Unspecified means application/vnd.kubernetes.protobuf and is usually\n // omitted.\n optional string contentType = 4;\n }\n\n message TypeMeta {\n // apiVersion is the group/version for this type\n optional string apiVersion = 1;\n // kind is the name of the object schema. 
A protobuf definition should exist for this object.\n optional string kind = 2;\n }", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Not all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't available for resources that are defined as CustomResourceDefinitions or are served via the aggregation l", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22200"} {"prompt": "Explain this text code:\n\nA four byte magic number prefix:\n Bytes 0-3: \"k8s\\x00\" [0x6b, 0x38, 0x73, 0x00]\n\nAn encoded Protobuf message with the following IDL:\n message Unknown {\n // typeMeta should have the string values for \"kind\" and \"apiVersion\" as set on the JSON object\n optional TypeMeta typeMeta = 1;\n\n // raw will hold the complete serialized object in protobuf. See the protobuf definitions in the client libraries for a given kind.\n optional bytes raw = 2;\n\n // contentEncoding is encoding used for the raw data. Unspecified means no encoding.\n optional string contentEncoding = 3;\n\n // contentType is the serialization method used to serialize 'raw'. Unspecified means application/vnd.kubernetes.protobuf and is usually\n // omitted.\n optional string contentType = 4;\n }\n\n message TypeMeta {\n // apiVersion is the group/version for this type\n optional string apiVersion = 1;\n // kind is the name of the object schema. 
A protobuf definition should exist for this object.\n optional string kind = 2;\n }", "completion": "This text code Kubernetes API Concepts | Not all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't available for resources that are defined as CustomResourceDefinitions or are served via the aggregation l", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Not all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't available for resources that are defined as CustomResourceDefinitions or are served via the aggregation l", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22201"} {"prompt": "Explain this text code:\n\nAccept: application/vnd.kubernetes.protobuf, application/json", "completion": "This text code Kubernetes API Concepts | With the CBORServingAndStorage feature gate enabled, request and response bodies for all built-in resource types and all resources defined by a CustomResourceDefinition may be encoded to the CBOR bina", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | With the CBORServingAndStorage feature gate enabled, request and response bodies for all built-in resource types and all resources defined by a CustomResourceDefinition may be encoded to the CBOR bina", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22202"} {"prompt": "Complete this http code:\n\nGET /api/v1/namespaces/test/pods\n---\n200 OK\nContent-Type: application/json\n", "completion": "{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\"resourceVersion\":\"10245\"},\n \"items\": [...]\n}", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that 
affect Pods in the test namespace. Each change notification is a JSON d", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22203"} {"prompt": "Explain this http code:\n\nGET /api/v1/namespaces/test/pods\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\"resourceVersion\":\"10245\"},\n \"items\": [...]\n}", "completion": "This http code Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. Each change notification is a JSON d", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. Each change notification is a JSON d", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22204"} {"prompt": "Complete this http code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{", "completion": " \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n{\n \"type\": \"MODIFIED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...}\n}\n...", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. 
When the requested watch operat", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22205"} {"prompt": "Explain this http code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n{\n \"type\": \"MODIFIED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...}\n}\n...", "completion": "This http code Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. When the requested watch operat", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. 
When the requested watch operat", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22206"} {"prompt": "Complete this http code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{", "completion": " \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n...\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} }\n}", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22207"} {"prompt": "Explain this http code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n...\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} }\n}", "completion": "This http code Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query 
parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22208"} {"prompt": "Complete this http code:\n\nGET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...}\n}", "completion": "{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...}\n}\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} }\n}\n...\n", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22209"} {"prompt": "Explain this http code:\n\nGET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...}\n}\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...}\n}\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", 
\"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} }\n}\n...\n", "completion": "This http code Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22210"} {"prompt": "Complete this http code:\n\nGET /api/v1/pods\nAccept-Encoding: gzip\n---", "completion": "200 OK\nContent-Type: application/json\ncontent-encoding: gzip\n...", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22211"} {"prompt": "Explain this http code:\n\nGET /api/v1/pods\nAccept-Encoding: gzip\n---\n200 OK\nContent-Type: application/json\ncontent-encoding: gzip\n...", "completion": "This http code Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22212"} {"prompt": "Complete this http code:\n\nGET /api/v1/pods?limit=500\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",", "completion": " \"metadata\": {\n 
\"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN\",\n \"remainingItemCount\": 753,\n ...\n },\n \"items\": [...] // returns pods 1-500\n}", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22213"} {"prompt": "Explain this http code:\n\nGET /api/v1/pods?limit=500\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN\",\n \"remainingItemCount\": 753,\n ...\n },\n \"items\": [...] // returns pods 1-500\n}", "completion": "This http code Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22214"} {"prompt": "Complete this http code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",", "completion": " \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN_2\",\n \"remainingItemCount\": 253,\n ...\n },\n \"items\": [...] 
// returns pods 501-1000\n}", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22215"} {"prompt": "Explain this http code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN_2\",\n \"remainingItemCount\": 253,\n ...\n },\n \"items\": [...] // returns pods 501-1000\n}", "completion": "This http code Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22216"} {"prompt": "Complete this http code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",", "completion": " \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"\", // continue token is empty because we have reached the end of the list\n ...\n },\n \"items\": [...] // returns pods 1001-1253\n}", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. 
Pods that are created, updated, or delet", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22217"} {"prompt": "Explain this http code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"\", // continue token is empty because we have reached the end of the list\n ...\n },\n \"items\": [...] // returns pods 1001-1253\n}", "completion": "This http code Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. Pods that are created, updated, or delet", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. Pods that are created, updated, or delet", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22218"} {"prompt": "Complete this yaml code:\n\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"2947301\"\n },\n \"items\": [\n {", "completion": " \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n...\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. 
You can get more information about each collection type from the Kubernetes API documen", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22219"} {"prompt": "Explain this yaml code:\n\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"2947301\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n...\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n...", "completion": "This yaml code Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. You can get more information about each collection type from the Kubernetes API documen", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. 
You can get more information about each collection type from the Kubernetes API documen", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22220"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"\nitems:\n- apiVersion: v1\n kind: Service\n metadata:\n creationTimestamp: \"2021-06-03T14:54:12Z\"\n labels:\n component: apiserver\n provider: kubernetes\n name: kubernetes", "completion": " namespace: default\n...\n- apiVersion: v1\n kind: Service\n metadata:\n annotations:\n prometheus.io/port: \"9153\"\n prometheus.io/scrape: \"true\"\n creationTimestamp: \"2021-06-03T14:54:14Z\"\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system", "language": "yaml", "type": "service-definition", "context": "Kubernetes API Concepts | Keep in mind that the Kubernetes API does not have a kind named List.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22221"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"\nitems:\n- apiVersion: v1\n kind: Service\n metadata:\n creationTimestamp: \"2021-06-03T14:54:12Z\"\n labels:\n component: apiserver\n provider: kubernetes\n name: kubernetes\n namespace: default\n...\n- apiVersion: v1\n kind: Service\n metadata:\n annotations:\n prometheus.io/port: \"9153\"\n prometheus.io/scrape: \"true\"\n creationTimestamp: \"2021-06-03T14:54:14Z\"\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system", "completion": "This yaml code Kubernetes API Concepts | Keep in mind that the Kubernetes API does not have a kind named List.", "language": "yaml", "type": "service-definition_explanation", "context": "Kubernetes API Concepts | Keep 
in mind that the Kubernetes API does not have a kind named List.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22222"} {"prompt": "Complete this http code:\n\nGET /api/v1/pods\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1\n---\n200 OK\nContent-Type: application/json\n\n{", "completion": " \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n ...\n ]\n}", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22223"} {"prompt": "Explain this http code:\n\nGET /api/v1/pods\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n ...\n ]\n}", "completion": "This http code Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22224"} {"prompt": "Complete this http code:\n\nGET /apis/crd.example.com/v1alpha1/namespaces/default/resources\n---\n200 OK\nContent-Type: application/json\n...\n\n{\n \"kind\": \"Table\",\n 
\"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [", "completion": " {\n \"name\": \"Name\",\n \"type\": \"string\",\n ...\n },\n {\n \"name\": \"Created At\",\n \"type\": \"date\",\n ...\n }\n ]\n}", "language": "http", "type": "code-example", "context": "Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22225"} {"prompt": "Explain this http code:\n\nGET /apis/crd.example.com/v1alpha1/namespaces/default/resources\n---\n200 OK\nContent-Type: application/json\n...\n\n{\n \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n {\n \"name\": \"Name\",\n \"type\": \"string\",\n ...\n },\n {\n \"name\": \"Created At\",\n \"type\": \"date\",\n ...\n }\n ]\n}", "completion": "This http code Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22226"} {"prompt": "Explain this text code:\n\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1, application/json", "completion": "This text code Kubernetes API Concepts | When you delete a resource this takes place in two phases.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | 
When you delete a resource this takes place in two phases.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22227"} {"prompt": "Complete this yaml code:\n\n{\n \"kind\": \"ConfigMap\",\n \"apiVersion\": \"v1\",\n \"metadata\": {", "completion": " \"finalizers\": [\"url.io/neat-finalization\", \"other-url.io/my-finalizer\"],\n \"deletionTimestamp\": nil,\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. Once the .metadata.deletionTimestamp is set, external controllers t", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22228"} {"prompt": "Explain this yaml code:\n\n{\n \"kind\": \"ConfigMap\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"finalizers\": [\"url.io/neat-finalization\", \"other-url.io/my-finalizer\"],\n \"deletionTimestamp\": nil,\n }\n}", "completion": "This yaml code Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. Once the .metadata.deletionTimestamp is set, external controllers t", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. 
Once the .metadata.deletionTimestamp is set, external controllers t", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22229"} {"prompt": "Explain this text code:\n\n--feature-gates=AllowUnsafeMalformedObjectDeletion=true", "completion": "This text code Kubernetes API Concepts | A resource is considered corrupt if it can not be successfully retrieved from the storage due to:", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | A resource is considered corrupt if it can not be successfully retrieved from the storage due to:", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22230"} {"prompt": "Explain this http code:\n\nPOST /api/v1/namespaces/test/pods?dryRun=All\nContent-Type: application/json\nAccept: application/json", "completion": "This http code Kubernetes API Concepts | The response would look the same as for non-dry-run request, but the values of some generated fields may differ.", "language": "http", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The response would look the same as for non-dry-run request, but the values of some generated fields may differ.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22231"} {"prompt": "Complete this yaml code:\n\nrules:\n- apiGroups: [\"apps\"]", "completion": " resources: [\"deployments\"]\n verbs: [\"patch\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubernetes API Concepts | See Authorization Overview.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22232"} {"prompt": "Explain this yaml code:\n\nrules:\n- apiGroups: [\"apps\"]\n resources: [\"deployments\"]\n verbs: [\"patch\"]", "completion": "This yaml code Kubernetes API Concepts | See Authorization Overview.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Kubernetes API Concepts | See Authorization Overview.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22233"} {"prompt": "Explain this json code:\n\n{\"op\": \"add\", \"path\": \"/a/b/c\", \"value\": [ \"foo\", \"bar\" ]}", "completion": "This json code Kubernetes API Concepts | A patch using application/json-patch+json can include conditions to validate consistency, allowing the operation to fail if those conditions are not met (for example, to avoid a lost update).", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API Concepts | A patch using application/json-patch+json can include conditions to validate consistency, allowing the operation to fail if those conditions are not met (for example, to avoid a lost update).", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22234"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods\n---\n200 OK\nContent-Type: application/json\n", "completion": "{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\"resourceVersion\":\"10245\"},\n \"items\": [...]\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. 
Each change notification is a JSON d", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22235"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\"resourceVersion\":\"10245\"},\n \"items\": [...]\n}", "completion": "This text code Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. Each change notification is a JSON d", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Starting from resource version 10245, receive notifications of any API operations (such as create, delete, patch or update) that affect Pods in the test namespace. Each change notification is a JSON d", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22236"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{", "completion": " \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n{\n \"type\": \"MODIFIED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...}\n}\n...", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. 
When the requested watch operat", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22237"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n{\n \"type\": \"MODIFIED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...}\n}\n...", "completion": "This text code Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. When the requested watch operat", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | A given Kubernetes server will only preserve a historical record of changes for a limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default. 
When the requested watch operat", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22238"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{", "completion": " \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n...\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} }\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22239"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n...\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} }\n}", "completion": "This text code Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | As a client, you can request BOOKMARK events by setting the allowWatchBookmarks=true query 
parameter to a watch request, but you shouldn't assume bookmarks are returned at any specific interval, nor c", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22240"} {"prompt": "Complete this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...}\n}", "completion": "{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...}\n}\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} }\n}\n...\n", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22241"} {"prompt": "Explain this text code:\n\nGET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application/json\n\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...}\n}\n{\n \"type\": \"ADDED\",\n \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...}\n}\n{\n \"type\": \"BOOKMARK\",\n \"object\": {\"kind\": \"Pod\", 
\"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} }\n}\n...\n", "completion": "This text code Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | APIResponseCompression is an option that allows the API server to compress the responses for get and list requests, reducing the network bandwidth and improving the performance of large-scale clusters", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22242"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods\nAccept-Encoding: gzip\n---", "completion": "200 OK\nContent-Type: application/json\ncontent-encoding: gzip\n...", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22243"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods\nAccept-Encoding: gzip\n---\n200 OK\nContent-Type: application/json\ncontent-encoding: gzip\n...", "completion": "This text code Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The content-encoding header indicates that the response is compressed with gzip.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22244"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods?limit=500\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",", "completion": " \"metadata\": {\n 
\"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN\",\n \"remainingItemCount\": 753,\n ...\n },\n \"items\": [...] // returns pods 1-500\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22245"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods?limit=500\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN\",\n \"remainingItemCount\": 753,\n ...\n },\n \"items\": [...] // returns pods 1-500\n}", "completion": "This text code Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the next set of 500 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22246"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",", "completion": " \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN_2\",\n \"remainingItemCount\": 253,\n ...\n },\n \"items\": [...] 
// returns pods 501-1000\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22247"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"ENCODED_CONTINUE_TOKEN_2\",\n \"remainingItemCount\": 253,\n ...\n },\n \"items\": [...] // returns pods 501-1000\n}", "completion": "This text code Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Continue the previous call, retrieving the last 253 pods.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22248"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",", "completion": " \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"\", // continue token is empty because we have reached the end of the list\n ...\n },\n \"items\": [...] // returns pods 1001-1253\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. 
Pods that are created, updated, or delet", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22249"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\":\"10245\",\n \"continue\": \"\", // continue token is empty because we have reached the end of the list\n ...\n },\n \"items\": [...] // returns pods 1001-1253\n}", "completion": "This text code Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. Pods that are created, updated, or delet", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Notice that the resourceVersion of the collection remains constant across each request, indicating the server is showing you a consistent snapshot of the pods. Pods that are created, updated, or delet", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22250"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"2947301\"\n },\n \"items\": [\n {", "completion": " \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n...\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n...", "language": "json", "type": "api-response", "context": "Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. 
You can get more information about each collection type from the Kubernetes API documen", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22251"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"2947301\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n...\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n...", "completion": "This json code Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. You can get more information about each collection type from the Kubernetes API documen", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API Concepts | There are dozens of collection types (such as PodList, ServiceList, and NodeList) defined in the Kubernetes API. 
You can get more information about each collection type from the Kubernetes API documen", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22252"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"\nitems:\n- apiVersion: v1\n kind: Service\n metadata:\n creationTimestamp: \"2021-06-03T14:54:12Z\"\n labels:\n component: apiserver\n provider: kubernetes\n name: kubernetes", "completion": " namespace: default\n...\n- apiVersion: v1\n kind: Service\n metadata:\n annotations:\n prometheus.io/port: \"9153\"\n prometheus.io/scrape: \"true\"\n creationTimestamp: \"2021-06-03T14:54:14Z\"\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system", "language": "yaml", "type": "service-definition", "context": "Kubernetes API Concepts | Keep in mind that the Kubernetes API does not have a kind named List.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22253"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"\nitems:\n- apiVersion: v1\n kind: Service\n metadata:\n creationTimestamp: \"2021-06-03T14:54:12Z\"\n labels:\n component: apiserver\n provider: kubernetes\n name: kubernetes\n namespace: default\n...\n- apiVersion: v1\n kind: Service\n metadata:\n annotations:\n prometheus.io/port: \"9153\"\n prometheus.io/scrape: \"true\"\n creationTimestamp: \"2021-06-03T14:54:14Z\"\n labels:\n k8s-app: kube-dns\n kubernetes.io/cluster-service: \"true\"\n kubernetes.io/name: CoreDNS\n name: kube-dns\n namespace: kube-system", "completion": "This yaml code Kubernetes API Concepts | Keep in mind that the Kubernetes API does not have a kind named List.", "language": "yaml", "type": "service-definition_explanation", "context": "Kubernetes API Concepts | Keep 
in mind that the Kubernetes API does not have a kind named List.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22254"} {"prompt": "Complete this text code:\n\nGET /api/v1/pods\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1\n---\n200 OK\nContent-Type: application/json\n\n{", "completion": " \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n ...\n ]\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22255"} {"prompt": "Explain this text code:\n\nGET /api/v1/pods\nAccept: application/json;as=Table;g=meta.k8s.io;v=v1\n---\n200 OK\nContent-Type: application/json\n\n{\n \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n ...\n ]\n}", "completion": "This text code Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | For API resource types that do not have a custom Table definition known to the control plane, the API server returns a default Table response that consists of the resource's name and creationTimestamp", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22256"} {"prompt": "Complete this text code:\n\nGET /apis/crd.example.com/v1alpha1/namespaces/default/resources\n---\n200 OK\nContent-Type: application/json\n...\n\n{\n \"kind\": \"Table\",\n 
\"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [", "completion": " {\n \"name\": \"Name\",\n \"type\": \"string\",\n ...\n },\n {\n \"name\": \"Created At\",\n \"type\": \"date\",\n ...\n }\n ]\n}", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22257"} {"prompt": "Explain this text code:\n\nGET /apis/crd.example.com/v1alpha1/namespaces/default/resources\n---\n200 OK\nContent-Type: application/json\n...\n\n{\n \"kind\": \"Table\",\n \"apiVersion\": \"meta.k8s.io/v1\",\n ...\n \"columnDefinitions\": [\n {\n \"name\": \"Name\",\n \"type\": \"string\",\n ...\n },\n {\n \"name\": \"Created At\",\n \"type\": \"date\",\n ...\n }\n ]\n}", "completion": "This text code Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | Not all API resource types support a Table response; for example, a CustomResourceDefinitions might not define field-to-table mappings, and an APIService that extends the core Kubernetes API might not", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22258"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"ConfigMap\",\n \"apiVersion\": \"v1\",\n \"metadata\": {", "completion": " \"finalizers\": [\"url.io/neat-finalization\", \"other-url.io/my-finalizer\"],\n \"deletionTimestamp\": nil,\n }\n}", "language": "json", "type": "api-response", "context": "Kubernetes API 
Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. Once the .metadata.deletionTimestamp is set, external controllers t", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22259"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"ConfigMap\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"finalizers\": [\"url.io/neat-finalization\", \"other-url.io/my-finalizer\"],\n \"deletionTimestamp\": nil,\n }\n}", "completion": "This json code Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. Once the .metadata.deletionTimestamp is set, external controllers t", "language": "json", "type": "api-response_explanation", "context": "Kubernetes API Concepts | When a client first sends a delete to request the removal of a resource, the .metadata.deletionTimestamp is set to the current time. 
Once the .metadata.deletionTimestamp is set, external controllers t", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22260"} {"prompt": "Explain this text code:\n\nPOST /api/v1/namespaces/test/pods?dryRun=All\nContent-Type: application/json\nAccept: application/json", "completion": "This text code Kubernetes API Concepts | The response would look the same as for non-dry-run request, but the values of some generated fields may differ.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | The response would look the same as for non-dry-run request, but the values of some generated fields may differ.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22261"} {"prompt": "Complete this text code:\n\nrules:\n- apiGroups: [\"apps\"]", "completion": " resources: [\"deployments\"]\n verbs: [\"patch\"]", "language": "text", "type": "code-example", "context": "Kubernetes API Concepts | See Authorization Overview.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_completion_22262"} {"prompt": "Explain this text code:\n\nrules:\n- apiGroups: [\"apps\"]\n resources: [\"deployments\"]\n verbs: [\"patch\"]", "completion": "This text code Kubernetes API Concepts | See Authorization Overview.", "language": "text", "type": "code-example_explanation", "context": "Kubernetes API Concepts | See Authorization Overview.", "source": "https://kubernetes.io/docs/reference/using-api/api-concepts/", "id": "k8s_00706_code_explain_22263"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -", "completion": "This bash code Managing Kubernetes Objects Using Imperative Commands | You can use kubectl create --edit to make arbitrary changes to an 
object before it is created. Here's an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Kubernetes Objects Using Imperative Commands | You can use kubectl create --edit to make arbitrary changes to an object before it is created. Here's an example:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/", "id": "k8s_00708_code_explain_22264"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client > /tmp/srv.yaml\nkubectl create --edit -f /tmp/srv.yaml", "completion": "This bash code Managing Kubernetes Objects Using Imperative Commands |", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Kubernetes Objects Using Imperative Commands |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/", "id": "k8s_00708_code_explain_22265"} {"prompt": "Explain this sh code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -", "completion": "This sh code Managing Kubernetes Objects Using Imperative Commands | You can use kubectl create --edit to make arbitrary changes to an object before it is created. Here's an example:", "language": "sh", "type": "code-example_explanation", "context": "Managing Kubernetes Objects Using Imperative Commands | You can use kubectl create --edit to make arbitrary changes to an object before it is created. 
Here's an example:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/", "id": "k8s_00708_code_explain_22266"} {"prompt": "Explain this sh code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client > /tmp/srv.yaml\nkubectl create --edit -f /tmp/srv.yaml", "completion": "This sh code Managing Kubernetes Objects Using Imperative Commands |", "language": "sh", "type": "code-example_explanation", "context": "Managing Kubernetes Objects Using Imperative Commands |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/", "id": "k8s_00708_code_explain_22267"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -", "completion": "This bash code Managing Kubernetes Objects Using Imperative Commands | You can use kubectl create --edit to make arbitrary changes to an object before it is created. Here's an example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Kubernetes Objects Using Imperative Commands | You can use kubectl create --edit to make arbitrary changes to an object before it is created. 
Here's an example:", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/", "id": "k8s_00708_code_explain_22268"} {"prompt": "Explain this bash code:\n\nkubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client > /tmp/srv.yaml\nkubectl create --edit -f /tmp/srv.yaml", "completion": "This bash code Managing Kubernetes Objects Using Imperative Commands |", "language": "bash", "type": "kubectl-command_explanation", "context": "Managing Kubernetes Objects Using Imperative Commands |", "source": "https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/", "id": "k8s_00708_code_explain_22269"} {"prompt": "Complete this text code:\n\n# Show merged kubeconfig settings\n kubectl config view\n \n # Show merged kubeconfig settings, raw certificate data, and exposed secrets", "completion": " kubectl config view --raw\n \n # Get the password for the e2e user\n kubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "language": "text", "type": "code-example", "context": "kubectl config view | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/", "id": "k8s_00709_code_completion_22270"} {"prompt": "Explain this text code:\n\n# Show merged kubeconfig settings\n kubectl config view\n \n # Show merged kubeconfig settings, raw certificate data, and exposed secrets\n kubectl config view --raw\n \n # Get the password for the e2e user\n kubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This text code kubectl config view | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl config view | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/", "id": "k8s_00709_code_explain_22271"} {"prompt": "Complete this text code:\n\n# Show merged kubeconfig settings\n kubectl config view\n \n # Show merged kubeconfig settings, raw certificate data, and exposed secrets", "completion": " kubectl config view --raw\n \n # Get the password for the e2e user\n kubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "language": "text", "type": "code-example", "context": "kubectl config view | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/", "id": "k8s_00709_code_completion_22272"} {"prompt": "Explain this text code:\n\n# Show merged kubeconfig settings\n kubectl config view\n \n # Show merged kubeconfig settings, raw certificate data, and exposed secrets\n kubectl config view --raw\n \n # Get the password for the e2e user\n kubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "completion": "This text code kubectl config view | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl config view | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_view/", "id": "k8s_00709_code_explain_22273"} {"prompt": "Complete this text code:\n\n# Dump current cluster state to stdout\n kubectl cluster-info dump\n \n # Dump current cluster state to /path/to/cluster-state\n kubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": " \n # Dump all namespaces to stdout\n kubectl cluster-info dump --all-namespaces\n \n # Dump a set of namespaces to /path/to/cluster-state\n kubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "language": "text", "type": "code-example", "context": "kubectl cluster-info dump | If true, dump all namespaces. If true, --namespaces is ignored.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/", "id": "k8s_00711_code_completion_22274"} {"prompt": "Explain this text code:\n\n# Dump current cluster state to stdout\n kubectl cluster-info dump\n \n # Dump current cluster state to /path/to/cluster-state\n kubectl cluster-info dump --output-directory=/path/to/cluster-state\n \n # Dump all namespaces to stdout\n kubectl cluster-info dump --all-namespaces\n \n # Dump a set of namespaces to /path/to/cluster-state\n kubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This text code kubectl cluster-info dump | If true, dump all namespaces. If true, --namespaces is ignored.", "language": "text", "type": "code-example_explanation", "context": "kubectl cluster-info dump | If true, dump all namespaces. 
If true, --namespaces is ignored.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/", "id": "k8s_00711_code_explain_22275"} {"prompt": "Complete this text code:\n\n# Dump current cluster state to stdout\n kubectl cluster-info dump\n \n # Dump current cluster state to /path/to/cluster-state\n kubectl cluster-info dump --output-directory=/path/to/cluster-state", "completion": " \n # Dump all namespaces to stdout\n kubectl cluster-info dump --all-namespaces\n \n # Dump a set of namespaces to /path/to/cluster-state\n kubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "language": "text", "type": "code-example", "context": "kubectl cluster-info dump | If true, dump all namespaces. If true, --namespaces is ignored.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/", "id": "k8s_00711_code_completion_22276"} {"prompt": "Explain this text code:\n\n# Dump current cluster state to stdout\n kubectl cluster-info dump\n \n # Dump current cluster state to /path/to/cluster-state\n kubectl cluster-info dump --output-directory=/path/to/cluster-state\n \n # Dump all namespaces to stdout\n kubectl cluster-info dump --all-namespaces\n \n # Dump a set of namespaces to /path/to/cluster-state\n kubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "completion": "This text code kubectl cluster-info dump | If true, dump all namespaces. If true, --namespaces is ignored.", "language": "text", "type": "code-example_explanation", "context": "kubectl cluster-info dump | If true, dump all namespaces. 
If true, --namespaces is ignored.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cluster-info/kubectl_cluster-info_dump/", "id": "k8s_00711_code_explain_22277"} {"prompt": "Explain this bash code:\n\nkubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--dry-run=server|client] [--overrides=inline-json] [--command] -- [COMMAND] [args...]", "completion": "This bash code kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_run/", "id": "k8s_00712_code_explain_22278"} {"prompt": "Complete this text code:\n\n# Start a nginx pod\n kubectl run nginx --image=nginx\n \n # Start a hazelcast pod and let the container expose port 5701\n kubectl run hazelcast --image=hazelcast/hazelcast --port=5701\n \n # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"\n \n # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"\n \n # Dry run; print the corresponding API objects without creating them", "completion": " kubectl run nginx --image=nginx --dry-run=client\n \n # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON\n kubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... 
} }'\n \n # Start a busybox pod and keep it in the foreground, don't restart it if it exits\n kubectl run -i -t busybox --image=busybox --restart=Never\n \n # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command\n kubectl run nginx --image=nginx -- ... \n \n # Start the nginx pod using a different command and custom arguments\n kubectl run nginx --image=nginx --command -- ... ", "language": "text", "type": "code-example", "context": "kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_run/", "id": "k8s_00712_code_completion_22279"} {"prompt": "Explain this text code:\n\n# Start a nginx pod\n kubectl run nginx --image=nginx\n \n # Start a hazelcast pod and let the container expose port 5701\n kubectl run hazelcast --image=hazelcast/hazelcast --port=5701\n \n # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"\n \n # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"\n \n # Dry run; print the corresponding API objects without creating them\n kubectl run nginx --image=nginx --dry-run=client\n \n # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON\n kubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }'\n \n # Start a busybox pod and keep it in the foreground, don't restart it if it exits\n kubectl run -i -t busybox --image=busybox --restart=Never\n \n # Start the nginx pod using the default command, but use custom arguments (arg1 .. 
argN) for that command\n kubectl run nginx --image=nginx -- ... \n \n # Start the nginx pod using a different command and custom arguments\n kubectl run nginx --image=nginx --command -- ... ", "completion": "This text code kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_run/", "id": "k8s_00712_code_explain_22280"} {"prompt": "Explain this bash code:\n\nkubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--dry-run=server|client] [--overrides=inline-json] [--command] -- [COMMAND] [args...]", "completion": "This bash code kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_run/", "id": "k8s_00712_code_explain_22281"} {"prompt": "Complete this text code:\n\n# Start a nginx pod\n kubectl run nginx --image=nginx\n \n # Start a hazelcast pod and let the container expose port 5701\n kubectl run hazelcast --image=hazelcast/hazelcast --port=5701\n \n # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"\n \n # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"\n \n # Dry run; print the corresponding API objects without creating them", "completion": " kubectl run nginx --image=nginx --dry-run=client\n \n # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON\n kubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }'\n \n # Start a busybox pod and keep it in the foreground, don't restart it if it exits\n kubectl run -i -t busybox --image=busybox --restart=Never\n \n # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command\n kubectl run nginx --image=nginx -- ... \n \n # Start the nginx pod using a different command and custom arguments\n kubectl run nginx --image=nginx --command -- ... ", "language": "text", "type": "code-example", "context": "kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_run/", "id": "k8s_00712_code_completion_22282"} {"prompt": "Explain this text code:\n\n# Start a nginx pod\n kubectl run nginx --image=nginx\n \n # Start a hazelcast pod and let the container expose port 5701\n kubectl run hazelcast --image=hazelcast/hazelcast --port=5701\n \n # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"\n \n # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container\n kubectl run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\"\n \n # Dry run; print the corresponding API objects without creating them\n kubectl run nginx --image=nginx --dry-run=client\n \n # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON\n kubectl run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }'\n \n # Start a busybox pod and keep it in the foreground, don't restart it if it exits\n kubectl run -i -t busybox --image=busybox --restart=Never\n \n # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command\n kubectl run nginx --image=nginx -- ... \n \n # Start the nginx pod using a different command and custom arguments\n kubectl run nginx --image=nginx --command -- ... ", "completion": "This text code kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl run | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_run/", "id": "k8s_00712_code_explain_22283"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_completion_22284"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_explain_22285"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_completion_22286"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_explain_22287"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_completion_22288"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_explain_22289"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_completion_22290"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_explain_22291"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:", "completion": " - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "language": "yaml", "type": "pod-definition", "context": "Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. 
See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_completion_22292"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/data\"\n name: my-csi-inline-vol\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-csi-inline-vol\n csi:\n driver: inline.storage.kubernetes.io\n volumeAttributes:\n foo: bar", "completion": "This yaml code Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "language": "yaml", "type": "pod-definition_explanation", "context": "Ephemeral Volumes | The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructio", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_explain_22293"} {"prompt": "Complete this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]", "completion": " volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "language": "yaml", "type": "pod-definition", "context": "Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_completion_22294"} {"prompt": "Explain this yaml code:\n\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-app\nspec:\n containers:\n - name: my-frontend\n image: busybox:1.28\n volumeMounts:\n - mountPath: \"/scratch\"\n name: scratch-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: scratch-volume\n ephemeral:\n volumeClaimTemplate:\n metadata:\n labels:\n type: my-frontend-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: \"scratch-storage-class\"\n resources:\n requests:\n storage: 1Gi", "completion": "This yaml code Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "language": "yaml", "type": "pod-definition_explanation", "context": "Ephemeral Volumes | The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. 
Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported.", "source": "https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/", "id": "k8s_00714_code_explain_22295"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n\napiVersion: kubelet.config.k8s.io/v1beta1", "completion": "kind: KubeletConfiguration\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: JoinConfiguration", "language": "yaml", "type": "kubernetes-manifest", "context": "kubeadm Configuration (v1beta3) | To print the defaults for \"init\" and \"join\" actions use the following commands:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_completion_22296"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: JoinConfiguration", "completion": "This yaml code kubeadm Configuration (v1beta3) | To print the defaults for \"init\" and \"join\" actions use the following commands:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm Configuration (v1beta3) | To print the defaults for \"init\" and \"join\" actions use the following commands:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_explain_22297"} {"prompt": "Explain this text code:\n\nkubeadm config print init-defaults\nkubeadm config print join-defaults", "completion": "This text code kubeadm Configuration (v1beta3) | The list of configuration types that must be included in a 
configuration file depends on the action you are performing (init or join) and on the configuration options you are going to use (defaults or", "language": "text", "type": "code-example_explanation", "context": "kubeadm Configuration (v1beta3) | The list of configuration types that must be included in a configuration file depends on the action you are performing (init or join) and on the configuration options you are going to use (defaults or", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_explain_22298"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\nbootstrapTokens:", "completion": " ...\nnodeRegistration:\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "kubeadm Configuration (v1beta3) | The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init are the configuration of the bootstrap token and all the settings which are specific to the node wh", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_completion_22299"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\nbootstrapTokens:\n ...\nnodeRegistration:\n ...", "completion": "This yaml code kubeadm Configuration (v1beta3) | The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init are the configuration of the bootstrap token and all the settings which are specific to the node wh", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm Configuration (v1beta3) | The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init are the configuration of the bootstrap token and all the settings which are specific to the node wh", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": 
"k8s_00715_code_explain_22300"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\nnetworking:\n ...\netcd:\n ...", "completion": "apiServer:\n extraArgs:\n ...\n extraVolumes:\n ...\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "kubeadm Configuration (v1beta3) | The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_completion_22301"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\nnetworking:\n ...\netcd:\n ...\napiServer:\n extraArgs:\n ...\n extraVolumes:\n ...\n...", "completion": "This yaml code kubeadm Configuration (v1beta3) | The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm Configuration (v1beta3) | The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_explain_22302"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n ...", "completion": "This yaml code kubeadm Configuration (v1beta3) | The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm ap", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm Configuration (v1beta3) | The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. 
If this object is not provided or provided only partially, kubeadm ap", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_explain_22303"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n ...", "completion": "This yaml code kubeadm Configuration (v1beta3) | The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partiall", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm Configuration (v1beta3) | The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partiall", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_explain_22304"} {"prompt": "Complete this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\nbootstrapTokens:\n - token: \"9a08jv.c0izixklcxtmnze7\"\n description: \"kubeadm bootstrap token\"\n ttl: \"24h\"\n - token: \"783bde.3f89s0fje9f38fhf\"\n description: \"another bootstrap token\"\n usages:\n - authentication\n - signing\n groups:\n - system:bootstrappers:kubeadm:default-node-token\nnodeRegistration:\n name: \"ec2-10-100-0-1\"\n criSocket: \"/var/run/dockershim.sock\"\n taints:\n - key: \"kubeadmNode\"\n value: \"someValue\"\n effect: \"NoSchedule\"\n kubeletExtraArgs:\n v: 4\n ignorePreflightErrors:\n - IsPrivilegedUser\n imagePullPolicy: \"IfNotPresent\"\nlocalAPIEndpoint:\n advertiseAddress: \"10.100.0.1\"\n bindPort: 6443\ncertificateKey: \"e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204\"\nskipPhases:\n - addon/kube-proxy\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\netcd:\n # one of local or external\n 
local:\n imageRepository: \"registry.k8s.io\"\n imageTag: \"3.2.24\"\n dataDir: \"/var/lib/etcd\"\n extraArgs:\n listen-client-urls: \"http://10.100.0.1:2379\"\n serverCertSANs:\n - \"ec2-10-100-0-1.compute-1.amazonaws.com\"\n peerCertSANs:\n - \"10.100.0.1\"\n # external:\n # endpoints:\n # - \"10.100.0.1:2379\"\n # - \"10.100.0.2:2379\"", "completion": " # caFile: \"/etcd/kubernetes/pki/etcd/etcd-ca.crt\"\n # certFile: \"/etcd/kubernetes/pki/etcd/etcd.crt\"\n # keyFile: \"/etcd/kubernetes/pki/etcd/etcd.key\"\nnetworking:\n serviceSubnet: \"10.96.0.0/16\"\n podSubnet: \"10.244.0.0/24\"\n dnsDomain: \"cluster.local\"\nkubernetesVersion: \"v1.21.0\"\ncontrolPlaneEndpoint: \"10.100.0.1:6443\"\napiServer:\n extraArgs:\n authorization-mode: \"Node,RBAC\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n certSANs:\n - \"10.100.1.1\"\n - \"ec2-10-100-0-1.compute-1.amazonaws.com\"\n timeoutForControlPlane: 4m0s\ncontrollerManager:\n extraArgs:\n \"node-cidr-mask-size\": \"20\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\nscheduler:\n extraArgs:\n bind-address: \"10.100.0.1\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\ncertificatesDir: \"/etc/kubernetes/pki\"\nimageRepository: \"registry.k8s.io\"\nclusterName: \"example-cluster\"\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# kubelet specific options here\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# kube-proxy specific options here", "language": "yaml", "type": "kubernetes-manifest", "context": "kubeadm Configuration (v1beta3) | When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.", "source": 
"https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_completion_22305"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\nbootstrapTokens:\n - token: \"9a08jv.c0izixklcxtmnze7\"\n description: \"kubeadm bootstrap token\"\n ttl: \"24h\"\n - token: \"783bde.3f89s0fje9f38fhf\"\n description: \"another bootstrap token\"\n usages:\n - authentication\n - signing\n groups:\n - system:bootstrappers:kubeadm:default-node-token\nnodeRegistration:\n name: \"ec2-10-100-0-1\"\n criSocket: \"/var/run/dockershim.sock\"\n taints:\n - key: \"kubeadmNode\"\n value: \"someValue\"\n effect: \"NoSchedule\"\n kubeletExtraArgs:\n v: 4\n ignorePreflightErrors:\n - IsPrivilegedUser\n imagePullPolicy: \"IfNotPresent\"\nlocalAPIEndpoint:\n advertiseAddress: \"10.100.0.1\"\n bindPort: 6443\ncertificateKey: \"e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204\"\nskipPhases:\n - addon/kube-proxy\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\netcd:\n # one of local or external\n local:\n imageRepository: \"registry.k8s.io\"\n imageTag: \"3.2.24\"\n dataDir: \"/var/lib/etcd\"\n extraArgs:\n listen-client-urls: \"http://10.100.0.1:2379\"\n serverCertSANs:\n - \"ec2-10-100-0-1.compute-1.amazonaws.com\"\n peerCertSANs:\n - \"10.100.0.1\"\n # external:\n # endpoints:\n # - \"10.100.0.1:2379\"\n # - \"10.100.0.2:2379\"\n # caFile: \"/etcd/kubernetes/pki/etcd/etcd-ca.crt\"\n # certFile: \"/etcd/kubernetes/pki/etcd/etcd.crt\"\n # keyFile: \"/etcd/kubernetes/pki/etcd/etcd.key\"\nnetworking:\n serviceSubnet: \"10.96.0.0/16\"\n podSubnet: \"10.244.0.0/24\"\n dnsDomain: \"cluster.local\"\nkubernetesVersion: \"v1.21.0\"\ncontrolPlaneEndpoint: \"10.100.0.1:6443\"\napiServer:\n extraArgs:\n authorization-mode: \"Node,RBAC\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\n certSANs:\n - 
\"10.100.1.1\"\n - \"ec2-10-100-0-1.compute-1.amazonaws.com\"\n timeoutForControlPlane: 4m0s\ncontrollerManager:\n extraArgs:\n \"node-cidr-mask-size\": \"20\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\nscheduler:\n extraArgs:\n bind-address: \"10.100.0.1\"\n extraVolumes:\n - name: \"some-volume\"\n hostPath: \"/etc/some-path\"\n mountPath: \"/etc/some-pod-path\"\n readOnly: false\n pathType: File\ncertificatesDir: \"/etc/kubernetes/pki\"\nimageRepository: \"registry.k8s.io\"\nclusterName: \"example-cluster\"\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# kubelet specific options here\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# kube-proxy specific options here", "completion": "This yaml code kubeadm Configuration (v1beta3) | When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm Configuration (v1beta3) | When executing kubeadm join with the --config option, the JoinConfiguration type should be provided.", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_explain_22306"} {"prompt": "Explain this yaml code:\n\napiVersion: kubeadm.k8s.io/v1beta3\nkind: JoinConfiguration\n ...", "completion": "This yaml code kubeadm Configuration (v1beta3) | The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join are the discovery method used for accessing the cluster info and all the setting which are specifi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "kubeadm Configuration (v1beta3) | The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join are the discovery method used for accessing the cluster info and 
all the setting which are specifi", "source": "https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/", "id": "k8s_00715_code_explain_22307"} {"prompt": "Complete this text code:\n\nCreating cluster \"psa-wo-cluster-pss\" ...\n✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n✓ Preparing nodes 📦\n✓ Writing configuration 📜\n✓ Starting control-plane 🕹️\n✓ Installing CNI 🔌", "completion": "✓ Installing StorageClass 💾\nSet kubectl context to \"kind-psa-wo-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-wo-cluster-pss\n\nThanks for using kind! 😊", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22308"} {"prompt": "Explain this text code:\n\nCreating cluster \"psa-wo-cluster-pss\" ...\n✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n✓ Preparing nodes 📦\n✓ Writing configuration 📜\n✓ Starting control-plane 🕹️\n✓ Installing CNI 🔌\n✓ Installing StorageClass 💾\nSet kubectl context to \"kind-psa-wo-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-wo-cluster-pss\n\nThanks for using kind! 
😊", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Set the kubectl context to the new cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22309"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info --context kind-psa-wo-cluster-pss", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22310"} {"prompt": "Complete this text code:\n\nKubernetes control plane is running at https://127.0.0.1:61350\n", "completion": "CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Get a list of namespaces in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22311"} {"prompt": "Explain this text code:\n\nKubernetes control plane is running at https://127.0.0.1:61350\n\nCoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Get a list of namespaces in the cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at 
the Cluster Level | Get a list of namespaces in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22312"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 9m30s\nkube-node-lease Active 9m32s", "completion": "kube-public Active 9m32s\nkube-system Active 9m32s\nlocal-path-storage Active 9m26s", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Use --dry-run=server to understand what happens when different Pod Security Standards are applied:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22313"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 9m30s\nkube-node-lease Active 9m32s\nkube-public Active 9m32s\nkube-system Active 9m32s\nlocal-path-storage Active 9m26s", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Use --dry-run=server to understand what happens when different Pod Security Standards are applied:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Use --dry-run=server to understand what happens when different Pod Security Standards are applied:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22314"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=privileged", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22315"} {"prompt": "Complete this text 
code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled", "completion": "namespace/kube-public labeled\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Baseline", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22316"} {"prompt": "Explain this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Baseline", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Baseline", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22317"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=baseline", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22318"} {"prompt": "Complete this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level \"baseline:latest\"", "completion": "Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes\nWarning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, 
privileged\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Restricted", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22319"} {"prompt": "Explain this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level \"baseline:latest\"\nWarning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes\nWarning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Restricted", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Restricted", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22320"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=restricted", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22321"} {"prompt": "Complete this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level 
\"restricted:latest\"\nWarning: coredns-7bb9c7b568-hsptc (and 1 other pod): unrestricted capabilities, runAsNonRoot != true, seccompProfile\nWarning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true", "completion": "Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nnamespace/kube-system labeled\nWarning: existing pods in namespace \"local-path-storage\" violate the new PodSecurity enforce level \"restricted:latest\"\nWarning: local-path-provisioner-d6d9f7ffc-lw9lh: allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != true, seccompProfile\nnamespace/local-path-storage labeled", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | From the previous output, you'll notice that applying the privileged Pod Security Standard shows no warnings for any namespaces. 
However, baseline and restricted standards both have warnings, specific", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22322"} {"prompt": "Explain this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level \"restricted:latest\"\nWarning: coredns-7bb9c7b568-hsptc (and 1 other pod): unrestricted capabilities, runAsNonRoot != true, seccompProfile\nWarning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true\nWarning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nnamespace/kube-system labeled\nWarning: existing pods in namespace \"local-path-storage\" violate the new PodSecurity enforce level \"restricted:latest\"\nWarning: local-path-provisioner-d6d9f7ffc-lw9lh: allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != true, seccompProfile\nnamespace/local-path-storage labeled", "completion": "This text code Apply Pod Security Standards at the Cluster Level | From the previous output, you'll notice that applying the privileged Pod Security Standard shows no warnings for any namespaces. 
However, baseline and restricted standards both have warnings, specific", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | From the previous output, you'll notice that applying the privileged Pod Security Standard shows no warnings for any namespaces. However, baseline and restricted standards both have warnings, specific", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22323"} {"prompt": "Complete this yaml code:\n\nmkdir -p /tmp/pss\ncat <<EOF > /tmp/pss/cluster-level-pss.yaml\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:", "completion": " enforce: \"baseline\"\n enforce-version: \"latest\"\n audit: \"restricted\"\n audit-version: \"latest\"\n warn: \"restricted\"\n warn-version: \"latest\"\n exemptions:\n usernames: []\n runtimeClasses: []\n namespaces: [kube-system]\nEOF", "language": "yaml", "type": "pod-definition", "context": "Apply Pod Security Standards at the Cluster Level | Configure the API server to consume this file during cluster creation:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22324"} {"prompt": "Explain this yaml code:\n\nmkdir -p /tmp/pss\ncat <<EOF > /tmp/pss/cluster-level-pss.yaml\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:\n enforce: \"baseline\"\n enforce-version: \"latest\"\n audit: \"restricted\"\n audit-version: \"latest\"\n warn: \"restricted\"\n warn-version: \"latest\"\n exemptions:\n usernames: []\n runtimeClasses: []\n namespaces: [kube-system]\nEOF", "completion": "This yaml code Apply Pod Security 
Standards at the Cluster Level | Configure the API server to consume this file during cluster creation:", "language": "yaml", "type": "pod-definition_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Configure the API server to consume this file during cluster creation:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22325"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > /tmp/pss/cluster-config.yaml\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n kubeadmConfigPatches:\n - |\n kind: ClusterConfiguration\n apiServer:\n extraArgs:\n admission-control-config-file: /etc/config/cluster-level-pss.yaml\n extraVolumes:\n - name: accf\n hostPath: /etc/config\n mountPath: /etc/config", "completion": " readOnly: false\n pathType: \"DirectoryOrCreate\"\n extraMounts:\n - hostPath: /tmp/pss\n containerPath: /etc/config\n # optional: if set, the mount is read-only.\n # default false\n readOnly: false\n # optional: if set, the mount needs SELinux relabeling.\n # default false\n selinuxRelabel: false\n # optional: set propagation mode (None, HostToContainer or Bidirectional)\n # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation\n # default None\n propagation: None\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Apply Pod Security Standards at the Cluster Level | Create a cluster that uses Pod Security Admission to apply these Pod Security Standards:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22326"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > /tmp/pss/cluster-config.yaml\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n kubeadmConfigPatches:\n - |\n kind: ClusterConfiguration\n apiServer:\n extraArgs:\n admission-control-config-file: /etc/config/cluster-level-pss.yaml\n extraVolumes:\n - name: accf\n hostPath: 
/etc/config\n mountPath: /etc/config\n readOnly: false\n pathType: \"DirectoryOrCreate\"\n extraMounts:\n - hostPath: /tmp/pss\n containerPath: /etc/config\n # optional: if set, the mount is read-only.\n # default false\n readOnly: false\n # optional: if set, the mount needs SELinux relabeling.\n # default false\n selinuxRelabel: false\n # optional: set propagation mode (None, HostToContainer or Bidirectional)\n # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation\n # default None\n propagation: None\nEOF", "completion": "This yaml code Apply Pod Security Standards at the Cluster Level | Create a cluster that uses Pod Security Admission to apply these Pod Security Standards:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Create a cluster that uses Pod Security Admission to apply these Pod Security Standards:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22327"} {"prompt": "Explain this text code:\n\nkind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml", "completion": "This text code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22328"} {"prompt": "Complete this text code:\n\nCreating cluster \"psa-with-cluster-pss\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n ✓ Preparing nodes 📦\n ✓ Writing configuration 📜\n ✓ Starting control-plane 🕹️\n ✓ Installing CNI 🔌", "completion": " ✓ Installing StorageClass 💾\nSet kubectl context to \"kind-psa-with-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-with-cluster-pss\n\nHave a 
question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Point kubectl to the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22329"} {"prompt": "Explain this text code:\n\nCreating cluster \"psa-with-cluster-pss\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n ✓ Preparing nodes 📦\n ✓ Writing configuration 📜\n ✓ Starting control-plane 🕹️\n ✓ Installing CNI 🔌\n ✓ Installing StorageClass 💾\nSet kubectl context to \"kind-psa-with-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-with-cluster-pss\n\nHave a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Point kubectl to the cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Point kubectl to the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22330"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info --context kind-psa-with-cluster-pss", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22331"} {"prompt": "Complete this text code:\n\nKubernetes control plane is running at https://127.0.0.1:63855\nCoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy", "completion": "\nTo further debug and diagnose cluster 
problems, use 'kubectl cluster-info dump'.", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Create a Pod in the default namespace:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22332"} {"prompt": "Explain this text code:\n\nKubernetes control plane is running at https://127.0.0.1:63855\nCoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Create a Pod in the default namespace:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Create a Pod in the default namespace:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22333"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx\n spec:", "completion": " containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22334"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx\n spec:\n containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "language": "yaml", "type": "pod-definition_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a 
warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22335"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22336"} {"prompt": "Explain this text code:\n\nWarning: would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"nginx\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"nginx\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or container \"nginx\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"nginx\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")\npod/nginx created", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Now delete the clusters which you created above by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Now delete the clusters which you created above by running the following command:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22337"} {"prompt": "Complete this text code:\n\nCreating cluster \"psa-wo-cluster-pss\" ...\n✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n✓ Preparing nodes 📦\n✓ Writing configuration 📜\n✓ Starting control-plane 🕹️\n✓ Installing CNI 🔌", "completion": "✓ Installing 
StorageClass 💾\nSet kubectl context to \"kind-psa-wo-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-wo-cluster-pss\n\nThanks for using kind! 😊", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22338"} {"prompt": "Explain this text code:\n\nCreating cluster \"psa-wo-cluster-pss\" ...\n✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n✓ Preparing nodes 📦\n✓ Writing configuration 📜\n✓ Starting control-plane 🕹️\n✓ Installing CNI 🔌\n✓ Installing StorageClass 💾\nSet kubectl context to \"kind-psa-wo-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-wo-cluster-pss\n\nThanks for using kind! 😊", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Set the kubectl context to the new cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Set the kubectl context to the new cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22339"} {"prompt": "Explain this shell code:\n\nkubectl cluster-info --context kind-psa-wo-cluster-pss", "completion": "This shell code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22340"} {"prompt": "Complete this text code:\n\nKubernetes control plane is running at https://127.0.0.1:61350\n", "completion": "CoreDNS is running at 
https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Get a list of namespaces in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22341"} {"prompt": "Explain this text code:\n\nKubernetes control plane is running at https://127.0.0.1:61350\n\nCoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Get a list of namespaces in the cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Get a list of namespaces in the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22342"} {"prompt": "Complete this text code:\n\nNAME STATUS AGE\ndefault Active 9m30s\nkube-node-lease Active 9m32s", "completion": "kube-public Active 9m32s\nkube-system Active 9m32s\nlocal-path-storage Active 9m26s", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Use --dry-run=server to understand what happens when different Pod Security Standards are applied:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22343"} {"prompt": "Explain this text code:\n\nNAME STATUS AGE\ndefault Active 9m30s\nkube-node-lease Active 9m32s\nkube-public Active 9m32s\nkube-system Active 9m32s\nlocal-path-storage Active 9m26s", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Use --dry-run=server to understand what happens 
when different Pod Security Standards are applied:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Use --dry-run=server to understand what happens when different Pod Security Standards are applied:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22344"} {"prompt": "Explain this shell code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=privileged", "completion": "This shell code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22345"} {"prompt": "Complete this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled", "completion": "namespace/kube-public labeled\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Baseline", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22346"} {"prompt": "Explain this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Baseline", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Baseline", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22347"} {"prompt": "Explain this shell code:\n\nkubectl label --dry-run=server --overwrite ns 
--all \\\npod-security.kubernetes.io/enforce=baseline", "completion": "This shell code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22348"} {"prompt": "Complete this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level \"baseline:latest\"", "completion": "Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes\nWarning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Restricted", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22349"} {"prompt": "Explain this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level \"baseline:latest\"\nWarning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes\nWarning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged\nnamespace/kube-system labeled\nnamespace/local-path-storage labeled", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Restricted", "language": "text", "type": "code-example_explanation", 
"context": "Apply Pod Security Standards at the Cluster Level | Restricted", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22350"} {"prompt": "Explain this shell code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=restricted", "completion": "This shell code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22351"} {"prompt": "Complete this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level \"restricted:latest\"\nWarning: coredns-7bb9c7b568-hsptc (and 1 other pod): unrestricted capabilities, runAsNonRoot != true, seccompProfile\nWarning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true", "completion": "Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nnamespace/kube-system labeled\nWarning: existing pods in namespace \"local-path-storage\" violate the new PodSecurity enforce level \"restricted:latest\"\nWarning: local-path-provisioner-d6d9f7ffc-lw9lh: allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != 
true, seccompProfile\nnamespace/local-path-storage labeled", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | From the previous output, you'll notice that applying the privileged Pod Security Standard shows no warnings for any namespaces. However, baseline and restricted standards both have warnings, specific", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22352"} {"prompt": "Explain this text code:\n\nnamespace/default labeled\nnamespace/kube-node-lease labeled\nnamespace/kube-public labeled\nWarning: existing pods in namespace \"kube-system\" violate the new PodSecurity enforce level \"restricted:latest\"\nWarning: coredns-7bb9c7b568-hsptc (and 1 other pod): unrestricted capabilities, runAsNonRoot != true, seccompProfile\nWarning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true\nWarning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nWarning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile\nnamespace/kube-system labeled\nWarning: existing pods in namespace \"local-path-storage\" violate the new PodSecurity enforce level \"restricted:latest\"\nWarning: local-path-provisioner-d6d9f7ffc-lw9lh: allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != true, seccompProfile\nnamespace/local-path-storage labeled", "completion": "This text code Apply Pod Security Standards at the Cluster Level | From the previous output, you'll notice that applying the privileged Pod Security Standard shows 
no warnings for any namespaces. However, baseline and restricted standards both have warnings, specific", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | From the previous output, you'll notice that applying the privileged Pod Security Standard shows no warnings for any namespaces. However, baseline and restricted standards both have warnings, specific", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22353"} {"prompt": "Complete this yaml code:\n\nmkdir -p /tmp/pss\ncat <<EOF > /tmp/pss/cluster-level-pss.yaml\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:", "completion": " enforce: \"baseline\"\n enforce-version: \"latest\"\n audit: \"restricted\"\n audit-version: \"latest\"\n warn: \"restricted\"\n warn-version: \"latest\"\n exemptions:\n usernames: []\n runtimeClasses: []\n namespaces: [kube-system]\nEOF", "language": "yaml", "type": "pod-definition", "context": "Apply Pod Security Standards at the Cluster Level | Configure the API server to consume this file during cluster creation:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22354"} {"prompt": "Explain this yaml code:\n\nmkdir -p /tmp/pss\ncat <<EOF > /tmp/pss/cluster-level-pss.yaml\napiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:\n enforce: \"baseline\"\n enforce-version: \"latest\"\n audit: \"restricted\"\n audit-version: \"latest\"\n warn: \"restricted\"\n warn-version: \"latest\"\n exemptions:\n usernames: []\n runtimeClasses: []\n namespaces: [kube-system]\nEOF", "completion": "This yaml 
code Apply Pod Security Standards at the Cluster Level | Configure the API server to consume this file during cluster creation:", "language": "yaml", "type": "pod-definition_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Configure the API server to consume this file during cluster creation:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22355"} {"prompt": "Complete this yaml code:\n\ncat <<EOF > /tmp/pss/cluster-config.yaml\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n kubeadmConfigPatches:\n - |\n kind: ClusterConfiguration\n apiServer:\n extraArgs:\n admission-control-config-file: /etc/config/cluster-level-pss.yaml\n extraVolumes:\n - name: accf\n hostPath: /etc/config\n mountPath: /etc/config", "completion": " readOnly: false\n pathType: \"DirectoryOrCreate\"\n extraMounts:\n - hostPath: /tmp/pss\n containerPath: /etc/config\n # optional: if set, the mount is read-only.\n # default false\n readOnly: false\n # optional: if set, the mount needs SELinux relabeling.\n # default false\n selinuxRelabel: false\n # optional: set propagation mode (None, HostToContainer or Bidirectional)\n # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation\n # default None\n propagation: None\nEOF", "language": "yaml", "type": "kubernetes-manifest", "context": "Apply Pod Security Standards at the Cluster Level | Create a cluster that uses Pod Security Admission to apply these Pod Security Standards:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22356"} {"prompt": "Explain this yaml code:\n\ncat <<EOF > /tmp/pss/cluster-config.yaml\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n kubeadmConfigPatches:\n - |\n kind: ClusterConfiguration\n apiServer:\n extraArgs:\n admission-control-config-file: /etc/config/cluster-level-pss.yaml\n extraVolumes:\n 
- name: accf\n hostPath: /etc/config\n mountPath: /etc/config\n readOnly: false\n pathType: \"DirectoryOrCreate\"\n extraMounts:\n - hostPath: /tmp/pss\n containerPath: /etc/config\n # optional: if set, the mount is read-only.\n # default false\n readOnly: false\n # optional: if set, the mount needs SELinux relabeling.\n # default false\n selinuxRelabel: false\n # optional: set propagation mode (None, HostToContainer or Bidirectional)\n # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation\n # default None\n propagation: None\nEOF", "completion": "This yaml code Apply Pod Security Standards at the Cluster Level | Create a cluster that uses Pod Security Admission to apply these Pod Security Standards:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Create a cluster that uses Pod Security Admission to apply these Pod Security Standards:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22357"} {"prompt": "Explain this shell code:\n\nkind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml", "completion": "This shell code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22358"} {"prompt": "Complete this text code:\n\nCreating cluster \"psa-with-cluster-pss\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n ✓ Preparing nodes 📦\n ✓ Writing configuration 📜\n ✓ Starting control-plane 🕹️\n ✓ Installing CNI 🔌", "completion": " ✓ Installing StorageClass 💾\nSet kubectl context to \"kind-psa-with-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context 
kind-psa-with-cluster-pss\n\nHave a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Point kubectl to the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22359"} {"prompt": "Explain this text code:\n\nCreating cluster \"psa-with-cluster-pss\" ...\n ✓ Ensuring node image (kindest/node:v1.34.0) 🖼\n ✓ Preparing nodes 📦\n ✓ Writing configuration 📜\n ✓ Starting control-plane 🕹️\n ✓ Installing CNI 🔌\n ✓ Installing StorageClass 💾\nSet kubectl context to \"kind-psa-with-cluster-pss\"\nYou can now use your cluster with:\n\nkubectl cluster-info --context kind-psa-with-cluster-pss\n\nHave a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Point kubectl to the cluster:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Point kubectl to the cluster:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22360"} {"prompt": "Explain this shell code:\n\nkubectl cluster-info --context kind-psa-with-cluster-pss", "completion": "This shell code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22361"} {"prompt": "Complete this text code:\n\nKubernetes control plane is running at https://127.0.0.1:63855\nCoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy", "completion": "\nTo 
further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "language": "text", "type": "code-example", "context": "Apply Pod Security Standards at the Cluster Level | Create a Pod in the default namespace:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22362"} {"prompt": "Explain this text code:\n\nKubernetes control plane is running at https://127.0.0.1:63855\nCoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Create a Pod in the default namespace:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Create a Pod in the default namespace:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22363"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx\n spec:", "completion": " containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22364"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx\n spec:\n containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "language": "yaml", "type": "pod-definition_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started 
normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22365"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This shell code Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "language": "shell", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22366"} {"prompt": "Explain this text code:\n\nWarning: would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"nginx\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"nginx\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or container \"nginx\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"nginx\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")\npod/nginx created", "completion": "This text code Apply Pod Security Standards at the Cluster Level | Now delete the clusters which you created above by running the following command:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | Now delete the clusters which you created above by running the following command:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22367"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info --context kind-psa-wo-cluster-pss", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": 
"bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22368"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=privileged", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22369"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=baseline", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22370"} {"prompt": "Explain this bash code:\n\nkubectl label --dry-run=server --overwrite ns --all \\\npod-security.kubernetes.io/enforce=restricted", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22371"} {"prompt": "Explain this text code:\n\nkind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml", "completion": "This text code Apply Pod Security 
Standards at the Cluster Level | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22372"} {"prompt": "Explain this bash code:\n\nkubectl cluster-info --context kind-psa-with-cluster-pss", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22373"} {"prompt": "Complete this yaml code:\n\nsecurity/example-baseline-pod.yaml\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx", "completion": " spec:\n containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22374"} {"prompt": "Explain this yaml code:\n\nsecurity/example-baseline-pod.yaml\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx\n spec:\n containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "language": "yaml", "type": "pod-definition_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": 
"k8s_00716_code_explain_22375"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx\n spec:", "completion": " containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_completion_22376"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\n kind: Pod\n metadata:\n name: nginx\n spec:\n containers:\n - image: nginx\n name: nginx\n ports:\n - containerPort: 80", "completion": "This yaml code Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "language": "yaml", "type": "pod-definition_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22377"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml", "completion": "This bash code Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "language": "bash", "type": "kubectl-command_explanation", "context": "Apply Pod Security Standards at the Cluster Level | The pod is started normally, but the output includes a warning:", "source": "https://kubernetes.io/docs/tutorials/security/cluster-level-pss/", "id": "k8s_00716_code_explain_22378"} {"prompt": "Explain this text code:\n\nmkdir github.com\ncd github.com\ngit clone git@github.com:/website.git", "completion": "This text code Reference Documentation Quickstart | Determine the base directory of your clone. 
For example, if you followed the preceding step to get the repository, your base directory is github.com/website. The remaining steps refer to your base dir", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | Determine the base directory of your clone. For example, if you followed the preceding step to get the repository, your base directory is github.com/website. The remaining steps refer to your base dir", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22379"} {"prompt": "Complete this text code:\n\nrepos:\n- name: community\n remote: https://github.com/kubernetes/community.git\n branch: master", "completion": " files:\n - src: contributors/devel/README.md\n dst: docs/imported/community/devel.md\n - src: contributors/guide/README.md\n dst: docs/imported/community/guide.md", "language": "text", "type": "code-example", "context": "Reference Documentation Quickstart | Single page Markdown documents, imported by the tool, must adhere to the Documentation Style Guide.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_completion_22380"} {"prompt": "Explain this text code:\n\nrepos:\n- name: community\n remote: https://github.com/kubernetes/community.git\n branch: master\n files:\n - src: contributors/devel/README.md\n dst: docs/imported/community/devel.md\n - src: contributors/guide/README.md\n dst: docs/imported/community/guide.md", "completion": "This text code Reference Documentation Quickstart | Single page Markdown documents, imported by the tool, must adhere to the Documentation Style Guide.", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | Single page Markdown documents, imported by the tool, must adhere to the Documentation Style Guide.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": 
"k8s_00719_code_explain_22381"} {"prompt": "Complete this text code:\n\nrepos:\n- name: reference-docs\n remote: https://github.com/kubernetes-sigs/reference-docs.git", "completion": " files:\n - src: gen-compdocs/build/kube-apiserver.md\n dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md\n ...", "language": "text", "type": "code-example", "context": "Reference Documentation Quickstart | Note that when there are many files to be copied from the same source directory to the same destination directory, you can use wildcards in the value given to src. You must provide the directory name", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_completion_22382"} {"prompt": "Explain this text code:\n\nrepos:\n- name: reference-docs\n remote: https://github.com/kubernetes-sigs/reference-docs.git\n files:\n - src: gen-compdocs/build/kube-apiserver.md\n dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md\n ...", "completion": "This text code Reference Documentation Quickstart | Note that when there are many files to be copied from the same source directory to the same destination directory, you can use wildcards in the value given to src. You must provide the directory name", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | Note that when there are many files to be copied from the same source directory to the same destination directory, you can use wildcards in the value given to src. 
You must provide the directory name", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22383"} {"prompt": "Explain this text code:\n\nfiles:\n - src: gen-compdocs/build/kubeadm*.md\n dst: content/en/docs/reference/setup-tools/kubeadm/generated/", "completion": "This text code Reference Documentation Quickstart | You can run the update-imported-docs.py tool as follows:", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | You can run the update-imported-docs.py tool as follows:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22384"} {"prompt": "Explain this text code:\n\ncd /update-imported-docs\n./update-imported-docs.py ", "completion": "This text code Reference Documentation Quickstart | For example:", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | For example:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22385"} {"prompt": "Complete this text code:\n\ncontent/en/docs/reference/command-line-tools-reference/kube-apiserver.md\ncontent/en/docs/reference/command-line-tools-reference/kube-controller-manager.md\ncontent/en/docs/reference/command-line-tools-reference/kube-proxy.md", "completion": "content/en/docs/reference/command-line-tools-reference/kube-scheduler.md\ncontent/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md\ncontent/en/docs/reference/kubectl/kubectl.md", "language": "text", "type": "code-example", "context": "Reference Documentation Quickstart | Run git add and git commit to commit the files.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_completion_22386"} {"prompt": "Explain this text 
code:\n\ncontent/en/docs/reference/command-line-tools-reference/kube-apiserver.md\ncontent/en/docs/reference/command-line-tools-reference/kube-controller-manager.md\ncontent/en/docs/reference/command-line-tools-reference/kube-proxy.md\ncontent/en/docs/reference/command-line-tools-reference/kube-scheduler.md\ncontent/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md\ncontent/en/docs/reference/kubectl/kubectl.md", "completion": "This text code Reference Documentation Quickstart | Run git add and git commit to commit the files.", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | Run git add and git commit to commit the files.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22387"} {"prompt": "Complete this text code:\n\nstatic/docs/reference/generated/kubectl/kubectl-commands.html\nstatic/docs/reference/generated/kubectl/navData.js\nstatic/docs/reference/generated/kubectl/scroll.js\nstatic/docs/reference/generated/kubectl/stylesheet.css\nstatic/docs/reference/generated/kubectl/tabvisibility.js", "completion": "static/docs/reference/generated/kubectl/node_modules/bootstrap/dist/css/bootstrap.min.css\nstatic/docs/reference/generated/kubectl/node_modules/highlight.js/styles/default.css\nstatic/docs/reference/generated/kubectl/node_modules/jquery.scrollto/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubectl/node_modules/jquery/dist/jquery.min.js\nstatic/docs/reference/generated/kubectl/css/font-awesome.min.css", "language": "text", "type": "code-example", "context": "Reference Documentation Quickstart | Run git add and git commit to commit the files.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_completion_22388"} {"prompt": "Explain this text 
code:\n\nstatic/docs/reference/generated/kubectl/kubectl-commands.html\nstatic/docs/reference/generated/kubectl/navData.js\nstatic/docs/reference/generated/kubectl/scroll.js\nstatic/docs/reference/generated/kubectl/stylesheet.css\nstatic/docs/reference/generated/kubectl/tabvisibility.js\nstatic/docs/reference/generated/kubectl/node_modules/bootstrap/dist/css/bootstrap.min.css\nstatic/docs/reference/generated/kubectl/node_modules/highlight.js/styles/default.css\nstatic/docs/reference/generated/kubectl/node_modules/jquery.scrollto/jquery.scrollTo.min.js\nstatic/docs/reference/generated/kubectl/node_modules/jquery/dist/jquery.min.js\nstatic/docs/reference/generated/kubectl/css/font-awesome.min.css", "completion": "This text code Reference Documentation Quickstart | Run git add and git commit to commit the files.", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | Run git add and git commit to commit the files.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22389"} {"prompt": "Complete this text code:\n\nstatic/docs/reference/generated/kubernetes-api/v1.34/index.html\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/navData.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/scroll.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/query.scrollTo.min.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/font-awesome.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/bootstrap.min.css", "completion": 
"static/docs/reference/generated/kubernetes-api/v1.34/css/stylesheet.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/FontAwesome.otf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.eot\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.svg\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.ttf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff2", "language": "text", "type": "code-example", "context": "Reference Documentation Quickstart | Run git add and git commit to commit the files.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_completion_22390"} {"prompt": "Explain this text code:\n\nstatic/docs/reference/generated/kubernetes-api/v1.34/index.html\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/navData.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/scroll.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/js/query.scrollTo.min.js\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/font-awesome.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/bootstrap.min.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/css/stylesheet.css\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/FontAwesome.otf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.eot\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.svg\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.ttf\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff\nstatic/docs/reference/generated/kubernetes-api/v1.34/fonts/fontawesome-webfont.woff2", "completion": "This text code Reference Documentation Quickstart | Run git add and git commit to commit the 
files.", "language": "text", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | Run git add and git commit to commit the files.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22391"} {"prompt": "Explain this shell code:\n\nmkdir github.com\ncd github.com\ngit clone git@github.com:/website.git", "completion": "This shell code Reference Documentation Quickstart | Determine the base directory of your clone. For example, if you followed the preceding step to get the repository, your base directory is github.com/website. The remaining steps refer to your base dir", "language": "shell", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | Determine the base directory of your clone. For example, if you followed the preceding step to get the repository, your base directory is github.com/website. The remaining steps refer to your base dir", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22392"} {"prompt": "Complete this yaml code:\n\nrepos:\n- name: community\n remote: https://github.com/kubernetes/community.git\n branch: master", "completion": " files:\n - src: contributors/devel/README.md\n dst: docs/imported/community/devel.md\n - src: contributors/guide/README.md\n dst: docs/imported/community/guide.md", "language": "yaml", "type": "kubernetes-manifest", "context": "Reference Documentation Quickstart | Single page Markdown documents, imported by the tool, must adhere to the Documentation Style Guide.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_completion_22393"} {"prompt": "Explain this yaml code:\n\nrepos:\n- name: community\n remote: https://github.com/kubernetes/community.git\n branch: master\n files:\n - src: contributors/devel/README.md\n dst: docs/imported/community/devel.md\n - src: 
contributors/guide/README.md\n dst: docs/imported/community/guide.md", "completion": "This yaml code Reference Documentation Quickstart | Single page Markdown documents, imported by the tool, must adhere to the Documentation Style Guide.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Reference Documentation Quickstart | Single page Markdown documents, imported by the tool, must adhere to the Documentation Style Guide.", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22394"} {"prompt": "Complete this yaml code:\n\nrepos:\n- name: reference-docs\n remote: https://github.com/kubernetes-sigs/reference-docs.git", "completion": " files:\n - src: gen-compdocs/build/kube-apiserver.md\n dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md\n ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Reference Documentation Quickstart | Note that when there are many files to be copied from the same source directory to the same destination directory, you can use wildcards in the value given to src. You must provide the directory name", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_completion_22395"} {"prompt": "Explain this yaml code:\n\nrepos:\n- name: reference-docs\n remote: https://github.com/kubernetes-sigs/reference-docs.git\n files:\n - src: gen-compdocs/build/kube-apiserver.md\n dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md\n ...", "completion": "This yaml code Reference Documentation Quickstart | Note that when there are many files to be copied from the same source directory to the same destination directory, you can use wildcards in the value given to src. 
You must provide the directory name", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Reference Documentation Quickstart | Note that when there are many files to be copied from the same source directory to the same destination directory, you can use wildcards in the value given to src. You must provide the directory name", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22396"} {"prompt": "Explain this yaml code:\n\nfiles:\n - src: gen-compdocs/build/kubeadm*.md\n dst: content/en/docs/reference/setup-tools/kubeadm/generated/", "completion": "This yaml code Reference Documentation Quickstart | You can run the update-imported-docs.py tool as follows:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Reference Documentation Quickstart | You can run the update-imported-docs.py tool as follows:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22397"} {"prompt": "Explain this shell code:\n\ncd /update-imported-docs\n./update-imported-docs.py ", "completion": "This shell code Reference Documentation Quickstart | For example:", "language": "shell", "type": "code-example_explanation", "context": "Reference Documentation Quickstart | For example:", "source": "https://kubernetes.io/docs/contribute/generate-ref-docs/quickstart/", "id": "k8s_00719_code_explain_22398"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo\nspec:\n containers:", "completion": " - name: extended-resource-demo-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 3\n limits:\n example.com/dongle: 3", "language": "yaml", "type": "pod-definition", "context": "Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": 
"k8s_00720_code_completion_22412"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo\nspec:\n containers:\n - name: extended-resource-demo-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 3\n limits:\n example.com/dongle: 3", "completion": "This yaml code Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22413"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod.yaml", "completion": "This bash code Assign Extended Resources to a Container | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Extended Resources to a Container | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22414"} {"prompt": "Complete this text code:\n\nLimits:\n example.com/dongle: 3", "completion": "Requests:\n example.com/dongle: 3", "language": "text", "type": "code-example", "context": "Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. The Container requests two dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22415"} {"prompt": "Explain this text code:\n\nLimits:\n example.com/dongle: 3\nRequests:\n example.com/dongle: 3", "completion": "This text code Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. 
The Container requests two dongles.", "language": "text", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. The Container requests two dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22416"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:", "completion": " - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 2\n limits:\n example.com/dongle: 2", "language": "yaml", "type": "pod-definition", "context": "Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22417"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:\n - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 2\n limits:\n example.com/dongle: 2", "completion": "This yaml code Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22418"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://k8s.io/examples/pods/resource/extended-resource-pod-2.yaml", "completion": "This bash code Assign Extended Resources to a Container | Describe the Pod", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Extended Resources to a Container | Describe the Pod", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22419"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status\n PodScheduled False\n...", "completion": "Events:\n ...\n ... Warning FailedScheduling pod (extended-resource-demo-2) failed to fit in any node\nfit failure summary on nodes : Insufficient example.com/dongle (1)", "language": "text", "type": "code-example", "context": "Assign Extended Resources to a Container | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22420"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status\n PodScheduled False\n...\nEvents:\n ...\n ... 
Warning FailedScheduling pod (extended-resource-demo-2) failed to fit in any node\nfit failure summary on nodes : Insufficient example.com/dongle (1)", "completion": "This text code Assign Extended Resources to a Container | View the Pod status:", "language": "text", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22421"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nextended-resource-demo-2 0/1 Pending 0 6m", "completion": "This text code Assign Extended Resources to a Container | Delete the Pods that you created for this exercise:", "language": "text", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | Delete the Pods that you created for this exercise:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22422"} {"prompt": "Explain this bash code:\n\nkubectl delete pod extended-resource-demo\nkubectl delete pod extended-resource-demo-2", "completion": "This bash code Assign Extended Resources to a Container |", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Extended Resources to a Container |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22423"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod.yaml", "completion": "This shell code Assign Extended Resources to a Container | Verify that the Pod is running:", "language": "shell", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22426"} {"prompt": "Complete this yaml code:\n\nLimits:\n example.com/dongle: 3", "completion": "Requests:\n example.com/dongle: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. 
The Container requests three dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22427"} {"prompt": "Explain this yaml code:\n\nLimits:\n example.com/dongle: 3\nRequests:\n example.com/dongle: 3", "completion": "This yaml code Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. The Container requests three dongles.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. The Container requests three dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22428"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:", "completion": " - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 2\n limits:\n example.com/dongle: 2", "language": "yaml", "type": "pod-definition", "context": "Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22429"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:\n - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 2\n limits:\n example.com/dongle: 2", "completion": "This yaml code Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "language": "yaml", "type": "pod-definition_explanation", "context": 
"Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22430"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod-2.yaml", "completion": "This shell code Assign Extended Resources to a Container | Describe the Pod", "language": "shell", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | Describe the Pod", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22431"} {"prompt": "Complete this text code:\n\nConditions:\n Type Status\n PodScheduled False\n...", "completion": "Events:\n ...\n ... Warning FailedScheduling pod (extended-resource-demo-2) failed to fit in any node\nfit failure summary on nodes : Insufficient example.com/dongle (1)", "language": "text", "type": "code-example", "context": "Assign Extended Resources to a Container | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22432"} {"prompt": "Explain this text code:\n\nConditions:\n Type Status\n PodScheduled False\n...\nEvents:\n ...\n ... 
Warning FailedScheduling pod (extended-resource-demo-2) failed to fit in any node\nfit failure summary on nodes : Insufficient example.com/dongle (1)", "completion": "This text code Assign Extended Resources to a Container | View the Pod status:", "language": "text", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | View the Pod status:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22433"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nextended-resource-demo-2 0/1 Pending 0 6m", "completion": "This text code Assign Extended Resources to a Container | Delete the Pods that you created for this exercise:", "language": "text", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | Delete the Pods that you created for this exercise:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22434"} {"prompt": "Explain this shell code:\n\nkubectl delete pod extended-resource-demo\nkubectl delete pod extended-resource-demo-2", "completion": "This shell code Assign Extended Resources to a Container |", "language": "shell", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22435"} {"prompt": "Complete this yaml code:\n\npods/resource/extended-resource-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo\nspec:\n containers:", "completion": " - name: extended-resource-demo-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 3\n limits:\n example.com/dongle: 3", "language": "yaml", "type": "pod-definition", "context": "Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 
dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22436"} {"prompt": "Explain this yaml code:\n\npods/resource/extended-resource-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo\nspec:\n containers:\n - name: extended-resource-demo-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 3\n limits:\n example.com/dongle: 3", "completion": "This yaml code Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22437"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo\nspec:\n containers:", "completion": " - name: extended-resource-demo-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 3\n limits:\n example.com/dongle: 3", "language": "yaml", "type": "pod-definition", "context": "Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22438"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo\nspec:\n containers:\n - name: extended-resource-demo-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 3\n limits:\n example.com/dongle: 3", "completion": "This yaml code Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "language": "yaml", "type": "pod-definition_explanation", 
"context": "Assign Extended Resources to a Container | In the configuration file, you can see that the Container requests 3 dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22439"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod.yaml", "completion": "This bash code Assign Extended Resources to a Container | Verify that the Pod is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Extended Resources to a Container | Verify that the Pod is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22440"} {"prompt": "Complete this text code:\n\nLimits:\n example.com/dongle: 3", "completion": "Requests:\n example.com/dongle: 3", "language": "text", "type": "code-example", "context": "Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. The Container requests three dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22441"} {"prompt": "Explain this text code:\n\nLimits:\n example.com/dongle: 3\nRequests:\n example.com/dongle: 3", "completion": "This text code Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. The Container requests three dongles.", "language": "text", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | Here is the configuration file for a Pod that has one Container. 
The Container requests three dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22442"} {"prompt": "Complete this yaml code:\n\npods/resource/extended-resource-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:", "completion": " - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 2\n limits:\n example.com/dongle: 2", "language": "yaml", "type": "pod-definition", "context": "Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22443"} {"prompt": "Explain this yaml code:\n\npods/resource/extended-resource-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:\n - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 2\n limits:\n example.com/dongle: 2", "completion": "This yaml code Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22444"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:", "completion": " - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n 
example.com/dongle: 2\n limits:\n example.com/dongle: 2", "language": "yaml", "type": "pod-definition", "context": "Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_completion_22445"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: extended-resource-demo-2\nspec:\n containers:\n - name: extended-resource-demo-2-ctr\n image: nginx\n resources:\n requests:\n example.com/dongle: 2\n limits:\n example.com/dongle: 2", "completion": "This yaml code Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "language": "yaml", "type": "pod-definition_explanation", "context": "Assign Extended Resources to a Container | Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22446"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod-2.yaml", "completion": "This bash code Assign Extended Resources to a Container | Describe the Pod", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Extended Resources to a Container | Describe the Pod", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22447"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\nextended-resource-demo-2 0/1 Pending 0 6m", "completion": "This text code Assign Extended Resources to a Container | Delete the Pods that you created for this exercise:", 
"language": "text", "type": "code-example_explanation", "context": "Assign Extended Resources to a Container | Delete the Pods that you created for this exercise:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22448"} {"prompt": "Explain this bash code:\n\nkubectl delete pod extended-resource-demo\nkubectl delete pod extended-resource-demo-2", "completion": "This bash code Assign Extended Resources to a Container |", "language": "bash", "type": "kubectl-command_explanation", "context": "Assign Extended Resources to a Container |", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/", "id": "k8s_00720_code_explain_22449"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: .\n name: crontabs.stable.example.com\nspec:\n # group name to use for REST API: /apis//\n group: stable.example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:", "completion": " type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis///\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22450"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: .\n name: crontabs.stable.example.com\nspec:\n # group name to use for REST API: /apis//\n group: stable.example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis///\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22451"} {"prompt": "Explain this text code:\n\n/apis/stable.example.com/v1/namespaces/*/crontabs/...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | This endpoint URL can then be used to create and manage custom objects. The kind of these objects will be CronTab from the spec of the CustomResourceDefinition object you created above.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This endpoint URL can then be used to create and manage custom objects. 
The kind of these objects will be CronTab from the spec of the CustomResourceDefinition object you created above.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22452"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22453"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22454"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nitems:\n- apiVersion: stable.example.com/v1\n kind: CronTab\n metadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"stable.example.com/v1\",\"kind\":\"CronTab\",\"metadata\":{\"annotations\":{},\"name\":\"my-new-cron-object\",\"namespace\":\"default\"},\"spec\":{\"cronSpec\":\"* * * * */5\",\"image\":\"my-awesome-cron-image\"}} \n creationTimestamp: \"2021-06-20T07:35:27Z\"\n generation: 1", "completion": " name: my-new-cron-object\n namespace: default\n resourceVersion: \"1326\"\n uid: 
9aab1d66-628e-41bb-a422-57b8b3b1f5a9\n spec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22455"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nitems:\n- apiVersion: stable.example.com/v1\n kind: CronTab\n metadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"stable.example.com/v1\",\"kind\":\"CronTab\",\"metadata\":{\"annotations\":{},\"name\":\"my-new-cron-object\",\"namespace\":\"default\"},\"spec\":{\"cronSpec\":\"* * * * */5\",\"image\":\"my-awesome-cron-image\"}} \n creationTimestamp: \"2021-06-20T07:35:27Z\"\n generation: 1\n name: my-new-cron-object\n namespace: default\n resourceVersion: \"1326\"\n uid: 9aab1d66-628e-41bb-a422-57b8b3b1f5a9\n spec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22456"} {"prompt": "Explain 
this bash code:\n\nkubectl delete -f resourcedefinition.yaml\nkubectl get crontabs", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22457"} {"prompt": "Explain this text code:\n\nError from server (NotFound): Unable to list {\"stable.example.com\" \"v1\" \"crontabs\"}: the server could not\nfind the requested resource (get crontabs.stable.example.com)", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22458"} {"prompt": "Complete this text code:\n\nproperties:\n foo:\n # ...", "completion": "allOf:\n- properties:\n foo:\n # ...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 2:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22459"} {"prompt": "Explain this text code:\n\nproperties:\n foo:\n # ...\nallOf:\n- properties:\n foo:\n # ...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Non-structural 
example 2:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 2:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22460"} {"prompt": "Complete this text code:\n\nallOf:\n- items:", "completion": " properties:\n foo:\n # ...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. The following would be correct:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22461"} {"prompt": "Explain this text code:\n\nallOf:\n- items:\n properties:\n foo:\n # ...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. The following would be correct:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. 
The following would be correct:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22462"} {"prompt": "Complete this text code:\n\nitems:\n properties:\n foo:\n # ...", "completion": "allOf:\n- items:\n properties:\n foo:\n # ...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22463"} {"prompt": "Explain this text code:\n\nitems:\n properties:\n foo:\n # ...\nallOf:\n- items:\n properties:\n foo:\n # ...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22464"} {"prompt": "Complete this text code:\n\nproperties:\n foo:\n pattern: \"abc\"\n metadata:\n type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\n finalizers:", "completion": " type: array\n items:\n type: string\n pattern: \"my-finalizer\"\nanyOf:\n- properties:\n bar:\n type: integer\n minimum: 42\n required: [\"bar\"]\n description: \"foo bar object\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22465"} {"prompt": "Explain this text code:\n\nproperties:\n foo:\n pattern: \"abc\"\n metadata:\n type: object\n properties:\n name:\n 
type: string\n pattern: \"^a\"\n finalizers:\n type: array\n items:\n type: string\n pattern: \"my-finalizer\"\nanyOf:\n- properties:\n bar:\n type: integer\n minimum: 42\n required: [\"bar\"]\n description: \"foo bar object\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22466"} {"prompt": "Complete this text code:\n\ntype: object\ndescription: \"foo bar object\"\nproperties:\n foo:\n type: string\n pattern: \"abc\"\n bar:\n type: integer\n metadata:", "completion": " type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\nanyOf:\n- properties:\n bar:\n minimum: 42\n required: [\"bar\"]", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22467"} {"prompt": "Explain this text code:\n\ntype: object\ndescription: \"foo bar object\"\nproperties:\n foo:\n type: string\n pattern: \"abc\"\n bar:\n type: integer\n metadata:\n type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\nanyOf:\n- properties:\n bar:\n minimum: 42\n required: [\"bar\"]", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "language": "text", "type": 
"code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22468"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n someRandomField: 42", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22469"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n someRandomField: 42", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22470"} {"prompt": "Explain this bash code:\n\nkubectl create --validate=false -f my-crontab.yaml -o yaml", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Your output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Your output is similar to:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22471"} {"prompt": "Complete this yaml code:\n\napiVersion: stable.example.com/v1\nkind: CronTab\nmetadata:\n creationTimestamp: 2017-05-31T12:56:35Z\n generation: 1\n name: my-new-cron-object", "completion": " namespace: default\n resourceVersion: \"285\"\n uid: 9423255b-4600-11e7-af6a-28d2447dc82b\nspec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22472"} {"prompt": "Explain this yaml code:\n\napiVersion: stable.example.com/v1\nkind: CronTab\nmetadata:\n creationTimestamp: 2017-05-31T12:56:35Z\n generation: 1\n name: my-new-cron-object\n namespace: default\n resourceVersion: \"285\"\n uid: 9423255b-4600-11e7-af6a-28d2447dc82b\nspec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22473"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:", "completion": " json:\n x-kubernetes-preserve-unknown-fields: true", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22474"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22475"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:\n json:", "completion": " x-kubernetes-preserve-unknown-fields: true\n type: object\n description: this is arbitrary JSON", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22476"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n description: this is arbitrary JSON", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22477"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:\n json:\n 
x-kubernetes-preserve-unknown-fields: true\n type: object\n properties:", "completion": " spec:\n type: object\n properties:\n foo:\n type: string\n bar:\n type: string", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22478"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n properties:\n spec:\n type: object\n properties:\n foo:\n type: string\n bar:\n type: string", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22479"} {"prompt": "Complete this text code:\n\njson:\n spec:\n foo: abc", "completion": " bar: def\n something: x\n status:\n something: x", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22480"} {"prompt": "Explain this text code:\n\njson:\n spec:\n foo: abc\n bar: def\n something: x\n status:\n something: x", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": 
"k8s_00721_code_explain_22481"} {"prompt": "Complete this text code:\n\njson:\n spec:\n foo: abc", "completion": " bar: def\n status:\n something: x", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22482"} {"prompt": "Explain this text code:\n\njson:\n spec:\n foo: abc\n bar: def\n status:\n something: x", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22483"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:", "completion": " foo:\n x-kubernetes-int-or-string: true", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22484"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n foo:\n x-kubernetes-int-or-string: true", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are 
partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields):", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22485"} {"prompt": "Complete this text code:\n\nx-kubernetes-int-or-string: true\nanyOf:", "completion": " - type: integer\n - type: string\n...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22486"} {"prompt": "Explain this text code:\n\nx-kubernetes-int-or-string: true\nanyOf:\n - type: integer\n - type: string\n...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | and", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22487"} {"prompt": "Complete this text code:\n\nx-kubernetes-int-or-string: true\nallOf:\n - anyOf:", "completion": " - type: integer\n - type: string\n - # ... 
zero or more\n...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With one of those specifications, both an integer and a string validate.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22488"} {"prompt": "Explain this text code:\n\nx-kubernetes-int-or-string: true\nallOf:\n - anyOf:\n - type: integer\n - type: string\n - # ... zero or more\n...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | With one of those specifications, both an integer and a string validate.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With one of those specifications, both an integer and a string validate.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22489"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:", "completion": " foo:\n x-kubernetes-embedded-resource: true\n x-kubernetes-preserve-unknown-fields: true", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo holds a complete object, e.g.:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22490"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n foo:\n x-kubernetes-embedded-resource: true\n x-kubernetes-preserve-unknown-fields: true", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo holds a complete object, 
e.g.:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22491"} {"prompt": "Complete this yaml code:\n\nfoo:\n apiVersion: v1", "completion": " kind: Pod\n spec:\n # ...", "language": "yaml", "type": "pod-definition", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22492"} {"prompt": "Explain this yaml code:\n\nfoo:\n apiVersion: v1\n kind: Pod\n spec:\n # ...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "language": "yaml", "type": "pod-definition_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22493"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab", "completion": "metadata:\n finalizers:\n - stable.example.com/finalizer", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. 
Any controller can add a finalizer to any object's list of finalizers.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22494"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n finalizers:\n - stable.example.com/finalizer", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. Any controller can add a finalizer to any object's list of finalizers.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. Any controller can add a finalizer to any object's list of finalizers.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22495"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22496"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22497"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * *\"\n image: my-awesome-cron-image\n replicas: 15", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22498"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * *\"\n image: my-awesome-cron-image\n replicas: 15", "completion": "This yaml code Extend 
the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22499"} {"prompt": "Complete this text code:\n\nThe CronTab \"my-new-cron-object\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"stable.example.com/v1\", \"kind\":\"CronTab\", \"metadata\":map[string]interface {}{\"name\":\"my-new-cron-object\", \"namespace\":\"default\", \"deletionTimestamp\":interface {}(nil), \"deletionGracePeriodSeconds\":(*int64)(nil), \"creationTimestamp\":\"2017-09-05T05:20:07Z\", \"uid\":\"e14d79e7-91f9-11e7-a598-f0761cb232d1\", \"clusterName\":\"\"}, \"spec\":map[string]interface {}{\"cronSpec\":\"* * * *\", \"image\":\"my-awesome-cron-image\", \"replicas\":15}}:\nvalidation failure list:", "completion": "spec.cronSpec in body should match '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\nspec.replicas in body should be less than or equal to 10", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22500"} {"prompt": "Explain this text code:\n\nThe CronTab \"my-new-cron-object\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"stable.example.com/v1\", \"kind\":\"CronTab\", \"metadata\":map[string]interface {}{\"name\":\"my-new-cron-object\", \"namespace\":\"default\", \"deletionTimestamp\":interface {}(nil), \"deletionGracePeriodSeconds\":(*int64)(nil), \"creationTimestamp\":\"2017-09-05T05:20:07Z\", 
\"uid\":\"e14d79e7-91f9-11e7-a598-f0761cb232d1\", \"clusterName\":\"\"}, \"spec\":map[string]interface {}{\"cronSpec\":\"* * * *\", \"image\":\"my-awesome-cron-image\", \"replicas\":15}}:\nvalidation failure list:\nspec.cronSpec in body should match '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\nspec.replicas in body should be less than or equal to 10", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22501"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22502"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 5", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": 
"k8s_00721_code_explain_22503"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-crontab.yaml\ncrontab \"my-new-cron-object\" created", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | If you are using a version of Kubernetes older than v1.30, you need to explicitly enable the CRDValidationRatcheting feature gate to use this behavior, which then applies to all CustomResourceDefinitions", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you are using a version of Kubernetes older than v1.30, you need to explicitly enable the CRDValidationRatcheting feature gate to use this behavior, which then applies to all CustomResourceDefinitions", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22504"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self.minReplicas <= self.replicas\"\n message: \"replicas should be greater than or equal to minReplicas.\"\n - rule: \"self.replicas <= self.maxReplicas\"\n message: \"replicas should be smaller than or equal to maxReplicas.\"", "completion": " properties:\n # ...\n minReplicas:\n type: integer\n replicas:\n type: integer\n maxReplicas:\n type: integer\n required:\n - minReplicas\n - replicas\n - maxReplicas", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22505"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self.minReplicas <= 
self.replicas\"\n message: \"replicas should be greater than or equal to minReplicas.\"\n - rule: \"self.replicas <= self.maxReplicas\"\n message: \"replicas should be smaller than or equal to maxReplicas.\"\n properties:\n # ...\n minReplicas:\n type: integer\n replicas:\n type: integer\n maxReplicas:\n type: integer\n required:\n - minReplicas\n - replicas\n - maxReplicas", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22506"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n minReplicas: 0\n replicas: 20\n maxReplicas: 10", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22507"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n minReplicas: 0\n replicas: 20\n maxReplicas: 10", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22508"} {"prompt": "Explain this text code:\n\nThe 
CronTab \"my-new-cron-object\" is invalid:\n* spec: Invalid value: map[string]interface {}{\"maxReplicas\":10, \"minReplicas\":0, \"replicas\":20}: replicas should be smaller than or equal to maxReplicas.", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | x-kubernetes-validations could have multiple rules. The rule under x-kubernetes-validations represents the expression which will be evaluated by CEL. The message represents the message displayed when validation fails.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | x-kubernetes-validations could have multiple rules. The rule under x-kubernetes-validations represents the expression which will be evaluated by CEL. The message represents the message displayed when validation fails.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22509"} {"prompt": "Explain this text code:\n\nThe CronTab \"my-new-cron-object\" is invalid:\n* spec: Invalid value: map[string]interface {}{\"maxReplicas\":10, \"minReplicas\":0, \"replicas\":20}: failed rule: self.replicas <= self.maxReplicas", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Validation rules are compiled when CRDs are created/updated. The request of CRDs create/update will fail if compilation of validation rules fail. Compilation process includes type checking as well.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Validation rules are compiled when CRDs are created/updated. The request of CRDs create/update will fail if compilation of validation rules fail. 
Compilation process includes type checking as well.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22510"} {"prompt": "Explain this text code:\n\nInvalid value: apiextensions.ValidationRule{Rule:\"self == true\", Message:\"\"}: compilation failed: ERROR: \\:1:6: found no matching overload for '_==_' applied to '(int, bool)'", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | no_such_field: does not contain the desired field.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | no_such_field: does not contain the desired field.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22511"} {"prompt": "Explain this text code:\n\nInvalid value: apiextensions.ValidationRule{Rule:\"self.nonExistingField > 0\", Message:\"\"}: compilation failed: ERROR: \\:1:5: undefined field 'nonExistingField'", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | invalid argument: invalid argument to macros.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | invalid argument: invalid argument to macros.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22512"} {"prompt": "Explain this text code:\n\nInvalid value: apiextensions.ValidationRule{Rule:\"has(self)\", Message:\"\"}: compilation failed: ERROR: :1:4: invalid argument to has() macro", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Validation Rules Examples:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Validation 
Rules Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22513"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n x-kubernetes-validations:\n - rule: \"self.status.availableReplicas >= self.spec.minReplicas\"\n properties:\n spec:\n type: object", "completion": " properties:\n minReplicas:\n type: integer\n # ...\n status:\n type: object\n properties:\n availableReplicas:\n type: integer", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). Null valued field", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22514"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n x-kubernetes-validations:\n - rule: \"self.status.availableReplicas >= self.spec.minReplicas\"\n properties:\n spec:\n type: object\n properties:\n minReplicas:\n type: integer\n # ...\n status:\n type: object\n properties:\n availableReplicas:\n type: integer", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). Null valued field", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). 
Null valued field", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22515"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " x-kubernetes-validations:\n - rule: \"has(self.foo)\"\n properties:\n # ...\n foo:\n type: integer", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. a map) the value of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22516"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"has(self.foo)\"\n properties:\n # ...\n foo:\n type: integer", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. a map) the value of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. 
a map) the value of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22517"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:", "completion": " - rule: \"self['xyz'].foo > 0\"\n additionalProperties:\n # ...\n type: object\n properties:\n foo:\n type: integer", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22518"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self['xyz'].foo > 0\"\n additionalProperties:\n # ...\n type: object\n properties:\n foo:\n type: integer", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22519"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n # ...", "completion": " foo:\n type: array\n x-kubernetes-validations:\n 
- rule: \"size(self) == 1\"\n items:\n type: string", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22520"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n # ...\n foo:\n type: array\n x-kubernetes-validations:\n - rule: \"size(self) == 1\"\n items:\n type: string", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22521"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n # ...\n foo:\n type: integer\n x-kubernetes-validations:\n - rule: \"self > 0\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22522"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n # ...\n foo:\n type: integer\n x-kubernetes-validations:\n - rule: \"self > 0\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "language": "text", "type": "code-example_explanation", "context": 
"Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22523"} {"prompt": "Explain this text code:\n\nx-kubernetes-validations:\n- rule: \"self.x <= self.maxLimit\"\n messageExpression: '\"x exceeded max limit of \" + string(self.maxLimit)'", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Keep in mind that CEL string concatenation (+ operator) does not auto-cast to string. If you have a non-string scalar, use the string() function to cast the scalar to a string like shown in the", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Keep in mind that CEL string concatenation (+ operator) does not auto-cast to string. If you have a non-string scalar, use the string() function to cast the scalar to a string like shown in the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22524"} {"prompt": "Explain this text code:\n\nx-kubernetes-validations:\n- rule: \"self.x <= self.maxLimit\"\n reason: \"FieldValueInvalid\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The HTTP status code returned to the caller will match the reason of the first failed validation rule. The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequi", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The HTTP status code returned to the caller will match the reason of the first failed validation rule. 
The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequi", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22525"} {"prompt": "Explain this text code:\n\nx-kubernetes-validations:\n- rule: \"self.foo.test.x <= self.maxLimit\"\n fieldPath: \".foo.test.x\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | In the example above, the validation checks the value of field x should be less than the value of maxLimit. If no fieldPath specified, when validation fails, the fieldPath would be default to wherever", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | In the example above, the validation checks the value of field x should be less than the value of maxLimit. If no fieldPath specified, when validation fails, the fieldPath would be default to wherever", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22526"} {"prompt": "Complete this text code:\n\ntype: string\nenum: [\"low\", \"medium\", \"high\"]", "completion": "x-kubernetes-validations:\n- rule: \"!(self == 'high' && oldSelf == 'low') && !(self == 'low' && oldSelf == 'high')\"\n message: cannot transition directly between 'low' and 'high'", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22527"} {"prompt": "Explain this text code:\n\ntype: string\nenum: [\"low\", \"medium\", \"high\"]\nx-kubernetes-validations:\n- rule: \"!(self == 'high' && oldSelf == 'low') && !(self == 
'low' && oldSelf == 'high')\"\n message: cannot transition directly between 'low' and 'high'", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22528"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:", "completion": " type: array\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22529"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22530"} {"prompt": "Explain this text 
code:\n\nspec.validation.openAPIV3Schema.properties[spec].properties[foo].x-kubernetes-validations[0].rule: Forbidden:\nCEL rule exceeded budget by more than 100x (try simplifying the rule, or adding maxItems, maxProperties, and\nmaxLength where arrays, maps, and strings are used)", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The rejection happens because self.all implies calling contains() on every string in foo, which in turn will check the given string to see if it contains 'a string'. Without limits, this is a very exp", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The rejection happens because self.all implies calling contains() on every string in foo, which in turn will check the given string to see if it contains 'a string'. Without limits, this is a very exp", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22531"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " maxItems: 25\n items:\n type: string\n maxLength: 10\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. 
For instance, the following rule will have the same estimat", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22532"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n maxItems: 25\n items:\n type: string\n maxLength: 10\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. For instance, the following rule will have the same estimat", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. For instance, the following rule will have the same estimat", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22533"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " maxItems: 25\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.contains('a string')\"\n maxLength: 10", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. 
A rule that would have been allowed on a non-nested", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22534"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n maxItems: 25\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.contains('a string')\"\n maxLength: 10", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. A rule that would have been allowed on a non-nested", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. 
A rule that would have been allowed on a non-nested", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22535"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:", "completion": " type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22536"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22537"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " items:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. 
Avoid nested lists and maps if possible where validation rules are used.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22538"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. Avoid nested lists and maps if possible where validation rules are used.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. Avoid nested lists and maps if possible where validation rules are used.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22539"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:", "completion": " cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n default: \"5 0 * * *\"\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n default: 1\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with 
CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22540"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n default: \"5 0 * * *\"\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n default: 1\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22541"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | leads to", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22542"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: 
CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | leads to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | leads to", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22543"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"5 0 * * *\"\n image: my-awesome-cron-image\n replicas: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22544"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"5 0 * * *\"\n image: my-awesome-cron-image\n replicas: 1", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22545"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:\n spec:\n type: object\n properties:\n foo:\n type: string", "completion": " nullable: false\n default: \"default\"\n bar:\n type: string\n nullable: true\n baz:\n type: string", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with 
CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22546"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n spec:\n type: object\n properties:\n foo:\n type: string\n nullable: false\n default: \"default\"\n bar:\n type: string\n nullable: true\n baz:\n type: string", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22547"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:", "completion": " spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n additionalPrinterColumns:\n - name: Spec\n type: string\n description: The cron spec defining the interval a CronJob is run\n jsonPath: .spec.cronSpec\n - name: Replicas\n type: integer\n description: The number of jobs launched by the CronJob\n jsonPath: .spec.replicas\n - name: Age\n type: date\n jsonPath: .metadata.creationTimestamp", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the 
CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22548"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n additionalPrinterColumns:\n - name: Spec\n type: string\n description: The cron spec defining the interval a CronJob is run\n jsonPath: .spec.cronSpec\n - name: Replicas\n type: integer\n description: The number of jobs launched by the CronJob\n jsonPath: .spec.replicas\n - name: Age\n type: date\n jsonPath: .metadata.creationTimestamp", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22549"} {"prompt": "Explain this text code:\n\nNAME SPEC REPLICAS AGE\nmy-new-cron-object * * * * * 1 7s", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Each column includes a priority field. 
Currently, the priority differentiates between columns shown in standard view or wide view (using the -o wide flag).", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Each column includes a priority field. Currently, the priority differentiates between columns shown in standard view or wide view (using the -o wide flag).", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22550"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object", "completion": " properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22551"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n 
size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22552"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resource-definition.yaml", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Define some Shirts by editing shirt-resources.yaml; for example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Define some Shirts by editing shirt-resources.yaml; for example:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22553"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:", "completion": " name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": 
"k8s_00721_code_completion_22554"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22555"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resources.yaml", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Get all the resources:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Get all the resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22556"} {"prompt": "Complete this text code:\n\nNAME COLOR SIZE\nexample1 blue S", "completion": "example2 blue M\nexample3 green M", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Fetch blue shirts (retrieve Shirts with a color of blue):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22557"} {"prompt": "Explain this text code:\n\nNAME COLOR SIZE\nexample1 blue S\nexample2 blue M\nexample3 green M", "completion": "This text code Extend the 
Kubernetes API with CustomResourceDefinitions | Fetch blue shirts (retrieve Shirts with a color of blue):", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Fetch blue shirts (retrieve Shirts with a color of blue):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22558"} {"prompt": "Explain this bash code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22559"} {"prompt": "Explain this text code:\n\nNAME COLOR SIZE\nexample1 blue S\nexample2 blue M", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Get only resources with a color of green and a size of M:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Get only resources with a color of green and a size of M:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22560"} {"prompt": "Explain this bash code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=green,spec.size=M", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22561"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n status:", "completion": " type: object\n properties:\n replicas:\n type: integer\n labelSelector:\n type: string\n # subresources describes the subresources for custom resources.\n subresources:\n # status enables the status subresource.\n status: {}\n # scale enables the scale subresource.\n scale:\n # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.\n specReplicasPath: .spec.replicas\n # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.\n statusReplicasPath: .status.replicas\n # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.\n labelSelectorPath: .status.labelSelector\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22562"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: 
object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n status:\n type: object\n properties:\n replicas:\n type: integer\n labelSelector:\n type: string\n # subresources describes the subresources for custom resources.\n subresources:\n # status enables the status subresource.\n status: {}\n # scale enables the scale subresource.\n scale:\n # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.\n specReplicasPath: .spec.replicas\n # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.\n statusReplicasPath: .status.replicas\n # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.\n labelSelectorPath: .status.labelSelector\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22563"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22564"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: 
CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 3", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22565"} {"prompt": "Explain this text code:\n\n/apis/stable.example.com/v1/namespaces/*/crontabs/status", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | and", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22566"} {"prompt": "Explain this text code:\n\n/apis/stable.example.com/v1/namespaces/*/crontabs/scale", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | A custom resource can be scaled using the kubectl scale command. For example, the following command sets .spec.replicas of the custom resource created above to 5:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | A custom resource can be scaled using the kubectl scale command. 
For example, the following command sets .spec.replicas of the custom resource created above to 5:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22567"} {"prompt": "Complete this bash code:\n\nkubectl scale --replicas=5 crontabs/my-new-cron-object\ncrontabs \"my-new-cron-object\" scaled", "completion": "\nkubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}'\n5", "language": "bash", "type": "kubectl-command", "context": "Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22568"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas=5 crontabs/my-new-cron-object\ncrontabs \"my-new-cron-object\" scaled\n\nkubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}'\n5", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22569"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " 
properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n # categories is a list of grouped resources the custom resource belongs to.\n categories:\n - all", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22570"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n # categories is a list of grouped resources the custom resource belongs to.\n categories:\n - all", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22571"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22572"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22573"} {"prompt": "Explain this text code:\n\nNAME AGE\ncrontabs/my-new-cron-object 3s", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Read about custom resources.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Read about custom resources.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22574"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.stable.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: stable.example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:", "completion": " type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n # either 
Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22575"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: <plural>.<group>\n name: crontabs.stable.example.com\nspec:\n # group name to use for REST API: /apis/<group>/<version>\n group: stable.example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis/<group>/<version>/<plural>\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22576"} {"prompt": "Explain this text code:\n\n/apis/stable.example.com/v1/namespaces/*/crontabs/...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | This endpoint URL can then be used to create and manage custom objects. The kind of these objects will be CronTab from the spec of the CustomResourceDefinition object you created above.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This endpoint URL can then be used to create and manage custom objects. 
The kind of these objects will be CronTab from the spec of the CustomResourceDefinition object you created above.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22577"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22578"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22579"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nitems:\n- apiVersion: stable.example.com/v1\n kind: CronTab\n metadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"stable.example.com/v1\",\"kind\":\"CronTab\",\"metadata\":{\"annotations\":{},\"name\":\"my-new-cron-object\",\"namespace\":\"default\"},\"spec\":{\"cronSpec\":\"* * * * */5\",\"image\":\"my-awesome-cron-image\"}} \n creationTimestamp: \"2021-06-20T07:35:27Z\"\n generation: 1", "completion": " name: my-new-cron-object\n namespace: default\n resourceVersion: \"1326\"\n uid: 
9aab1d66-628e-41bb-a422-57b8b3b1f5a9\n spec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22580"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nitems:\n- apiVersion: stable.example.com/v1\n kind: CronTab\n metadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"stable.example.com/v1\",\"kind\":\"CronTab\",\"metadata\":{\"annotations\":{},\"name\":\"my-new-cron-object\",\"namespace\":\"default\"},\"spec\":{\"cronSpec\":\"* * * * */5\",\"image\":\"my-awesome-cron-image\"}} \n creationTimestamp: \"2021-06-20T07:35:27Z\"\n generation: 1\n name: my-new-cron-object\n namespace: default\n resourceVersion: \"1326\"\n uid: 9aab1d66-628e-41bb-a422-57b8b3b1f5a9\n spec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22581"} {"prompt": "Explain 
this shell code:\n\nkubectl delete -f resourcedefinition.yaml\nkubectl get crontabs", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22582"} {"prompt": "Explain this none code:\n\nError from server (NotFound): Unable to list {\"stable.example.com\" \"v1\" \"crontabs\"}: the server could not\nfind the requested resource (get crontabs.stable.example.com)", "completion": "This none code Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "language": "none", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22583"} {"prompt": "Complete this yaml code:\n\nproperties:\n foo:\n # ...", "completion": "allOf:\n- properties:\n foo:\n # ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 2:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22584"} {"prompt": "Explain this yaml code:\n\nproperties:\n foo:\n # ...\nallOf:\n- properties:\n foo:\n # ...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | 
Non-structural example 2:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 2:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22585"} {"prompt": "Complete this yaml code:\n\nallOf:\n- items:", "completion": " properties:\n foo:\n # ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. The following would be correct:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22586"} {"prompt": "Explain this yaml code:\n\nallOf:\n- items:\n properties:\n foo:\n # ...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. The following would be correct:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. 
The following would be correct:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22587"} {"prompt": "Complete this yaml code:\n\nitems:\n properties:\n foo:\n # ...", "completion": "allOf:\n- items:\n properties:\n foo:\n # ...", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22588"} {"prompt": "Explain this yaml code:\n\nitems:\n properties:\n foo:\n # ...\nallOf:\n- items:\n properties:\n foo:\n # ...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22589"} {"prompt": "Complete this yaml code:\n\nproperties:\n foo:\n pattern: \"abc\"\n metadata:\n type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\n finalizers:", "completion": " type: array\n items:\n type: string\n pattern: \"my-finalizer\"\nanyOf:\n- properties:\n bar:\n type: integer\n minimum: 42\n required: [\"bar\"]\n description: \"foo bar object\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22590"} {"prompt": "Explain this yaml code:\n\nproperties:\n foo:\n pattern: \"abc\"\n metadata:\n type: object\n 
properties:\n name:\n type: string\n pattern: \"^a\"\n finalizers:\n type: array\n items:\n type: string\n pattern: \"my-finalizer\"\nanyOf:\n- properties:\n bar:\n type: integer\n minimum: 42\n required: [\"bar\"]\n description: \"foo bar object\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22591"} {"prompt": "Complete this yaml code:\n\ntype: object\ndescription: \"foo bar object\"\nproperties:\n foo:\n type: string\n pattern: \"abc\"\n bar:\n type: integer\n metadata:", "completion": " type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\nanyOf:\n- properties:\n bar:\n minimum: 42\n required: [\"bar\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22592"} {"prompt": "Explain this yaml code:\n\ntype: object\ndescription: \"foo bar object\"\nproperties:\n foo:\n type: string\n pattern: \"abc\"\n bar:\n type: integer\n metadata:\n type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\nanyOf:\n- properties:\n bar:\n minimum: 42\n required: [\"bar\"]", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22593"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n someRandomField: 42", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22594"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n someRandomField: 42", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22595"} {"prompt": "Explain this shell code:\n\nkubectl create --validate=false -f my-crontab.yaml -o yaml", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | Your output is similar to:", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Your output is similar to:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22596"} {"prompt": "Complete this yaml code:\n\napiVersion: stable.example.com/v1\nkind: CronTab\nmetadata:\n creationTimestamp: 2017-05-31T12:56:35Z\n generation: 1\n name: my-new-cron-object", "completion": " namespace: default\n resourceVersion: \"285\"\n uid: 9423255b-4600-11e7-af6a-28d2447dc82b\nspec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22597"} {"prompt": "Explain this yaml code:\n\napiVersion: stable.example.com/v1\nkind: CronTab\nmetadata:\n creationTimestamp: 2017-05-31T12:56:35Z\n generation: 1\n name: my-new-cron-object\n namespace: default\n resourceVersion: \"285\"\n uid: 9423255b-4600-11e7-af6a-28d2447dc82b\nspec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22598"} {"prompt": "Complete this yaml code:\n\ntype: object\nproperties:", "completion": " json:\n x-kubernetes-preserve-unknown-fields: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22599"} {"prompt": "Explain this yaml code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22600"} {"prompt": "Complete this yaml code:\n\ntype: object\nproperties:\n json:", "completion": " x-kubernetes-preserve-unknown-fields: true\n type: object\n description: this is arbitrary JSON", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22601"} {"prompt": "Explain this yaml code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n description: this is arbitrary JSON", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22602"} {"prompt": "Complete this yaml code:\n\ntype: 
object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n properties:", "completion": " spec:\n type: object\n properties:\n foo:\n type: string\n bar:\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22603"} {"prompt": "Explain this yaml code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n properties:\n spec:\n type: object\n properties:\n foo:\n type: string\n bar:\n type: string", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22604"} {"prompt": "Complete this yaml code:\n\njson:\n spec:\n foo: abc", "completion": " bar: def\n something: x\n status:\n something: x", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22605"} {"prompt": "Explain this yaml code:\n\njson:\n spec:\n foo: abc\n bar: def\n something: x\n status:\n something: x", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22606"} {"prompt": "Complete this yaml code:\n\njson:\n spec:\n foo: abc", "completion": " bar: def\n status:\n something: x", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22607"} {"prompt": "Explain this yaml code:\n\njson:\n spec:\n foo: abc\n bar: def\n status:\n something: x", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22608"} {"prompt": "Complete this yaml code:\n\ntype: object\nproperties:", "completion": " foo:\n x-kubernetes-int-or-string: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22609"} {"prompt": "Explain this yaml code:\n\ntype: object\nproperties:\n foo:\n 
x-kubernetes-int-or-string: true", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order or additional fields):", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order or additional fields):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22610"} {"prompt": "Complete this yaml code:\n\nx-kubernetes-int-or-string: true\nanyOf:", "completion": " - type: integer\n - type: string\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22611"} {"prompt": "Explain this yaml code:\n\nx-kubernetes-int-or-string: true\nanyOf:\n - type: integer\n - type: string\n...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22612"} {"prompt": "Complete this yaml code:\n\nx-kubernetes-int-or-string: true\nallOf:\n - anyOf:", "completion": " - type: integer\n - type: string\n - # ... 
zero or more\n...", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With one of those specifications, both an integer and a string validate.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22613"} {"prompt": "Explain this yaml code:\n\nx-kubernetes-int-or-string: true\nallOf:\n - anyOf:\n - type: integer\n - type: string\n - # ... zero or more\n...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | With one of those specifications, both an integer and a string validate.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With one of those specifications, both an integer and a string validate.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22614"} {"prompt": "Complete this yaml code:\n\ntype: object\nproperties:", "completion": " foo:\n x-kubernetes-embedded-resource: true\n x-kubernetes-preserve-unknown-fields: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo holds a complete object, e.g.:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22615"} {"prompt": "Explain this yaml code:\n\ntype: object\nproperties:\n foo:\n x-kubernetes-embedded-resource: true\n x-kubernetes-preserve-unknown-fields: true", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo holds a complete object, e.g.:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo 
holds a complete object, e.g.:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22616"} {"prompt": "Complete this yaml code:\n\nfoo:\n apiVersion: v1", "completion": " kind: Pod\n spec:\n # ...", "language": "yaml", "type": "pod-definition", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22617"} {"prompt": "Explain this yaml code:\n\nfoo:\n apiVersion: v1\n kind: Pod\n spec:\n # ...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "language": "yaml", "type": "pod-definition_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22618"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab", "completion": "metadata:\n finalizers:\n - stable.example.com/finalizer", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. 
Any controller can add a finalizer to any object's list of finalizers.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22619"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n finalizers:\n - stable.example.com/finalizer", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. Any controller can add a finalizer to any object's list of finalizers.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. Any controller can add a finalizer to any object's list of finalizers.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22620"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22621"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22622"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * *\"\n image: my-awesome-cron-image\n replicas: 15", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22623"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * *\"\n image: my-awesome-cron-image\n replicas: 15", "completion": "This yaml code Extend 
the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22624"} {"prompt": "Complete this console code:\n\nThe CronTab \"my-new-cron-object\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"stable.example.com/v1\", \"kind\":\"CronTab\", \"metadata\":map[string]interface {}{\"name\":\"my-new-cron-object\", \"namespace\":\"default\", \"deletionTimestamp\":interface {}(nil), \"deletionGracePeriodSeconds\":(*int64)(nil), \"creationTimestamp\":\"2017-09-05T05:20:07Z\", \"uid\":\"e14d79e7-91f9-11e7-a598-f0761cb232d1\", \"clusterName\":\"\"}, \"spec\":map[string]interface {}{\"cronSpec\":\"* * * *\", \"image\":\"my-awesome-cron-image\", \"replicas\":15}}:\nvalidation failure list:", "completion": "spec.cronSpec in body should match '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\nspec.replicas in body should be less than or equal to 10", "language": "console", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22625"} {"prompt": "Explain this console code:\n\nThe CronTab \"my-new-cron-object\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"stable.example.com/v1\", \"kind\":\"CronTab\", \"metadata\":map[string]interface {}{\"name\":\"my-new-cron-object\", \"namespace\":\"default\", \"deletionTimestamp\":interface {}(nil), \"deletionGracePeriodSeconds\":(*int64)(nil), \"creationTimestamp\":\"2017-09-05T05:20:07Z\", 
\"uid\":\"e14d79e7-91f9-11e7-a598-f0761cb232d1\", \"clusterName\":\"\"}, \"spec\":map[string]interface {}{\"cronSpec\":\"* * * *\", \"image\":\"my-awesome-cron-image\", \"replicas\":15}}:\nvalidation failure list:\nspec.cronSpec in body should match '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\nspec.replicas in body should be less than or equal to 10", "completion": "This console code Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "language": "console", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22626"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22627"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 5", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": 
"k8s_00721_code_explain_22628"} {"prompt": "Explain this shell code:\n\nkubectl apply -f my-crontab.yaml\ncrontab \"my-new-cron-object\" created", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | If you are using a version of Kubernetes older than v1.30, you need to explicitly enable the CRDValidationRatcheting feature gate to use this behavior, which then applies to all CustomResourceDefiniti", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you are using a version of Kubernetes older than v1.30, you need to explicitly enable the CRDValidationRatcheting feature gate to use this behavior, which then applies to all CustomResourceDefiniti", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22629"} {"prompt": "Complete this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self.minReplicas <= self.replicas\"\n message: \"replicas should be greater than or equal to minReplicas.\"\n - rule: \"self.replicas <= self.maxReplicas\"\n message: \"replicas should be smaller than or equal to maxReplicas.\"", "completion": " properties:\n # ...\n minReplicas:\n type: integer\n replicas:\n type: integer\n maxReplicas:\n type: integer\n required:\n - minReplicas\n - replicas\n - maxReplicas", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22630"} {"prompt": "Explain this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self.minReplicas <= 
self.replicas\"\n message: \"replicas should be greater than or equal to minReplicas.\"\n - rule: \"self.replicas <= self.maxReplicas\"\n message: \"replicas should be smaller than or equal to maxReplicas.\"\n properties:\n # ...\n minReplicas:\n type: integer\n replicas:\n type: integer\n maxReplicas:\n type: integer\n required:\n - minReplicas\n - replicas\n - maxReplicas", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22631"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n minReplicas: 0\n replicas: 20\n maxReplicas: 10", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22632"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n minReplicas: 0\n replicas: 20\n maxReplicas: 10", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22633"} {"prompt": "Explain this text 
code:\n\nThe CronTab \"my-new-cron-object\" is invalid:\n* spec: Invalid value: map[string]interface {}{\"maxReplicas\":10, \"minReplicas\":0, \"replicas\":20}: replicas should be smaller than or equal to maxReplicas.", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | x-kubernetes-validations could have multiple rules. The rule under x-kubernetes-validations represents the expression which will be evaluated by CEL. The message represents the message displayed when", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | x-kubernetes-validations could have multiple rules. The rule under x-kubernetes-validations represents the expression which will be evaluated by CEL. The message represents the message displayed when", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22634"} {"prompt": "Explain this text code:\n\nThe CronTab \"my-new-cron-object\" is invalid:\n* spec: Invalid value: map[string]interface {}{\"maxReplicas\":10, \"minReplicas\":0, \"replicas\":20}: failed rule: self.replicas <= self.maxReplicas", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Validation rules are compiled when CRDs are created/updated. The request of CRDs create/update will fail if compilation of validation rules fail. Compilation process includes type checking as well.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Validation rules are compiled when CRDs are created/updated. The request of CRDs create/update will fail if compilation of validation rules fail. 
Compilation process includes type checking as well.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22635"} {"prompt": "Explain this none code:\n\nInvalid value: apiextensions.ValidationRule{Rule:\"self == true\", Message:\"\"}: compilation failed: ERROR: \\:1:6: found no matching overload for '_==_' applied to '(int, bool)'", "completion": "This none code Extend the Kubernetes API with CustomResourceDefinitions | no_such_field: does not contain the desired field.", "language": "none", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | no_such_field: does not contain the desired field.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22636"} {"prompt": "Explain this none code:\n\nInvalid value: apiextensions.ValidationRule{Rule:\"self.nonExistingField > 0\", Message:\"\"}: compilation failed: ERROR: \\:1:5: undefined field 'nonExistingField'", "completion": "This none code Extend the Kubernetes API with CustomResourceDefinitions | invalid argument: invalid argument to macros.", "language": "none", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | invalid argument: invalid argument to macros.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22637"} {"prompt": "Explain this none code:\n\nInvalid value: apiextensions.ValidationRule{Rule:\"has(self)\", Message:\"\"}: compilation failed: ERROR: :1:4: invalid argument to has() macro", "completion": "This none code Extend the Kubernetes API with CustomResourceDefinitions | Validation Rules Examples:", "language": "none", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Validation 
Rules Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22638"} {"prompt": "Explain this text code:\n\nself.minReplicas <= self.replicas && self.replicas <= self.maxReplicas", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22639"} {"prompt": "Explain this text code:\n\n!('MY_KEY' in self.map1) || self['MY_KEY'].matches('^[a-zA-Z]*$')", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22640"} {"prompt": "Explain this text code:\n\nself.envars.filter(e, e.name == 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$'))", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22641"} {"prompt": "Explain this text code:\n\nhas(self.expired) && self.created + self.ttl < self.expired", "completion": "This text code Extend the Kubernetes API with 
CustomResourceDefinitions | Xref: Supported evaluation on CEL", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22642"} {"prompt": "Explain this text code:\n\ntype(self) == string ? self == '100%' : self == 1000", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22643"} {"prompt": "Explain this text code:\n\nsize(self.names) == size(self.details) && self.names.all(n, n in self.details)", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22644"} {"prompt": "Explain this text code:\n\nsize(self.clusters.filter(c, c.name == self.primary)) == 1", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Xref: Supported evaluation on CEL", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22645"} 
{"prompt": "Complete this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n x-kubernetes-validations:\n - rule: \"self.status.availableReplicas >= self.spec.minReplicas\"\n properties:\n spec:\n type: object", "completion": " properties:\n minReplicas:\n type: integer\n # ...\n status:\n type: object\n properties:\n availableReplicas:\n type: integer", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). Null valued field", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22646"} {"prompt": "Explain this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n x-kubernetes-validations:\n - rule: \"self.status.availableReplicas >= self.spec.minReplicas\"\n properties:\n spec:\n type: object\n properties:\n minReplicas:\n type: integer\n # ...\n status:\n type: object\n properties:\n availableReplicas:\n type: integer", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). Null valued field", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). 
Null valued field", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22647"} {"prompt": "Complete this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " x-kubernetes-validations:\n - rule: \"has(self.foo)\"\n properties:\n # ...\n foo:\n type: integer", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. a map) the values of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22648"} {"prompt": "Explain this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"has(self.foo)\"\n properties:\n # ...\n foo:\n type: integer", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. a map) the values of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. 
a map) the values of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22649"} {"prompt": "Complete this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:", "completion": " - rule: \"self['xyz'].foo > 0\"\n additionalProperties:\n # ...\n type: object\n properties:\n foo:\n type: integer", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22650"} {"prompt": "Explain this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self['xyz'].foo > 0\"\n additionalProperties:\n # ...\n type: object\n properties:\n foo:\n type: integer", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22651"} {"prompt": "Complete this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n # ...", "completion": " foo:\n type: array\n 
x-kubernetes-validations:\n - rule: \"size(self) == 1\"\n items:\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22652"} {"prompt": "Explain this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n # ...\n foo:\n type: array\n x-kubernetes-validations:\n - rule: \"size(self) == 1\"\n items:\n type: string", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22653"} {"prompt": "Complete this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n # ...\n foo:\n type: integer\n x-kubernetes-validations:\n - rule: \"self > 0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22654"} {"prompt": "Explain this yaml code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n # ...\n foo:\n type: integer\n x-kubernetes-validations:\n - rule: \"self > 0\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "language": "yaml", 
"type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22655"} {"prompt": "Explain this yaml code:\n\nx-kubernetes-validations:\n- rule: \"self.x <= self.maxLimit\"\n messageExpression: '\"x exceeded max limit of \" + string(self.maxLimit)'", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Keep in mind that CEL string concatenation (+ operator) does not auto-cast to string. If you have a non-string scalar, use the string() function to cast the scalar to a string like shown in the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Keep in mind that CEL string concatenation (+ operator) does not auto-cast to string. If you have a non-string scalar, use the string() function to cast the scalar to a string like shown in the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22656"} {"prompt": "Explain this yaml code:\n\nx-kubernetes-validations:\n- rule: \"self.x <= self.maxLimit\"\n reason: \"FieldValueInvalid\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | The HTTP status code returned to the caller will match the reason of the first failed validation rule. The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The HTTP status code returned to the caller will match the reason of the first failed validation rule. 
The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequi", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22657"} {"prompt": "Explain this yaml code:\n\nx-kubernetes-validations:\n- rule: \"self.foo.test.x <= self.maxLimit\"\n fieldPath: \".foo.test.x\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | In the example above, the validation checks the value of field x should be less than the value of maxLimit. If no fieldPath specified, when validation fails, the fieldPath would be default to wherever", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | In the example above, the validation checks the value of field x should be less than the value of maxLimit. If no fieldPath specified, when validation fails, the fieldPath would be default to wherever", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22658"} {"prompt": "Explain this text code:\n\nself.foo == \"foo\" || (oldSelf.hasValue() && oldSelf.value().foo != \"foo\")", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Functions available include:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Functions available include:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22659"} {"prompt": "Explain this json code:\n\n[oldSelf.orValue(\"\"), self].all(x, [\"OldCase1\", \"OldCase2\"].exists(case, x == case)) || [\"NewCase1\", \"NewCase2\"].exists(case, self == case) || [\"NewCase\"].has(self)", "completion": "This json code Extend the Kubernetes API with 
CustomResourceDefinitions | Functions available include:", "language": "json", "type": "api-response_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Functions available include:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22660"} {"prompt": "Explain this text code:\n\noldSelf.optMap(o, o.size()).orValue(0) < 4 || self.size() >= 4", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Functions available include:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Functions available include:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22661"} {"prompt": "Complete this yaml code:\n\ntype: string\nenum: [\"low\", \"medium\", \"high\"]", "completion": "x-kubernetes-validations:\n- rule: \"!(self == 'high' && oldSelf == 'low') && !(self == 'low' && oldSelf == 'high')\"\n message: cannot transition directly between 'low' and 'high'", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22662"} {"prompt": "Explain this yaml code:\n\ntype: string\nenum: [\"low\", \"medium\", \"high\"]\nx-kubernetes-validations:\n- rule: \"!(self == 'high' && oldSelf == 'low') && !(self == 'low' && oldSelf == 'high')\"\n message: cannot transition directly between 'low' and 'high'", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the 
following criteria:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22663"} {"prompt": "Complete this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:", "completion": " type: array\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22664"} {"prompt": "Explain this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22665"} {"prompt": "Explain this text code:\n\nspec.validation.openAPIV3Schema.properties[spec].properties[foo].x-kubernetes-validations[0].rule: Forbidden:\nCEL rule exceeded budget by more than 100x (try simplifying the rule, or adding maxItems, maxProperties, and\nmaxLength where 
arrays, maps, and strings are used)", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The rejection happens because self.all implies calling contains() on every string in foo, which in turn will check the given string to see if it contains 'a string'. Without limits, this is a very exp", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The rejection happens because self.all implies calling contains() on every string in foo, which in turn will check the given string to see if it contains 'a string'. Without limits, this is a very exp", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22666"} {"prompt": "Complete this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " maxItems: 25\n items:\n type: string\n maxLength: 10\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. 
For instance, the following rule will have the same estimat", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22667"} {"prompt": "Explain this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n maxItems: 25\n items:\n type: string\n maxLength: 10\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. For instance, the following rule will have the same estimat", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. For instance, the following rule will have the same estimat", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22668"} {"prompt": "Complete this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " maxItems: 25\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.contains('a string')\"\n maxLength: 10", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. 
A rule that would have been allowed on a non-nested", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22669"} {"prompt": "Explain this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n maxItems: 25\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.contains('a string')\"\n maxLength: 10", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. A rule that would have been allowed on a non-nested", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. 
A rule that would have been allowed on a non-nested", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22670"} {"prompt": "Complete this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:", "completion": " type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22671"} {"prompt": "Explain this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22672"} {"prompt": "Complete this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " items:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls 
self.all. Avoid nested lists and maps if possible where validation rules are used.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22673"} {"prompt": "Explain this yaml code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. Avoid nested lists and maps if possible where validation rules are used.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. Avoid nested lists and maps if possible where validation rules are used.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22674"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:", "completion": " cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n default: \"5 0 * * *\"\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n default: 1\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with 
CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22675"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n default: \"5 0 * * *\"\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n default: 1\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22676"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | leads to", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22677"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: 
CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | leads to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | leads to", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22678"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"5 0 * * *\"\n image: my-awesome-cron-image\n replicas: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22679"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"5 0 * * *\"\n image: my-awesome-cron-image\n replicas: 1", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22680"} {"prompt": "Complete this yaml code:\n\ntype: object\nproperties:\n spec:\n type: object\n properties:\n foo:\n type: string", "completion": " nullable: false\n default: \"default\"\n bar:\n type: string\n nullable: true\n baz:\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API 
with CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22681"} {"prompt": "Explain this yaml code:\n\ntype: object\nproperties:\n spec:\n type: object\n properties:\n foo:\n type: string\n nullable: false\n default: \"default\"\n bar:\n type: string\n nullable: true\n baz:\n type: string", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22682"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:", "completion": " spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n additionalPrinterColumns:\n - name: Spec\n type: string\n description: The cron spec defining the interval a CronJob is run\n jsonPath: .spec.cronSpec\n - name: Replicas\n type: integer\n description: The number of jobs launched by the CronJob\n jsonPath: .spec.replicas\n - name: Age\n type: date\n jsonPath: .metadata.creationTimestamp", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the 
CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22683"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n additionalPrinterColumns:\n - name: Spec\n type: string\n description: The cron spec defining the interval a CronJob is run\n jsonPath: .spec.cronSpec\n - name: Replicas\n type: integer\n description: The number of jobs launched by the CronJob\n jsonPath: .spec.replicas\n - name: Age\n type: date\n jsonPath: .metadata.creationTimestamp", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22684"} {"prompt": "Explain this text code:\n\nNAME SPEC REPLICAS AGE\nmy-new-cron-object * * * * * 1 7s", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Each column includes a priority field. 
Currently, the priority differentiates between columns shown in standard view or wide view (using the -o wide flag).", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Each column includes a priority field. Currently, the priority differentiates between columns shown in standard view or wide view (using the -o wide flag).", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22685"} {"prompt": "Explain this text code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22686"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object", "completion": " properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22687"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22688"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resource-definition.yaml", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | Define some Shirts by editing shirt-resources.yaml; for example:", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Define some Shirts by editing shirt-resources.yaml; for example:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22689"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: stable.example.com/v1\nkind: 
Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:", "completion": " name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22690"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22691"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resources.yaml", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | Get all the resources:", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Get all the resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22692"} {"prompt": "Complete this text 
code:\n\nNAME COLOR SIZE\nexample1 blue S", "completion": "example2 blue M\nexample3 green M", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Fetch blue shirts (retrieve Shirts with a color of blue):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22693"} {"prompt": "Explain this text code:\n\nNAME COLOR SIZE\nexample1 blue S\nexample2 blue M\nexample3 green M", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Fetch blue shirts (retrieve Shirts with a color of blue):", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Fetch blue shirts (retrieve Shirts with a color of blue):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22694"} {"prompt": "Explain this shell code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22695"} {"prompt": "Explain this text code:\n\nNAME COLOR SIZE\nexample1 blue S\nexample2 blue M", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Get only resources with a color of green and a size of M:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Get only resources with a color of green and a size of M:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22696"} {"prompt": "Explain this shell code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=green,spec.size=M", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22697"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n status:", "completion": " type: object\n properties:\n replicas:\n type: integer\n labelSelector:\n type: string\n # subresources describes the subresources for custom resources.\n subresources:\n # status enables the status subresource.\n status: {}\n # scale enables the scale subresource.\n scale:\n # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.\n specReplicasPath: .spec.replicas\n # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.\n statusReplicasPath: .status.replicas\n # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.\n labelSelectorPath: .status.labelSelector\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22698"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n status:\n type: object\n properties:\n replicas:\n type: integer\n labelSelector:\n type: string\n # subresources describes the subresources for custom resources.\n subresources:\n # status enables the status subresource.\n status: {}\n # scale enables the scale subresource.\n scale:\n # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.\n specReplicasPath: .spec.replicas\n # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.\n statusReplicasPath: .status.replicas\n # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.\n labelSelectorPath: .status.labelSelector\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22699"} {"prompt": "Complete this yaml code:\n\napiVersion: 
\"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22700"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 3", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22701"} {"prompt": "Explain this none code:\n\n/apis/stable.example.com/v1/namespaces/*/crontabs/status", "completion": "This none code Extend the Kubernetes API with CustomResourceDefinitions | and", "language": "none", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22702"} {"prompt": "Explain this none code:\n\n/apis/stable.example.com/v1/namespaces/*/crontabs/scale", "completion": "This none code Extend the Kubernetes API with CustomResourceDefinitions | A custom resource can be scaled using the kubectl scale command. 
For example, the following command sets .spec.replicas of the custom resource created above to 5:", "language": "none", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | A custom resource can be scaled using the kubectl scale command. For example, the following command sets .spec.replicas of the custom resource created above to 5:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22703"} {"prompt": "Complete this shell code:\n\nkubectl scale --replicas=5 crontabs/my-new-cron-object\ncrontabs \"my-new-cron-object\" scaled", "completion": "\nkubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}'\n5", "language": "shell", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22704"} {"prompt": "Explain this shell code:\n\nkubectl scale --replicas=5 crontabs/my-new-cron-object\ncrontabs \"my-new-cron-object\" scaled\n\nkubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}'\n5", "completion": "This shell code Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "language": "shell", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22705"} {"prompt": "Complete this yaml code:\n\napiVersion: 
apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n # categories is a list of grouped resources the custom resource belongs to.\n categories:\n - all", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22706"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n # categories is a list of grouped resources the custom resource belongs to.\n categories:\n - all", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22707"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: 
CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22708"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22709"} {"prompt": "Explain this none code:\n\nNAME AGE\ncrontabs/my-new-cron-object 3s", "completion": "This none code Extend the Kubernetes API with CustomResourceDefinitions | Read about custom resources.", "language": "none", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Read about custom resources.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22710"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: .\n name: crontabs.stable.example.com\nspec:\n # group name to use for REST API: /apis//\n group: stable.example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One 
and only one version must be marked as the storage version.\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:", "completion": " type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis///\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22711"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n # name must match the spec fields below, and be in the form: .\n name: crontabs.stable.example.com\nspec:\n # group name to use for REST API: /apis//\n group: stable.example.com\n # list of versions supported by this CustomResourceDefinition\n versions:\n - name: v1\n # Each version can be enabled/disabled by Served flag.\n served: true\n # One and only one version must be marked as the storage version.\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n # either Namespaced or Cluster\n scope: Namespaced\n names:\n # plural name to be used in the URL: /apis///\n plural: crontabs\n # singular name to be used as an alias on the CLI and for display\n singular: crontab\n # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n kind: CronTab\n # shortNames allow shorter string to match your resource on the CLI\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22712"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22713"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22714"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nitems:\n- apiVersion: stable.example.com/v1\n kind: CronTab\n metadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n 
{\"apiVersion\":\"stable.example.com/v1\",\"kind\":\"CronTab\",\"metadata\":{\"annotations\":{},\"name\":\"my-new-cron-object\",\"namespace\":\"default\"},\"spec\":{\"cronSpec\":\"* * * * */5\",\"image\":\"my-awesome-cron-image\"}} \n creationTimestamp: \"2021-06-20T07:35:27Z\"\n generation: 1", "completion": " name: my-new-cron-object\n namespace: default\n resourceVersion: \"1326\"\n uid: 9aab1d66-628e-41bb-a422-57b8b3b1f5a9\n spec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22715"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nitems:\n- apiVersion: stable.example.com/v1\n kind: CronTab\n metadata:\n annotations:\n kubectl.kubernetes.io/last-applied-configuration: |\n {\"apiVersion\":\"stable.example.com/v1\",\"kind\":\"CronTab\",\"metadata\":{\"annotations\":{},\"name\":\"my-new-cron-object\",\"namespace\":\"default\"},\"spec\":{\"cronSpec\":\"* * * * */5\",\"image\":\"my-awesome-cron-image\"}} \n creationTimestamp: \"2021-06-20T07:35:27Z\"\n generation: 1\n name: my-new-cron-object\n namespace: default\n resourceVersion: \"1326\"\n uid: 9aab1d66-628e-41bb-a422-57b8b3b1f5a9\n spec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "language": "yaml", "type": 
"kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22716"} {"prompt": "Explain this bash code:\n\nkubectl delete -f resourcedefinition.yaml\nkubectl get crontabs", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you later recreate the same CustomResourceDefinition, it will start out empty.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22717"} {"prompt": "Complete this text code:\n\nproperties:\n foo:\n # ...", "completion": "allOf:\n- properties:\n foo:\n # ...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 2:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22718"} {"prompt": "Explain this text code:\n\nproperties:\n foo:\n # ...\nallOf:\n- properties:\n foo:\n # ...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 2:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 2:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22719"} {"prompt": "Complete 
this text code:\n\nallOf:\n- items:", "completion": " properties:\n foo:\n # ...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. The following would be correct:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22720"} {"prompt": "Explain this text code:\n\nallOf:\n- items:\n properties:\n foo:\n # ...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. The following would be correct:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | conflicts with rule 2. The following would be correct:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22721"} {"prompt": "Complete this text code:\n\nitems:\n properties:\n foo:\n # ...", "completion": "allOf:\n- items:\n properties:\n foo:\n # ...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22722"} {"prompt": "Explain this text code:\n\nitems:\n properties:\n foo:\n # ...\nallOf:\n- items:\n properties:\n foo:\n # ...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Non-structural example 3:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22723"} {"prompt": "Complete this text 
code:\n\nproperties:\n foo:\n pattern: \"abc\"\n metadata:\n type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\n finalizers:", "completion": " type: array\n items:\n type: string\n pattern: \"my-finalizer\"\nanyOf:\n- properties:\n bar:\n type: integer\n minimum: 42\n required: [\"bar\"]\n description: \"foo bar object\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22724"} {"prompt": "Explain this text code:\n\nproperties:\n foo:\n pattern: \"abc\"\n metadata:\n type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\n finalizers:\n type: array\n items:\n type: string\n pattern: \"my-finalizer\"\nanyOf:\n- properties:\n bar:\n type: integer\n minimum: 42\n required: [\"bar\"]\n description: \"foo bar object\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is not a structural schema because of the following violations:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22725"} {"prompt": "Complete this text code:\n\ntype: object\ndescription: \"foo bar object\"\nproperties:\n foo:\n type: string\n pattern: \"abc\"\n bar:\n type: integer\n metadata:", "completion": " type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\nanyOf:\n- properties:\n bar:\n minimum: 42\n required: [\"bar\"]", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Violations of the 
structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22726"} {"prompt": "Explain this text code:\n\ntype: object\ndescription: \"foo bar object\"\nproperties:\n foo:\n type: string\n pattern: \"abc\"\n bar:\n type: integer\n metadata:\n type: object\n properties:\n name:\n type: string\n pattern: \"^a\"\nanyOf:\n- properties:\n bar:\n minimum: 42\n required: [\"bar\"]", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Violations of the structural schema rules are reported in the NonStructural condition in the CustomResourceDefinition.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22727"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n someRandomField: 42", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22728"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n someRandomField: 42", "completion": "This yaml code Extend the Kubernetes API with 
CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22729"} {"prompt": "Explain this bash code:\n\nkubectl create --validate=false -f my-crontab.yaml -o yaml", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Your output is similar to:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Your output is similar to:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22730"} {"prompt": "Complete this yaml code:\n\napiVersion: stable.example.com/v1\nkind: CronTab\nmetadata:\n creationTimestamp: 2017-05-31T12:56:35Z\n generation: 1\n name: my-new-cron-object", "completion": " namespace: default\n resourceVersion: \"285\"\n uid: 9423255b-4600-11e7-af6a-28d2447dc82b\nspec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22731"} {"prompt": "Explain this yaml code:\n\napiVersion: stable.example.com/v1\nkind: CronTab\nmetadata:\n creationTimestamp: 2017-05-31T12:56:35Z\n generation: 1\n name: my-new-cron-object\n namespace: default\n resourceVersion: \"285\"\n uid: 9423255b-4600-11e7-af6a-28d2447dc82b\nspec:\n cronSpec: '* * * * */5'\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Notice that the 
field someRandomField was pruned.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Notice that the field someRandomField was pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22732"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:", "completion": " json:\n x-kubernetes-preserve-unknown-fields: true", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22733"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The field json can store any JSON value, without anything being pruned.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22734"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:\n json:", "completion": " x-kubernetes-preserve-unknown-fields: true\n type: object\n description: this is arbitrary JSON", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22735"} {"prompt": 
"Explain this text code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n description: this is arbitrary JSON", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, only object type values are allowed.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22736"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n properties:", "completion": " spec:\n type: object\n properties:\n foo:\n type: string\n bar:\n type: string", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22737"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n json:\n x-kubernetes-preserve-unknown-fields: true\n type: object\n properties:\n spec:\n type: object\n properties:\n foo:\n type: string\n bar:\n type: string", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this, the value:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22738"} {"prompt": "Complete this text code:\n\njson:\n spec:\n foo: abc", "completion": " bar: def\n something: x\n status:\n something: x", "language": "text", "type": "code-example", "context": 
"Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22739"} {"prompt": "Explain this text code:\n\njson:\n spec:\n foo: abc\n bar: def\n something: x\n status:\n something: x", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | is pruned to:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22740"} {"prompt": "Complete this text code:\n\njson:\n spec:\n foo: abc", "completion": " bar: def\n status:\n something: x", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22741"} {"prompt": "Explain this text code:\n\njson:\n spec:\n foo: abc\n bar: def\n status:\n something: x", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This means that the something field in the specified spec object is pruned, but everything outside is not.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22742"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:", "completion": 
" foo:\n x-kubernetes-int-or-string: true", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22743"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n foo:\n x-kubernetes-int-or-string: true", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields):", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Also those nodes are partially excluded from rule 3 in the sense that the following two patterns are allowed (exactly those, without variations in order to additional fields):", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22744"} {"prompt": "Complete this text code:\n\nx-kubernetes-int-or-string: true\nanyOf:", "completion": " - type: integer\n - type: string\n...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22745"} {"prompt": "Explain this text code:\n\nx-kubernetes-int-or-string: true\nanyOf:\n - type: integer\n - type: string\n...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | and", "language": "text", "type": 
"code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22746"} {"prompt": "Complete this text code:\n\nx-kubernetes-int-or-string: true\nallOf:\n - anyOf:", "completion": " - type: integer\n - type: string\n - # ... zero or more\n...", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With one of those specification, both an integer and a string validate.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22747"} {"prompt": "Explain this text code:\n\nx-kubernetes-int-or-string: true\nallOf:\n - anyOf:\n - type: integer\n - type: string\n - # ... zero or more\n...", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | With one of those specification, both an integer and a string validate.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With one of those specification, both an integer and a string validate.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22748"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:", "completion": " foo:\n x-kubernetes-embedded-resource: true\n x-kubernetes-preserve-unknown-fields: true", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo holds a complete object, e.g.:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22749"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n 
foo:\n x-kubernetes-embedded-resource: true\n x-kubernetes-preserve-unknown-fields: true", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo holds a complete object, e.g.:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Here, the field foo holds a complete object, e.g.:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22750"} {"prompt": "Complete this yaml code:\n\nfoo:\n apiVersion: v1", "completion": " kind: Pod\n spec:\n # ...", "language": "yaml", "type": "pod-definition", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22751"} {"prompt": "Explain this yaml code:\n\nfoo:\n apiVersion: v1\n kind: Pod\n spec:\n # ...", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "language": "yaml", "type": "pod-definition_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Because x-kubernetes-preserve-unknown-fields: true is specified alongside, nothing is pruned. 
The use of x-kubernetes-preserve-unknown-fields: true is optional though.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22752"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab", "completion": "metadata:\n finalizers:\n - stable.example.com/finalizer", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. Any controller can add a finalizer to any object's list of finalizers.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22753"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n finalizers:\n - stable.example.com/finalizer", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. Any controller can add a finalizer to any object's list of finalizers.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Identifiers of custom finalizers consist of a domain name, a forward slash and the name of the finalizer. 
Any controller can add a finalizer to any object's list of finalizers.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22754"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22755"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22756"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * *\"\n image: my-awesome-cron-image\n replicas: 15", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22757"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * *\"\n image: my-awesome-cron-image\n replicas: 15", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and attempt to create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22758"} {"prompt": "Complete this text code:\n\nThe CronTab \"my-new-cron-object\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"stable.example.com/v1\", \"kind\":\"CronTab\", \"metadata\":map[string]interface {}{\"name\":\"my-new-cron-object\", \"namespace\":\"default\", \"deletionTimestamp\":interface {}(nil), \"deletionGracePeriodSeconds\":(*int64)(nil), \"creationTimestamp\":\"2017-09-05T05:20:07Z\", \"uid\":\"e14d79e7-91f9-11e7-a598-f0761cb232d1\", \"clusterName\":\"\"}, \"spec\":map[string]interface 
{}{\"cronSpec\":\"* * * *\", \"image\":\"my-awesome-cron-image\", \"replicas\":15}}:\nvalidation failure list:", "completion": "spec.cronSpec in body should match '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\nspec.replicas in body should be less than or equal to 10", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22759"} {"prompt": "Explain this text code:\n\nThe CronTab \"my-new-cron-object\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"stable.example.com/v1\", \"kind\":\"CronTab\", \"metadata\":map[string]interface {}{\"name\":\"my-new-cron-object\", \"namespace\":\"default\", \"deletionTimestamp\":interface {}(nil), \"deletionGracePeriodSeconds\":(*int64)(nil), \"creationTimestamp\":\"2017-09-05T05:20:07Z\", \"uid\":\"e14d79e7-91f9-11e7-a598-f0761cb232d1\", \"clusterName\":\"\"}, \"spec\":map[string]interface {}{\"cronSpec\":\"* * * *\", \"image\":\"my-awesome-cron-image\", \"replicas\":15}}:\nvalidation failure list:\nspec.cronSpec in body should match '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\nspec.replicas in body should be less than or equal to 10", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the fields contain valid values, the object creation request is accepted.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22760"} {"prompt": "Complete this yaml code:\n\napiVersion: 
\"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 5", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22761"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 5", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22762"} {"prompt": "Explain this bash code:\n\nkubectl apply -f my-crontab.yaml\ncrontab \"my-new-cron-object\" created", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | If you are using a version of Kubernetes older than v1.30, you need to explicitly enable the CRDValidationRatcheting feature gate to use this behavior, which then applies to all CustomResourceDefiniti", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If you are using a version of Kubernetes older than v1.30, you need to explicitly enable the CRDValidationRatcheting feature gate to use this behavior, which then applies to all CustomResourceDefiniti", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22763"} {"prompt": "Complete this 
text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self.minReplicas <= self.replicas\"\n message: \"replicas should be greater than or equal to minReplicas.\"\n - rule: \"self.replicas <= self.maxReplicas\"\n message: \"replicas should be smaller than or equal to maxReplicas.\"", "completion": " properties:\n # ...\n minReplicas:\n type: integer\n replicas:\n type: integer\n maxReplicas:\n type: integer\n required:\n - minReplicas\n - replicas\n - maxReplicas", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22764"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self.minReplicas <= self.replicas\"\n message: \"replicas should be greater than or equal to minReplicas.\"\n - rule: \"self.replicas <= self.maxReplicas\"\n message: \"replicas should be smaller than or equal to maxReplicas.\"\n properties:\n # ...\n minReplicas:\n type: integer\n replicas:\n type: integer\n maxReplicas:\n type: integer\n required:\n - minReplicas\n - replicas\n - maxReplicas", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | will reject a request to create this custom resource:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22765"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: 
CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n minReplicas: 0\n replicas: 20\n maxReplicas: 10", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22766"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n minReplicas: 0\n replicas: 20\n maxReplicas: 10", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | with the response:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22767"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n x-kubernetes-validations:\n - rule: \"self.status.availableReplicas >= self.spec.minReplicas\"\n properties:\n spec:\n type: object", "completion": " properties:\n minReplicas:\n type: integer\n # ...\n status:\n type: object\n properties:\n availableReplicas:\n type: integer", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). 
Null valued field", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22768"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n x-kubernetes-validations:\n - rule: \"self.status.availableReplicas >= self.spec.minReplicas\"\n properties:\n spec:\n type: object\n properties:\n minReplicas:\n type: integer\n # ...\n status:\n type: object\n properties:\n availableReplicas:\n type: integer", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). Null valued field", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field). Null valued field", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22769"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " x-kubernetes-validations:\n - rule: \"has(self.foo)\"\n properties:\n # ...\n foo:\n type: integer", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. 
a map) the values of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22770"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"has(self.foo)\"\n properties:\n # ...\n foo:\n type: integer", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. a map) the values of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an object with additionalProperties (i.e. a map) the values of the map are accessible via self[mapKey], map containment can be checked via mapKey in self and all entries of the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22771"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:", "completion": " - rule: \"self['xyz'].foo > 0\"\n additionalProperties:\n # ...\n type: object\n properties:\n foo:\n type: integer", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22772"} {"prompt": "Explain this text code:\n\n# ...\n 
openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n x-kubernetes-validations:\n - rule: \"self['xyz'].foo > 0\"\n additionalProperties:\n # ...\n type: object\n properties:\n foo:\n type: integer", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22773"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n # ...", "completion": " foo:\n type: array\n x-kubernetes-validations:\n - rule: \"size(self) == 1\"\n items:\n type: string", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22774"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n # ...\n foo:\n type: array\n x-kubernetes-validations:\n - rule: \"size(self) == 1\"\n items:\n type: string", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If the Rule is scoped to a scalar, self is bound to the scalar value.", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22775"} {"prompt": "Complete this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n # ...\n foo:\n type: integer\n x-kubernetes-validations:\n - rule: \"self > 0\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22776"} {"prompt": "Explain this text code:\n\n# ...\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n # ...\n foo:\n type: integer\n x-kubernetes-validations:\n - rule: \"self > 0\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Examples:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22777"} {"prompt": "Explain this text code:\n\nx-kubernetes-validations:\n- rule: \"self.x <= self.maxLimit\"\n messageExpression: '\"x exceeded max limit of \" + string(self.maxLimit)'", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Keep in mind that CEL string concatenation (+ operator) does not auto-cast to string. If you have a non-string scalar, use the string() function to cast the scalar to a string like shown in the", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Keep in mind that CEL string concatenation (+ operator) does not auto-cast to string. 
If you have a non-string scalar, use the string() function to cast the scalar to a string like shown in the", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22778"} {"prompt": "Explain this text code:\n\nx-kubernetes-validations:\n- rule: \"self.x <= self.maxLimit\"\n reason: \"FieldValueInvalid\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The HTTP status code returned to the caller will match the reason of the first failed validation rule. The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequi", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The HTTP status code returned to the caller will match the reason of the first failed validation rule. The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequi", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22779"} {"prompt": "Explain this text code:\n\nx-kubernetes-validations:\n- rule: \"self.foo.test.x <= self.maxLimit\"\n fieldPath: \".foo.test.x\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | In the example above, the validation checks that the value of field x is less than or equal to the value of maxLimit. If no fieldPath is specified, when validation fails, the fieldPath would default to wherever", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | In the example above, the validation checks that the value of field x is less than or equal to the value of maxLimit. 
If no fieldPath is specified, when validation fails, the fieldPath would default to wherever", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22780"} {"prompt": "Complete this text code:\n\ntype: string\nenum: [\"low\", \"medium\", \"high\"]", "completion": "x-kubernetes-validations:\n- rule: \"!(self == 'high' && oldSelf == 'low') && !(self == 'low' && oldSelf == 'high')\"\n message: cannot transition directly between 'low' and 'high'", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22781"} {"prompt": "Explain this text code:\n\ntype: string\nenum: [\"low\", \"medium\", \"high\"]\nx-kubernetes-validations:\n- rule: \"!(self == 'high' && oldSelf == 'low') && !(self == 'low' && oldSelf == 'high')\"\n message: cannot transition directly between 'low' and 'high'", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Unlike other rules, transition rules apply only to operations meeting the following criteria:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22782"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:", "completion": " type: array\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "language": "text", "type": "code-example", 
"context": "Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22783"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | then the API server rejects this rule on validation budget grounds with error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22784"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " maxItems: 25\n items:\n type: string\n maxLength: 10\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. 
For instance, the following rule will have the same estimat", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22785"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n maxItems: 25\n items:\n type: string\n maxLength: 10\n x-kubernetes-validations:\n - rule: \"self.all(x, x.contains('a string'))\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. For instance, the following rule will have the same estimat", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | The cost estimation system takes into account how many times the rule will be executed in addition to the estimated cost of the rule itself. For instance, the following rule will have the same estimat", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22786"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " maxItems: 25\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.contains('a string')\"\n maxLength: 10", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. 
A rule that would have been allowed on a non-nested", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22787"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n maxItems: 25\n items:\n type: string\n x-kubernetes-validations:\n - rule: \"self.contains('a string')\"\n maxLength: 10", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. A rule that would have been allowed on a non-nested", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | If a list inside of a list has a validation rule that uses self.all, that is significantly more expensive than a non-nested list with the same rule. 
A rule that would have been allowed on a non-nested", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22788"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:", "completion": " type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22789"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | But the same rule on the following schema (with a nested array added) produces a validation error:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22790"} {"prompt": "Complete this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array", "completion": " items:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. 
Avoid nested lists and maps if possible where validation rules are used.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22791"} {"prompt": "Explain this text code:\n\nopenAPIV3Schema:\n type: object\n properties:\n foo:\n type: array\n items:\n type: array\n items:\n type: integer\n x-kubernetes-validations:\n - rule: \"self.all(x, x == 5)\"", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. Avoid nested lists and maps if possible where validation rules are used.", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | This is because each item of foo is itself an array, and each subarray in turn calls self.all. Avoid nested lists and maps if possible where validation rules are used.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22792"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:", "completion": " cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n default: \"5 0 * * *\"\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n default: 1\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with 
CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22793"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n # openAPIV3Schema is the schema for validating custom objects.\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n pattern: '^(\\d+|\\*)(/\\d+)?(\\s+(\\d+|\\*)(/\\d+)?){4}$'\n default: \"5 0 * * *\"\n image:\n type: string\n replicas:\n type: integer\n minimum: 1\n maximum: 10\n default: 1\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | With this both cronSpec and replicas are defaulted:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22794"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | leads to", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22795"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: 
CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | leads to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | leads to", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22796"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"5 0 * * *\"\n image: my-awesome-cron-image\n replicas: 1", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22797"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"5 0 * * *\"\n image: my-awesome-cron-image\n replicas: 1", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Defaulting happens on the object", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22798"} {"prompt": "Complete this text code:\n\ntype: object\nproperties:\n spec:\n type: object\n properties:\n foo:\n type: string", "completion": " nullable: false\n default: \"default\"\n bar:\n type: string\n nullable: true\n baz:\n type: string", "language": "text", "type": "code-example", "context": "Extend the Kubernetes API with 
CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22799"} {"prompt": "Explain this text code:\n\ntype: object\nproperties:\n spec:\n type: object\n properties:\n foo:\n type: string\n nullable: false\n default: \"default\"\n bar:\n type: string\n nullable: true\n baz:\n type: string", "completion": "This text code Extend the Kubernetes API with CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "language": "text", "type": "code-example_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | creating an object with null values for foo and bar and baz", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22800"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:", "completion": " spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n additionalPrinterColumns:\n - name: Spec\n type: string\n description: The cron spec defining the interval a CronJob is run\n jsonPath: .spec.cronSpec\n - name: Replicas\n type: integer\n description: The number of jobs launched by the CronJob\n jsonPath: .spec.replicas\n - name: Age\n type: date\n jsonPath: .metadata.creationTimestamp", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the 
CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22801"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n additionalPrinterColumns:\n - name: Spec\n type: string\n description: The cron spec defining the interval a CronJob is run\n jsonPath: .spec.cronSpec\n - name: Replicas\n type: integer\n description: The number of jobs launched by the CronJob\n jsonPath: .spec.replicas\n - name: Age\n type: date\n jsonPath: .metadata.creationTimestamp", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22802"} {"prompt": "Complete this yaml code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:", "completion": " type: object\n properties:\n spec:\n type: object\n properties:\n color:\n 
type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22803"} {"prompt": "Explain this yaml code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22804"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: 
Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object", "completion": " properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22805"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the CustomResourceDefinition:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22806"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resource-definition.yaml", 
"completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Define some Shirts by editing shirt-resources.yaml; for example:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Define some Shirts by editing shirt-resources.yaml; for example:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22807"} {"prompt": "Complete this yaml code:\n\ncustomresourcedefinition/shirt-resources.yaml\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt", "completion": "metadata:\n name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22808"} {"prompt": "Explain this yaml code:\n\ncustomresourcedefinition/shirt-resources.yaml\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22809"} {"prompt": "Complete this yaml code:\n\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:", "completion": " name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22810"} {"prompt": "Explain this yaml code:\n\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example1\nspec:\n color: blue\n size: S\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example2\nspec:\n color: blue\n size: M\n---\napiVersion: stable.example.com/v1\nkind: Shirt\nmetadata:\n name: example3\nspec:\n color: green\n size: M", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Create the custom resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22811"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/customresourcedefinition/shirt-resources.yaml", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Get all the resources:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with 
CustomResourceDefinitions | Get all the resources:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22812"} {"prompt": "Explain this bash code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22813"} {"prompt": "Explain this bash code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=green,spec.size=M", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | Should output:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22814"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n status:", "completion": " type: object\n properties:\n replicas:\n type: integer\n labelSelector:\n type: string\n # subresources describes the subresources for custom resources.\n subresources:\n # status enables the status subresource.\n status: {}\n # scale enables the scale subresource.\n scale:\n # specReplicasPath defines the 
JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.\n specReplicasPath: .spec.replicas\n # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.\n statusReplicasPath: .status.replicas\n # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.\n labelSelectorPath: .status.labelSelector\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22815"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n status:\n type: object\n properties:\n replicas:\n type: integer\n labelSelector:\n type: string\n # subresources describes the subresources for custom resources.\n subresources:\n # status enables the status subresource.\n status: {}\n # scale enables the scale subresource.\n scale:\n # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.\n specReplicasPath: .spec.replicas\n # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.\n statusReplicasPath: .status.replicas\n # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.\n labelSelectorPath: .status.labelSelector\n scope: 
Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | And create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22816"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object", "completion": "spec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22817"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image\n replicas: 3", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22818"} {"prompt": "Complete this bash code:\n\nkubectl scale --replicas=5 crontabs/my-new-cron-object\ncrontabs \"my-new-cron-object\" scaled", "completion": "\nkubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}'\n5", "language": "bash", "type": "kubectl-command", "context": "Extend the Kubernetes API with CustomResourceDefinitions | You can use 
a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22819"} {"prompt": "Explain this bash code:\n\nkubectl scale --replicas=5 crontabs/my-new-cron-object\ncrontabs \"my-new-cron-object\" scaled\n\nkubectl get crontabs my-new-cron-object -o jsonpath='{.spec.replicas}'\n5", "completion": "This bash code Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "language": "bash", "type": "kubectl-command_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | You can use a PodDisruptionBudget to protect custom resources that have the scale subresource enabled.", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22820"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object", "completion": " properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n # categories is a list of grouped resources the custom resource belongs to.\n categories:\n - all", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22821"} {"prompt": "Explain this yaml 
code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n type: integer\n scope: Namespaced\n names:\n plural: crontabs\n singular: crontab\n kind: CronTab\n shortNames:\n - ct\n # categories is a list of grouped resources the custom resource belongs to.\n categories:\n - all", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22822"} {"prompt": "Complete this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:", "completion": " name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "language": "yaml", "type": "kubernetes-manifest", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": "https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_completion_22823"} {"prompt": "Explain this yaml code:\n\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata:\n name: my-new-cron-object\nspec:\n cronSpec: \"* * * * */5\"\n image: my-awesome-cron-image", "completion": "This yaml code Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Extend the Kubernetes API with CustomResourceDefinitions | and create it:", "source": 
"https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/", "id": "k8s_00721_code_explain_22824"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\ndata:", "completion": " primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22825"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\ndata:\n primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "completion": "This yaml code Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. 
In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22826"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml", "completion": "This bash code Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22827"} {"prompt": "Complete this yaml code:\n\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql\n---", "completion": "# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "language": "yaml", "type": "service-definition", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22828"} {"prompt": "Explain this yaml code:\n\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql\n---\n# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "completion": "This yaml code Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "language": "yaml", "type": "service-definition_explanation", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22829"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml", "completion": "This bash code Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22830"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n app.kubernetes.io/name: mysql\n serviceName: mysql\n replicas: 3\n template:\n metadata:\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n spec:\n initContainers:\n - name: init-mysql\n image: mysql:5.7\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Generate mysql server-id from pod ordinal index.\n [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n echo [mysqld] > /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: /mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n 
# Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m\n memory: 1Gi", "completion": " livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. 
Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: /mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n # Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: 
/etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( with the name of the Node you found in", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Then, drain the Node by running the following command, which cordons it so no new Pods may schedule there, and then evicts any existing Pods. 
Replace with the name of the Node you found in", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22847"} {"prompt": "Explain this text code:\n\n# See above advice about impact on other workloads\nkubectl drain --force --delete-emptydir-data --ignore-daemonsets", "completion": "This text code Run a Replicated Stateful Application | Now you can watch as the Pod reschedules on a different Node:", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Now you can watch as the Pod reschedules on a different Node:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22848"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-node-9l2t\n[...]\nmysql-2 0/2 Pending 0 0s kubernetes-node-fjlm", "completion": "mysql-2 0/2 Init:0/2 0 0s kubernetes-node-fjlm\nmysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 0/2 PodInitializing 0 21s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-node-fjlm", "language": "text", "type": "code-example", "context": "Run a Replicated Stateful Application | And again, you should see server ID 102 disappear from the SELECT @@server_id loop output for a while and then return.", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22849"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-node-9l2t\n[...]\nmysql-2 0/2 Pending 0 0s kubernetes-node-fjlm\nmysql-2 0/2 Init:0/2 0 0s kubernetes-node-fjlm\nmysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 0/2 PodInitializing 0 21s 
10.244.5.32 kubernetes-node-fjlm\nmysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-node-fjlm", "completion": "This text code Run a Replicated Stateful Application | And again, you should see server ID 102 disappear from the SELECT @@server_id loop output for a while and then return.", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | And again, you should see server ID 102 disappear from the SELECT @@server_id loop output for a while and then return.", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22850"} {"prompt": "Explain this bash code:\n\nkubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\\\n mysql -h mysql-3.mysql -e \"SELECT * FROM test.messages\"", "completion": "This bash code Run a Replicated Stateful Application | Scaling back down is also seamless:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | Scaling back down is also seamless:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22851"} {"prompt": "Complete this text code:\n\nWaiting for pod default/mysql-client to be running, status is Pending, pod ready: false\n+---------+\n| message |", "completion": "+---------+\n| hello |\n+---------+\npod \"mysql-client\" deleted", "language": "text", "type": "code-example", "context": "Run a Replicated Stateful Application | Scaling back down is also seamless:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22852"} {"prompt": "Explain this text code:\n\nWaiting for pod default/mysql-client to be running, status is Pending, pod ready: false\n+---------+\n| message |\n+---------+\n| hello |\n+---------+\npod 
\"mysql-client\" deleted", "completion": "This text code Run a Replicated Stateful Application | Scaling back down is also seamless:", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Scaling back down is also seamless:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22853"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndata-mysql-0 Bound pvc-8acbf5dc-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-1 Bound pvc-8ad39820-b103-11e6-93fa-42010a800002 10Gi RWO 20m", "completion": "data-mysql-2 Bound pvc-8ad69a6d-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-3 Bound pvc-50043c45-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m\ndata-mysql-4 Bound pvc-500a9957-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m", "language": "text", "type": "code-example", "context": "Run a Replicated Stateful Application | If you don't intend to reuse the extra PVCs, you can delete them:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22854"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndata-mysql-0 Bound pvc-8acbf5dc-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-1 Bound pvc-8ad39820-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-2 Bound pvc-8ad69a6d-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-3 Bound pvc-50043c45-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m\ndata-mysql-4 Bound pvc-500a9957-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m", "completion": "This text code Run a Replicated Stateful Application | If you don't intend to reuse the extra PVCs, you can delete them:", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | If you don't intend to reuse the extra PVCs, you can delete them:", "source": 
"https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22855"} {"prompt": "Explain this bash code:\n\nkubectl delete pvc data-mysql-3\nkubectl delete pvc data-mysql-4", "completion": "This bash code Run a Replicated Stateful Application | Cancel the SELECT @@server_id loop by pressing Ctrl+C in its terminal, or running the following from another terminal:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | Cancel the SELECT @@server_id loop by pressing Ctrl+C in its terminal, or running the following from another terminal:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22856"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\ndata:", "completion": " primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. 
In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22857"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\ndata:\n primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "completion": "This yaml code Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22858"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml", "completion": "This shell code Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "language": "shell", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. 
In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22859"} {"prompt": "Complete this yaml code:\n\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql\n---", "completion": "# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "language": "yaml", "type": "service-definition", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22860"} {"prompt": "Explain this yaml code:\n\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql\n---\n# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "completion": "This yaml code Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "language": "yaml", "type": "service-definition_explanation", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22861"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml", "completion": "This shell code Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "language": "shell", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22862"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n app.kubernetes.io/name: mysql\n serviceName: mysql\n replicas: 3\n template:\n metadata:\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n spec:\n initContainers:\n - name: init-mysql\n image: mysql:5.7\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Generate mysql server-id from pod ordinal index.\n [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n echo [mysqld] > /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: /mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n # 
Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m\n memory: 1Gi", "completion": " livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. 
Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: /mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n # Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: 
/etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( with the name of the Node you found in", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Then, drain the Node by running the following command, which cordons it so no new Pods may schedule there, and then evicts any existing Pods. 
Replace with the name of the Node you found in", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22879"} {"prompt": "Explain this shell code:\n\n# See above advice about impact on other workloads\nkubectl drain --force --delete-emptydir-data --ignore-daemonsets", "completion": "This shell code Run a Replicated Stateful Application | Now you can watch as the Pod reschedules on a different Node:", "language": "shell", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Now you can watch as the Pod reschedules on a different Node:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22880"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-node-9l2t\n[...]\nmysql-2 0/2 Pending 0 0s kubernetes-node-fjlm", "completion": "mysql-2 0/2 Init:0/2 0 0s kubernetes-node-fjlm\nmysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 0/2 PodInitializing 0 21s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-node-fjlm", "language": "text", "type": "code-example", "context": "Run a Replicated Stateful Application | And again, you should see server ID 102 disappear from the SELECT @@server_id loop output for a while and then return.", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22881"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE IP NODE\nmysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-node-9l2t\n[...]\nmysql-2 0/2 Pending 0 0s kubernetes-node-fjlm\nmysql-2 0/2 Init:0/2 0 0s kubernetes-node-fjlm\nmysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 0/2 PodInitializing 0 21s 
10.244.5.32 kubernetes-node-fjlm\nmysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-node-fjlm\nmysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-node-fjlm", "completion": "This text code Run a Replicated Stateful Application | And again, you should see server ID 102 disappear from the SELECT @@server_id loop output for a while and then return.", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | And again, you should see server ID 102 disappear from the SELECT @@server_id loop output for a while and then return.", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22882"} {"prompt": "Explain this shell code:\n\nkubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\\\n mysql -h mysql-3.mysql -e \"SELECT * FROM test.messages\"", "completion": "This shell code Run a Replicated Stateful Application | Scaling back down is also seamless:", "language": "shell", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Scaling back down is also seamless:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22883"} {"prompt": "Complete this text code:\n\nWaiting for pod default/mysql-client to be running, status is Pending, pod ready: false\n+---------+\n| message |", "completion": "+---------+\n| hello |\n+---------+\npod \"mysql-client\" deleted", "language": "text", "type": "code-example", "context": "Run a Replicated Stateful Application | Scaling back down is also seamless:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22884"} {"prompt": "Explain this text code:\n\nWaiting for pod default/mysql-client to be running, status is Pending, pod ready: false\n+---------+\n| message |\n+---------+\n| hello |\n+---------+\npod 
\"mysql-client\" deleted", "completion": "This text code Run a Replicated Stateful Application | Scaling back down is also seamless:", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Scaling back down is also seamless:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22885"} {"prompt": "Complete this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndata-mysql-0 Bound pvc-8acbf5dc-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-1 Bound pvc-8ad39820-b103-11e6-93fa-42010a800002 10Gi RWO 20m", "completion": "data-mysql-2 Bound pvc-8ad69a6d-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-3 Bound pvc-50043c45-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m\ndata-mysql-4 Bound pvc-500a9957-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m", "language": "text", "type": "code-example", "context": "Run a Replicated Stateful Application | If you don't intend to reuse the extra PVCs, you can delete them:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22886"} {"prompt": "Explain this text code:\n\nNAME STATUS VOLUME CAPACITY ACCESSMODES AGE\ndata-mysql-0 Bound pvc-8acbf5dc-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-1 Bound pvc-8ad39820-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-2 Bound pvc-8ad69a6d-b103-11e6-93fa-42010a800002 10Gi RWO 20m\ndata-mysql-3 Bound pvc-50043c45-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m\ndata-mysql-4 Bound pvc-500a9957-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m", "completion": "This text code Run a Replicated Stateful Application | If you don't intend to reuse the extra PVCs, you can delete them:", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | If you don't intend to reuse the extra PVCs, you can delete them:", "source": 
"https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22887"} {"prompt": "Explain this shell code:\n\nkubectl delete pvc data-mysql-3\nkubectl delete pvc data-mysql-4", "completion": "This shell code Run a Replicated Stateful Application | Cancel the SELECT @@server_id loop by pressing Ctrl+C in its terminal, or running the following from another terminal:", "language": "shell", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Cancel the SELECT @@server_id loop by pressing Ctrl+C in its terminal, or running the following from another terminal:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22888"} {"prompt": "Complete this yaml code:\n\napplication/mysql/mysql-configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql", "completion": "data:\n primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. 
In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22889"} {"prompt": "Explain this yaml code:\n\napplication/mysql/mysql-configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\ndata:\n primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "completion": "This yaml code Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22890"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\ndata:", "completion": " primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "language": "yaml", "type": "kubernetes-manifest", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. 
In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22891"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\ndata:\n primary.cnf: |\n # Apply this config only on the primary.\n [mysqld]\n log-bin \n replica.cnf: |\n # Apply this config only on replicas.\n [mysqld]\n super-read-only", "completion": "This yaml code Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22892"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml", "completion": "This bash code Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. In this case, you want the primary server to be able to serve re", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | This ConfigMap provides my.cnf overrides that let you independently control configuration on the primary MySQL server and its replicas. 
In this case, you want the primary server to be able to serve re", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22893"} {"prompt": "Complete this yaml code:\n\napplication/mysql/mysql-services.yaml\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql", "completion": "---\n# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "language": "yaml", "type": "service-definition", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22894"} {"prompt": "Explain this yaml code:\n\napplication/mysql/mysql-services.yaml\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql\n---\n# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "completion": "This yaml code Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "language": "yaml", "type": "service-definition_explanation", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22895"} {"prompt": "Complete this yaml code:\n\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql\n---", "completion": "# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "language": "yaml", "type": "service-definition", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_completion_22896"} {"prompt": "Explain this yaml code:\n\n# Headless service for stable DNS entries of StatefulSet members.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\nspec:\n ports:\n - name: mysql\n port: 3306\n clusterIP: None\n selector:\n app: mysql\n---\n# Client service for connecting to any MySQL instance for reads.\n# For writes, you must instead connect to the primary: mysql-0.mysql.\napiVersion: v1\nkind: Service\nmetadata:\n name: mysql-read\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n readonly: \"true\"\nspec:\n ports:\n - name: mysql\n port: 3306\n selector:\n app: mysql", "completion": "This yaml code Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "language": "yaml", "type": "service-definition_explanation", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22897"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml", "completion": "This bash code Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. 
Because the headless Service is named mysql, the Pods are accessi", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | The headless Service provides a home for the DNS entries that the StatefulSet controllers creates for each Pod that's part of the set. Because the headless Service is named mysql, the Pods are accessi", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22898"} {"prompt": "Complete this yaml code:\n\napplication/mysql/mysql-statefulset.yaml\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mysql\nspec:\n selector:\n matchLabels:\n app: mysql\n app.kubernetes.io/name: mysql\n serviceName: mysql\n replicas: 3\n template:\n metadata:\n labels:\n app: mysql\n app.kubernetes.io/name: mysql\n spec:\n initContainers:\n - name: init-mysql\n image: mysql:5.7\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Generate mysql server-id from pod ordinal index.\n [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n echo [mysqld] > /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: /mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only 
mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n # Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m", "completion": " memory: 1Gi\n livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. 
Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: /mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n # Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: 
/etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: 
/mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n # Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m\n memory: 1Gi", "completion": " livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. 
Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( /mnt/conf.d/server-id.cnf\n # Add an offset to avoid reserved server-id=0 value.\n echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n # Copy appropriate conf.d files from config-map to emptyDir.\n if [[ $ordinal -eq 0 ]]; then\n cp /mnt/config-map/primary.cnf /mnt/conf.d/\n else\n cp /mnt/config-map/replica.cnf /mnt/conf.d/\n fi \n volumeMounts:\n - name: conf\n mountPath: /mnt/conf.d\n - name: config-map\n mountPath: /mnt/config-map\n - name: clone-mysql\n image: gcr.io/google-samples/xtrabackup:1.0\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n # Skip the clone if data already exists.\n [[ -d /var/lib/mysql/mysql ]] && exit 0\n # Skip the clone on primary (ordinal index 0).\n [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n ordinal=${BASH_REMATCH[1]}\n [[ $ordinal -eq 0 ]] && exit 0\n # Clone data from previous peer.\n ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n # Prepare the backup.\n xtrabackup --prepare --target-dir=/var/lib/mysql \n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: /etc/mysql/conf.d\n containers:\n - name: mysql\n image: mysql:5.7\n env:\n - name: MYSQL_ALLOW_EMPTY_PASSWORD\n value: \"1\"\n ports:\n - name: mysql\n containerPort: 3306\n volumeMounts:\n - name: data\n mountPath: /var/lib/mysql\n subPath: mysql\n - name: conf\n mountPath: 
/etc/mysql/conf.d\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n livenessProbe:\n exec:\n command: [\"mysqladmin\", \"ping\"]\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n readinessProbe:\n exec:\n # Check we can execute queries over TCP (skip-networking is off).\n command: [\"mysql\", \"-h\", \"127.0.0.1\", \"-e\", \"SELECT 1\"]\n initialDelaySeconds: 5\n periodSeconds: 2\n timeoutSeconds: 1\n - name: xtrabackup\n image: gcr.io/google-samples/xtrabackup:1.0\n ports:\n - name: xtrabackup\n containerPort: 3307\n command:\n - bash\n - \"-c\"\n - |\n set -ex\n cd /var/lib/mysql\n\n # Determine binlog position of cloned data, if any.\n if [[ -f xtrabackup_slave_info && \"x$( change_master_to.sql.in\n # Ignore xtrabackup_binlog_info in this case (it's useless).\n rm -f xtrabackup_slave_info xtrabackup_binlog_info\n elif [[ -f xtrabackup_binlog_info ]]; then\n # We're cloning directly from primary. Parse binlog position.\n [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1\n rm -f xtrabackup_binlog_info xtrabackup_slave_info\n echo \"CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\\\n MASTER_LOG_POS=${BASH_REMATCH[2]}\" > change_master_to.sql.in\n fi\n\n # Check if we need to complete a clone by starting replication.\n if [[ -f change_master_to.sql.in ]]; then\n echo \"Waiting for mysqld to be ready (accepting connections)\"\n until mysql -h 127.0.0.1 -e \"SELECT 1\"; do sleep 1; done\n\n echo \"Initializing replication from clone position\"\n mysql -h 127.0.0.1 \\\n -e \"$( --force --delete-emptydir-data --ignore-daemonsets", "completion": "This text code Run a Replicated Stateful Application | Now you can watch as the Pod reschedules on a different Node:", "language": "text", "type": "code-example_explanation", "context": "Run a Replicated Stateful Application | Now you can watch as the Pod reschedules on a different Node:", "source": 
"https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22910"} {"prompt": "Explain this bash code:\n\nkubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\\\n mysql -h mysql-3.mysql -e \"SELECT * FROM test.messages\"", "completion": "This bash code Run a Replicated Stateful Application | Scaling back down is also seamless:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | Scaling back down is also seamless:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22911"} {"prompt": "Explain this bash code:\n\nkubectl delete pvc data-mysql-3\nkubectl delete pvc data-mysql-4", "completion": "This bash code Run a Replicated Stateful Application | Cancel the SELECT @@server_id loop by pressing Ctrl+C in its terminal, or running the following from another terminal:", "language": "bash", "type": "kubectl-command_explanation", "context": "Run a Replicated Stateful Application | Cancel the SELECT @@server_id loop by pressing Ctrl+C in its terminal, or running the following from another terminal:", "source": "https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/", "id": "k8s_00724_code_explain_22912"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: getn", "completion": " command: get\n options:\n - name: output\n default: json", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22913"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: getn\n command: get\n options:\n - name: output\n default: 
json", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22914"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n - name: getn\n command: get", "completion": " options:\n - name: output\n default: json\n prependArgs:\n - namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22915"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n - name: getn\n command: get\n options:\n - name: output\n default: json\n prependArgs:\n - namespace", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22916"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: runx\n command: run\n options:", "completion": " - name: image\n default: busybox\n - name: namespace\n default: test-ns\n appendArgs:\n - --\n - custom-arg", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": 
"k8s_00725_code_completion_22917"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: runx\n command: run\n options:\n - name: image\n default: busybox\n - name: namespace\n default: test-ns\n appendArgs:\n - --\n - custom-arg", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22918"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:", "completion": "- command: delete\n options:\n - name: interactive\n default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22919"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n- command: delete\n options:\n - name: interactive\n default: \"true\"", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22920"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n # (1) default server-side apply\n - command: apply\n options:\n - name: server-side", "completion": " default: \"true\"\n\n # (2) default interactive deletion\n - command: 
delete\n options:\n - name: interactive\n default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22921"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n # (1) default server-side apply\n - command: apply\n options:\n - name: server-side\n default: \"true\"\n\n # (2) default interactive deletion\n - command: delete\n options:\n - name: interactive\n default: \"true\"", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22922"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: getn", "completion": " command: get\n options:\n - name: output\n default: json", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22923"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: getn\n command: get\n options:\n - name: output\n default: json", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": 
"https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22924"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n - name: getn\n command: get", "completion": " options:\n - name: output\n default: json\n prependArgs:\n - namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22925"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n - name: getn\n command: get\n options:\n - name: output\n default: json\n prependArgs:\n - namespace", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22926"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: runx\n command: run\n options:", "completion": " - name: image\n default: busybox\n - name: namespace\n default: test-ns\n appendArgs:\n - --\n - custom-arg", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22927"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: runx\n command: run\n options:\n - name: image\n default: busybox\n - name: namespace\n default: test-ns\n appendArgs:\n - --\n - custom-arg", "completion": 
"This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22928"} {"prompt": "Explain this bash code:\n\nkubectl run test-pod --namespace test-ns --image busybox -- custom-arg", "completion": "This bash code Kubectl user preferences (kuberc) | Within a kuberc configuration, defaults section lets you specify default values for command line arguments.", "language": "bash", "type": "kubectl-command_explanation", "context": "Kubectl user preferences (kuberc) | Within a kuberc configuration, defaults section lets you specify default values for command line arguments.", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22929"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:", "completion": "- command: delete\n options:\n - name: interactive\n default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22930"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n- command: delete\n options:\n - name: interactive\n default: \"true\"", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": 
"k8s_00725_code_explain_22931"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n # (1) default server-side apply\n - command: apply\n options:\n - name: server-side", "completion": " default: \"true\"\n\n # (2) default interactive deletion\n - command: delete\n options:\n - name: interactive\n default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22932"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n # (1) default server-side apply\n - command: apply\n options:\n - name: server-side\n default: \"true\"\n\n # (2) default interactive deletion\n - command: delete\n options:\n - name: interactive\n default: \"true\"", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22933"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: getn", "completion": " command: get\n options:\n - name: output\n default: json", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22934"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: getn\n command: get\n options:\n - name: output\n 
default: json", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22935"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n - name: getn\n command: get", "completion": " options:\n - name: output\n default: json\n prependArgs:\n - namespace", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22936"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n - name: getn\n command: get\n options:\n - name: output\n default: json\n prependArgs:\n - namespace", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22937"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: runx\n command: run\n options:", "completion": " - name: image\n default: busybox\n - name: namespace\n default: test-ns\n appendArgs:\n - --\n - custom-arg", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": 
"k8s_00725_code_completion_22938"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\naliases:\n- name: runx\n command: run\n options:\n - name: image\n default: busybox\n - name: namespace\n default: test-ns\n appendArgs:\n - --\n - custom-arg", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22939"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:", "completion": "- command: delete\n options:\n - name: interactive\n default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22940"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n- command: delete\n options:\n - name: interactive\n default: \"true\"", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings were used:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings were used:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22941"} {"prompt": "Complete this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n # (1) default server-side apply\n - command: apply\n options:\n - name: server-side", "completion": " default: \"true\"\n\n # (2) default interactive deletion\n - command: 
delete\n options:\n - name: interactive\n default: \"true\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_completion_22942"} {"prompt": "Explain this yaml code:\n\napiVersion: kubectl.config.k8s.io/v1beta1\nkind: Preference\ndefaults:\n # (1) default server-side apply\n - command: apply\n options:\n - name: server-side\n default: \"true\"\n\n # (2) default interactive deletion\n - command: delete\n options:\n - name: interactive\n default: \"true\"", "completion": "This yaml code Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Kubectl user preferences (kuberc) | In this example, the following settings are enforced:", "source": "https://kubernetes.io/docs/reference/kubectl/kuberc/", "id": "k8s_00725_code_explain_22943"} {"prompt": "Complete this text code:\n\n# Unset the current-context\n kubectl config unset current-context", "completion": " \n # Unset namespace in foo context\n kubectl config unset contexts.foo.namespace", "language": "text", "type": "code-example", "context": "kubectl config unset | help for unset", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/", "id": "k8s_00727_code_completion_22944"} {"prompt": "Explain this text code:\n\n# Unset the current-context\n kubectl config unset current-context\n \n # Unset namespace in foo context\n kubectl config unset contexts.foo.namespace", "completion": "This text code kubectl config unset | help for unset", "language": "text", "type": "code-example_explanation", "context": "kubectl config unset | help for unset", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/", "id": 
"k8s_00727_code_explain_22945"} {"prompt": "Complete this text code:\n\n# Unset the current-context\n kubectl config unset current-context", "completion": " \n # Unset namespace in foo context\n kubectl config unset contexts.foo.namespace", "language": "text", "type": "code-example", "context": "kubectl config unset | help for unset", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/", "id": "k8s_00727_code_completion_22946"} {"prompt": "Explain this text code:\n\n# Unset the current-context\n kubectl config unset current-context\n \n # Unset namespace in foo context\n kubectl config unset contexts.foo.namespace", "completion": "This text code kubectl config unset | help for unset", "language": "text", "type": "code-example_explanation", "context": "kubectl config unset | help for unset", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_unset/", "id": "k8s_00727_code_explain_22947"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/names/", "id": "k8s_00728_code_completion_22948"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": 
"https://kubernetes.io/docs/concepts/overview/working-with-objects/names/", "id": "k8s_00728_code_explain_22949"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/names/", "id": "k8s_00728_code_completion_22950"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": "yaml", "type": "pod-definition_explanation", "context": "Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/names/", "id": "k8s_00728_code_explain_22951"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:", "completion": " containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "language": "yaml", "type": "pod-definition", "context": "Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/names/", "id": "k8s_00728_code_completion_22952"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-demo\nspec:\n containers:\n - name: nginx\n image: nginx:1.14.2\n ports:\n - containerPort: 80", "completion": "This yaml code Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "language": 
"yaml", "type": "pod-definition_explanation", "context": "Object Names and IDs | A Kubernetes systems-generated string to uniquely identify objects.", "source": "https://kubernetes.io/docs/concepts/overview/working-with-objects/names/", "id": "k8s_00728_code_explain_22953"} {"prompt": "Explain this text code:\n\n# Use the context for the minikube cluster\n kubectl config use-context minikube", "completion": "This text code kubectl config use-context | help for use-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config use-context | help for use-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_use-context/", "id": "k8s_00729_code_explain_22954"} {"prompt": "Explain this text code:\n\n# Use the context for the minikube cluster\n kubectl config use-context minikube", "completion": "This text code kubectl config use-context | help for use-context", "language": "text", "type": "code-example_explanation", "context": "kubectl config use-context | help for use-context", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_use-context/", "id": "k8s_00729_code_explain_22955"} {"prompt": "Explain this text code:\n\nopenssl req -new -key jbeda.pem -out jbeda-csr.pem -subj \"/CN=jbeda/O=app1/O=app2\"", "completion": "This text code Authenticating | This would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\".", "language": "text", "type": "code-example_explanation", "context": "Authenticating | This would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\".", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22956"} {"prompt": "Explain this text code:\n\nAuthorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269", "completion": "This text code Authenticating | To allow for streamlined bootstrapping for new 
clusters, Kubernetes includes a dynamically-managed Bearer token type called a Bootstrap Token. These tokens are stored as Secrets in the kube-system nam", "language": "text", "type": "code-example_explanation", "context": "Authenticating | To allow for streamlined bootstrapping for new clusters, Kubernetes includes a dynamically-managed Bearer token type called a Bootstrap Token. These tokens are stored as Secrets in the kube-system nam", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22957"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9\nkind: Deployment\nmetadata:\n name: nginx-deployment\n namespace: default\nspec:\n replicas: 3", "completion": " template:\n metadata:\n # ...\n spec:\n serviceAccountName: bob-the-bot\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a s", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22958"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9\nkind: Deployment\nmetadata:\n name: nginx-deployment\n namespace: default\nspec:\n replicas: 3\n template:\n metadata:\n # ...\n spec:\n serviceAccountName: bob-the-bot\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. 
To manually create a s", "language": "yaml", "type": "deployment-definition_explanation", "context": "Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a s", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22959"} {"prompt": "Complete this json code:\n\n{\n \"iss\": \"https://example.com\", // must match the issuer.url\n \"aud\": [\"my-app\"], // at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs.", "completion": " \"exp\": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)\n \"\": \"user\" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression\n}", "language": "json", "type": "api-response", "context": "Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. 
The configuration file even allows you to specify CEL expres", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22960"} {"prompt": "Explain this json code:\n\n{\n \"iss\": \"https://example.com\", // must match the issuer.url\n \"aud\": [\"my-app\"], // at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs.\n \"exp\": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)\n \"\": \"user\" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression\n}", "completion": "This json code Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. The configuration file even allows you to specify CEL expres", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. 
The configuration file even allows you to specify CEL expres", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22961"} {"prompt": "Complete this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\n# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.\n# the maximum number of allowed authenticators is 64.\njwt:\n- issuer:\n # url must be unique across all authenticators.\n # url must not conflict with issuer configured in --service-account-issuer.\n url: https://example.com # Same as --oidc-issuer-url.\n # discoveryURL, if specified, overrides the URL used to fetch discovery\n # information instead of using \"{url}/.well-known/openid-configuration\".\n # The exact value specified is used, so \"/.well-known/openid-configuration\"\n # must be included in discoveryURL if needed.\n #\n # The \"issuer\" field in the fetched discovery information must match the \"issuer.url\" field\n # in the AuthenticationConfiguration and will be used to validate the \"iss\" claim in the presented JWT.\n # This is for scenarios where the well-known and jwks endpoints are hosted at a different\n # location than the issuer (such as locally in the cluster).\n # discoveryURL must be different from url if specified and must be unique across all authenticators.\n discoveryURL: https://discovery.example.com/.well-known/openid-configuration\n # PEM encoded CA certificates used to validate the connection when fetching\n # discovery information. 
If not set, the system verifier will be used.\n # Same value as the content of the file referenced by the --oidc-ca-file command line argument.\n certificateAuthority: \n # audiences is the set of acceptable audiences the JWT must be issued to.\n # At least one of the entries must match the \"aud\" claim in presented JWTs.\n audiences:\n - my-app # Same as --oidc-client-id.\n - my-other-app\n # this is required to be set to \"MatchAny\" when multiple audiences are specified.\n audienceMatchPolicy: MatchAny\n # egressSelectorType is an indicator of which egress selection should be used for sending all traffic related\n # to this issuer (discovery, JWKS, distributed claims, etc). If unspecified, no custom dialer is used.\n # The StructuredAuthenticationConfigurationEgressSelector feature gate must be enabled\n # before you can use the egressSelectorType field.\n # When specified, the valid choices are \"controlplane\" and \"cluster\". These correspond to the associated\n # values in the --egress-selector-config-file.\n # - controlplane: for traffic intended to go to the control plane.\n # - cluster: for traffic intended to go to the system being managed by Kubernetes.\n egressSelectorType: \n # rules applied to validate token claims to authenticate users.\n claimValidationRules:\n # Same as --oidc-required-claim key=value.\n - claim: hd\n requiredValue: example.com\n # Instead of claim and requiredValue, you can use expression to validate the claim.\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for validation to succeed.\n - expression: 'claims.hd == \"example.com\"'\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: the hd claim must be set to example.com\n - expression: 'claims.exp - claims.nbf <= 86400'\n message: total token lifetime must not exceed 24 hours\n claimMappings:", "completion": " # username represents an option for the username 
attribute.\n # This is the only required attribute.\n username:\n # Same as --oidc-username-claim. Mutually exclusive with username.expression.\n claim: \"sub\"\n # Same as --oidc-username-prefix. Mutually exclusive with username.expression.\n # if username.claim is set, username.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with username.claim and username.prefix.\n # expression is a CEL expression that evaluates to a string.\n #\n # 1. If username.expression uses 'claims.email', then 'claims.email_verified' must be used in\n # username.expression or extra[*].valueExpression or claimValidationRules[*].expression.\n # An example claim validation rule expression that matches the validation automatically\n # applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true) == true'.\n # By explicitly comparing the value to true, we let type-checking see the result will be a boolean, and\n # to make sure a non-boolean email_verified claim will be caught at runtime.\n # 2. If the username asserted based on username.expression is the empty string, the authentication\n # request will fail.\n expression: 'claims.username + \":external-user\"'\n # groups represents an option for the groups attribute.\n groups:\n # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.\n claim: \"sub\"\n # Same as --oidc-groups-prefix. 
Mutually exclusive with groups.expression.\n # if groups.claim is set, groups.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with groups.claim and groups.prefix.\n # expression is a CEL expression that evaluates to a string or a list of strings.\n expression: 'claims.roles.split(\",\")'\n # uid represents an option for the uid attribute.\n uid:\n # Mutually exclusive with uid.expression.\n claim: 'sub'\n # Mutually exclusive with uid.claim\n # expression is a CEL expression that evaluates to a string.\n expression: 'claims.sub'\n # extra attributes to be added to the UserInfo object. Keys must be domain-prefix path and must be unique.\n extra:\n # key is a string to use as the extra attribute key.\n # key must be a domain-prefix path (e.g. example.org/foo). All characters before the first \"/\" must be a valid\n # subdomain as defined by RFC 1123. All characters trailing the first \"/\" must\n # be valid HTTP Path characters as defined by RFC 3986.\n # k8s.io, kubernetes.io and their subdomains are reserved for Kubernetes use and cannot be used.\n # key must be lowercase and unique across all extra attributes.\n - key: 'example.com/tenant'\n # valueExpression is a CEL expression that evaluates to a string or a list of strings.\n valueExpression: 'claims.tenant'\n # validation rules applied to the final user object.\n userValidationRules:\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for the user to be valid.\n - expression: \"!user.username.startsWith('system:')\"\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: 'username cannot used reserved system: prefix'\n - expression: \"user.groups.all(group, !group.startsWith('system:'))\"\n message: 'groups cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | Claim 
validation rule expression", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22962"} {"prompt": "Explain this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\n# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.\n# the maximum number of allowed authenticators is 64.\njwt:\n- issuer:\n # url must be unique across all authenticators.\n # url must not conflict with issuer configured in --service-account-issuer.\n url: https://example.com # Same as --oidc-issuer-url.\n # discoveryURL, if specified, overrides the URL used to fetch discovery\n # information instead of using \"{url}/.well-known/openid-configuration\".\n # The exact value specified is used, so \"/.well-known/openid-configuration\"\n # must be included in discoveryURL if needed.\n #\n # The \"issuer\" field in the fetched discovery information must match the \"issuer.url\" field\n # in the AuthenticationConfiguration and will be used to validate the \"iss\" claim in the presented JWT.\n # This is for scenarios where the well-known and jwks endpoints are hosted at a different\n # location than the issuer (such as locally in the cluster).\n # discoveryURL must be different from url if specified and must be unique across all authenticators.\n discoveryURL: https://discovery.example.com/.well-known/openid-configuration\n # PEM encoded CA certificates used to validate the connection when fetching\n # discovery information. 
If not set, the system verifier will be used.\n # Same value as the content of the file referenced by the --oidc-ca-file command line argument.\n certificateAuthority: \n # audiences is the set of acceptable audiences the JWT must be issued to.\n # At least one of the entries must match the \"aud\" claim in presented JWTs.\n audiences:\n - my-app # Same as --oidc-client-id.\n - my-other-app\n # this is required to be set to \"MatchAny\" when multiple audiences are specified.\n audienceMatchPolicy: MatchAny\n # egressSelectorType is an indicator of which egress selection should be used for sending all traffic related\n # to this issuer (discovery, JWKS, distributed claims, etc). If unspecified, no custom dialer is used.\n # The StructuredAuthenticationConfigurationEgressSelector feature gate must be enabled\n # before you can use the egressSelectorType field.\n # When specified, the valid choices are \"controlplane\" and \"cluster\". These correspond to the associated\n # values in the --egress-selector-config-file.\n # - controlplane: for traffic intended to go to the control plane.\n # - cluster: for traffic intended to go to the system being managed by Kubernetes.\n egressSelectorType: \n # rules applied to validate token claims to authenticate users.\n claimValidationRules:\n # Same as --oidc-required-claim key=value.\n - claim: hd\n requiredValue: example.com\n # Instead of claim and requiredValue, you can use expression to validate the claim.\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for validation to succeed.\n - expression: 'claims.hd == \"example.com\"'\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: the hd claim must be set to example.com\n - expression: 'claims.exp - claims.nbf <= 86400'\n message: total token lifetime must not exceed 24 hours\n claimMappings:\n # username represents an option for the username attribute.\n # This 
is the only required attribute.\n username:\n # Same as --oidc-username-claim. Mutually exclusive with username.expression.\n claim: \"sub\"\n # Same as --oidc-username-prefix. Mutually exclusive with username.expression.\n # if username.claim is set, username.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with username.claim and username.prefix.\n # expression is a CEL expression that evaluates to a string.\n #\n # 1. If username.expression uses 'claims.email', then 'claims.email_verified' must be used in\n # username.expression or extra[*].valueExpression or claimValidationRules[*].expression.\n # An example claim validation rule expression that matches the validation automatically\n # applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true) == true'.\n # By explicitly comparing the value to true, we let type-checking see the result will be a boolean, and\n # to make sure a non-boolean email_verified claim will be caught at runtime.\n # 2. If the username asserted based on username.expression is the empty string, the authentication\n # request will fail.\n expression: 'claims.username + \":external-user\"'\n # groups represents an option for the groups attribute.\n groups:\n # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.\n claim: \"sub\"\n # Same as --oidc-groups-prefix. 
Mutually exclusive with groups.expression.\n # if groups.claim is set, groups.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with groups.claim and groups.prefix.\n # expression is a CEL expression that evaluates to a string or a list of strings.\n expression: 'claims.roles.split(\",\")'\n # uid represents an option for the uid attribute.\n uid:\n # Mutually exclusive with uid.expression.\n claim: 'sub'\n # Mutually exclusive with uid.claim\n # expression is a CEL expression that evaluates to a string.\n expression: 'claims.sub'\n # extra attributes to be added to the UserInfo object. Keys must be domain-prefix path and must be unique.\n extra:\n # key is a string to use as the extra attribute key.\n # key must be a domain-prefix path (e.g. example.org/foo). All characters before the first \"/\" must be a valid\n # subdomain as defined by RFC 1123. All characters trailing the first \"/\" must\n # be valid HTTP Path characters as defined by RFC 3986.\n # k8s.io, kubernetes.io and their subdomains are reserved for Kubernetes use and cannot be used.\n # key must be lowercase and unique across all extra attributes.\n - key: 'example.com/tenant'\n # valueExpression is a CEL expression that evaluates to a string or a list of strings.\n valueExpression: 'claims.tenant'\n # validation rules applied to the final user object.\n userValidationRules:\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for the user to be valid.\n - expression: \"!user.username.startsWith('system:')\"\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: 'username cannot used reserved system: prefix'\n - expression: \"user.groups.all(group, !group.startsWith('system:'))\"\n message: 'groups cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | Claim validation rule expression", "language": 
"yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | Claim validation rule expression", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22963"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimMappings:\n username:\n expression: 'claims.username + \":external-user\"'", "completion": " groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22964"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimMappings:\n username:\n expression: 'claims.username + \":external-user\"'\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22965"} {"prompt": "Explain this text code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA", "completion": "This text code Authenticating | where the token payload is:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22966"} {"prompt": "Complete this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",", "completion": " \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_completion_22967"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22968"} {"prompt": "Complete this json code:\n\n{\n \"username\": \"foo:external-user\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",", "completion": " \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22969"} {"prompt": "Explain this json code:\n\n{\n \"username\": \"foo:external-user\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",\n \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "completion": "This json code Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com 
audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "language": "json", "type": "api-response_explanation", "context": "Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22970"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"' # the token below does not have this claim, so validation will fail.\n message: the hd claim must be set to example.com\n claimMappings:", "completion": " username:\n expression: 'claims.username + \":external-user\"'\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22971"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"' # the token below does not have this claim, so validation will fail.\n message: the hd claim must be set to example.com\n claimMappings:\n username:\n expression: 'claims.username + 
\":external-user\"'\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22972"} {"prompt": "Explain this text code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA", "completion": "This text code Authenticating | where the token payload is:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22973"} {"prompt": "Complete this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 
1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",", "completion": " \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22974"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. The API server will return 401 Unauthorized error.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. 
The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22975"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"'\n message: the hd claim must be set to example.com\n claimMappings:", "completion": " username:\n expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation.\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the username will be system:foo and expression will evaluate to false, so validation will fail.\n message: 'username cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22976"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"'\n message: the hd claim must be set to example.com\n claimMappings:\n username:\n expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation.\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the username 
will be system:foo and expression will evaluate to false, so validation will fail.\n message: 'username cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22977"} {"prompt": "Explain this text code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0.FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB-F0dCcgy91oCJeK_aBk-8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwt_IVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqa_Q0uaYwgy8mD807h7sBcUMjNd215ff_nFIHss-9zegH8GI1d9fiBf-g6zjkR1j987EP748khpQh9IxPjMJbSgG_uH5x80YFuqgEWwq-aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr_7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg", "completion": "This text code Authenticating | where the token payload is:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22978"} {"prompt": "Complete this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"hd\": \"example.com\",\n \"iat\": 1701113101,\n \"iss\": \"https://example.com\",", "completion": " \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\",\n \"nbf\": 1701113101,\n \"roles\": \"user,admin\",\n 
\"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22979"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"hd\": \"example.com\",\n \"iat\": 1701113101,\n \"iss\": \"https://example.com\",\n \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\",\n \"nbf\": 1701113101,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22980"} {"prompt": "Complete this json code:\n\n{\n \"username\": \"system:foo\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",", "completion": " \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | which will fail user validation because the username starts with system:. 
The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22981"} {"prompt": "Explain this json code:\n\n{\n \"username\": \"system:foo\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",\n \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "completion": "This json code Authenticating | which will fail user validation because the username starts with system:. The API server will return 401 Unauthorized error.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | which will fail user validation because the username starts with system:. The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22982"} {"prompt": "Complete this bash code:\n\nkubectl config set-credentials USER_NAME \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=( issuer url ) \\\n --auth-provider-arg=client-id=( your client id ) \\", "completion": " --auth-provider-arg=client-secret=( your client secret ) \\\n --auth-provider-arg=refresh-token=( your refresh token ) \\\n --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\\n --auth-provider-arg=id-token=( your id_token )", "language": "bash", "type": "kubectl-command", "context": "Authenticating | As an example, running the below command after authenticating to your identity provider:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22983"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials USER_NAME \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=( issuer url ) \\\n --auth-provider-arg=client-id=( your client id ) \\\n --auth-provider-arg=client-secret=( your client secret ) \\\n 
--auth-provider-arg=refresh-token=( your refresh token ) \\\n --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\\n --auth-provider-arg=id-token=( your id_token )", "completion": "This bash code Authenticating | As an example, running the below command after authenticating to your identity provider:", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | As an example, running the below command after authenticating to your identity provider:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22984"} {"prompt": "Complete this bash code:\n\nkubectl config set-credentials mmosley \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \\\n --auth-provider-arg=client-id=kubernetes \\", "completion": " --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \\\n --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\\n --auth-provider-arg=idp-certificate-authority=/root/ca.pem \\\n 
--auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw", "language": "bash", "type": "kubectl-command", "context": "Authenticating | Which would produce the below configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22985"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials mmosley \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \\\n --auth-provider-arg=client-id=kubernetes \\\n --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \\\n --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\\n --auth-provider-arg=idp-certificate-authority=/root/ca.pem \\\n 
--auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw", "completion": "This bash code Authenticating | Which would produce the below configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | Which would produce the below configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22986"} {"prompt": "Complete this text code:\n\nusers:\n- name: mmosley\n user:\n auth-provider:\n config:\n client-id: kubernetes", "completion": " client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5\n id-token: 
eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n idp-certificate-authority: /root/ca.pem\n idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP\n refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq\n name: oidc", "language": "text", "type": "code-example", "context": "Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22987"} {"prompt": "Explain this text code:\n\nusers:\n- name: mmosley\n user:\n auth-provider:\n config:\n client-id: kubernetes\n client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5\n id-token: 
eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n idp-certificate-authority: /root/ca.pem\n idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP\n refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq\n name: oidc", "completion": "This text code Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22988"} {"prompt": "Explain this bash code:\n\nkubectl 
--token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes", "completion": "This bash code Authenticating | Kubernetes webhook authentication is a mechanism to make an HTTP callout for verifying bearer tokens.", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | Kubernetes webhook authentication is a mechanism to make an HTTP callout for verifying bearer tokens.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22989"} {"prompt": "Complete this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authn-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.\n\n# users refers to the API server's webhook configuration.", "completion": "users:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. 
Provide one for the API server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authn-service\n user: name-of-api-server\n name: webhook", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22990"} {"prompt": "Explain this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authn-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.\n\n# users refers to the API server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. 
Provide one for the API server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authn-service\n user: name-of-api-server\n name: webhook", "completion": "This yaml code Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22991"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.", "completion": " # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the 
aud", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22992"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.\n # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the aud", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the aud", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22993"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.", "completion": " # 
Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | The remote service is expected to fill the status field of the request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22994"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.\n # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "completion": "This json code Authenticating | The remote service is expected to fill the status field of the request to indicate the 
success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The remote service is expected to fill the status field of the request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22995"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs", "completion": " # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22996"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": 
\"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs\n # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_22997"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs", "completion": " # or 
API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | An unsuccessful request would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_22998"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs\n # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "completion": "This json code Authenticating | An unsuccessful request would return:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | An unsuccessful request would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_explain_22999"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,", "completion": " # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. # If no erro", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23000"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,\n # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. # If no erro", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. 
# If no erro", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23001"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,", "completion": " # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. It is designed for use in combination with an authenticating proxy that sets these headers.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23002"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,\n # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "completion": "This json code Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. It is designed for use in combination with an authenticating proxy that sets these headers.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. 
It is designed for use in combination with an authenticating proxy that sets these headers.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23003"} {"prompt": "Explain this text code:\n\n--requestheader-username-headers=X-Remote-User\n--requestheader-group-headers=X-Remote-Group\n--requestheader-extra-headers-prefix=X-Remote-Extra-", "completion": "This text code Authenticating | this request:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | this request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23004"} {"prompt": "Complete this text code:\n\nGET / HTTP/1.1\nX-Remote-User: fido\nX-Remote-Group: dogs", "completion": "X-Remote-Group: dachshunds\nX-Remote-Extra-Acme.com%2Fproject: some-project\nX-Remote-Extra-Scopes: openid\nX-Remote-Extra-Scopes: profile", "language": "text", "type": "code-example", "context": "Authenticating | would result in this user info:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23005"} {"prompt": "Explain this text code:\n\nGET / HTTP/1.1\nX-Remote-User: fido\nX-Remote-Group: dogs\nX-Remote-Group: dachshunds\nX-Remote-Extra-Acme.com%2Fproject: some-project\nX-Remote-Extra-Scopes: openid\nX-Remote-Extra-Scopes: profile", "completion": "This text code Authenticating | would result in this user info:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | would result in this user info:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23006"} {"prompt": "Complete this text code:\n\nname: fido\ngroups:\n- dogs\n- dachshunds\nextra:", "completion": " acme.com/project:\n - some-project\n scopes:\n - openid\n - profile", "language": "text", "type": "code-example", "context": "Authenticating | In 
order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23007"} {"prompt": "Explain this text code:\n\nname: fido\ngroups:\n- dogs\n- dachshunds\nextra:\n acme.com/project:\n - some-project\n scopes:\n - openid\n - profile", "completion": "This text code Authenticating | In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "language": "text", "type": "code-example_explanation", "context": "Authenticating | In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23008"} {"prompt": "Complete this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1", "completion": "kind: AuthenticationConfiguration\nanonymous:\n enabled: true\n conditions:\n - path: /livez\n - path: /readyz\n - path: /healthz", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. 
Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23009"} {"prompt": "Explain this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\nanonymous:\n enabled: true\n conditions:\n - path: /livez\n - path: /readyz\n - path: /healthz", "completion": "This yaml code Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23010"} {"prompt": "Explain this text code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Group: developers\nImpersonate-Group: admins", "completion": "This text code Authenticating | An example of the impersonation headers used when impersonating a user with a UID and extra fields:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | An example of the impersonation headers used when impersonating a user with a UID and extra fields:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23011"} {"prompt": "Complete this text code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com\nImpersonate-Extra-acme.com%2Fproject: some-project", "completion": 
"Impersonate-Extra-scopes: view\nImpersonate-Extra-scopes: development\nImpersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b", "language": "text", "type": "code-example", "context": "Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23012"} {"prompt": "Explain this text code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com\nImpersonate-Extra-acme.com%2Fproject: some-project\nImpersonate-Extra-scopes: view\nImpersonate-Extra-scopes: development\nImpersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b", "completion": "This text code Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "language": "text", "type": "code-example_explanation", "context": "Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23013"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): User \"clark\" cannot get nodes at the cluster scope. 
(get nodes mynode)", "completion": "This text code Authenticating | Set the --as and --as-group flag:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Set the --as and --as-group flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23014"} {"prompt": "Explain this bash code:\n\nkubectl drain mynode --as=superman --as-group=system:masters", "completion": "This bash code Authenticating | To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the impersonate verb on the kind of attribute being impersonated (\"user\", \"", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the impersonate verb on the kind of attribute being impersonated (\"user\", \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23015"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: impersonator", "completion": "rules:\n- apiGroups: [\"\"]\n resources: [\"users\", \"groups\", \"serviceaccounts\"]\n verbs: [\"impersonate\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". 
To allow a user to", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23016"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: impersonator\nrules:\n- apiGroups: [\"\"]\n resources: [\"users\", \"groups\", \"serviceaccounts\"]\n verbs: [\"impersonate\"]", "completion": "This yaml code Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23017"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: scopes-and-uid-impersonator", "completion": "rules:\n# Can set \"Impersonate-Extra-scopes\" header and the \"Impersonate-Uid\" header.\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\", \"uids\"]\n verbs: [\"impersonate\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23018"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: scopes-and-uid-impersonator\nrules:\n# Can set \"Impersonate-Extra-scopes\" header and the \"Impersonate-Uid\" 
header.\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\", \"uids\"]\n verbs: [\"impersonate\"]", "completion": "This yaml code Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23019"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: limited-impersonator\nrules:\n# Can impersonate the user \"jane.doe@example.com\"\n- apiGroups: [\"\"]\n resources: [\"users\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"jane.doe@example.com\"]\n\n# Can impersonate the groups \"developers\" and \"admins\"\n- apiGroups: [\"\"]\n resources: [\"groups\"]", "completion": " verbs: [\"impersonate\"]\n resourceNames: [\"developers\",\"admins\"]\n\n# Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"view\", \"development\"]\n\n# Can impersonate the uid \"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"uids\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23020"} {"prompt": "Explain this yaml 
code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: limited-impersonator\nrules:\n# Can impersonate the user \"jane.doe@example.com\"\n- apiGroups: [\"\"]\n resources: [\"users\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"jane.doe@example.com\"]\n\n# Can impersonate the groups \"developers\" and \"admins\"\n- apiGroups: [\"\"]\n resources: [\"groups\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"developers\",\"admins\"]\n\n# Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"view\", \"development\"]\n\n# Can impersonate the uid \"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"uids\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"]", "completion": "This yaml code Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23021"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. 
Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin", "completion": "\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). 
Required.\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23022"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. 
Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). Required.\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "completion": "This yaml code Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. 
command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23023"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1beta1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n", "completion": " On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. 
Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). Optional.\n # Defaults to \"IfAvailable\".\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23024"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. 
Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1beta1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). 
Optional.\n # Defaults to \"IfAvailable\".\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "completion": "This yaml code Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23025"} {"prompt": "Complete this yaml code:\n\n- name: my-user\n user:\n exec:", "completion": " # Path relative to the directory of the kubeconfig\n command: \"./bin/example-client-go-exec-plugin\"\n apiVersion: \"client.authentication.k8s.io/v1\"\n interactiveMode: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. 
The executed command is passed a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23026"} {"prompt": "Explain this yaml code:\n\n- name: my-user\n user:\n exec:\n # Path relative to the directory of the kubeconfig\n command: \"./bin/example-client-go-exec-plugin\"\n apiVersion: \"client.authentication.k8s.io/v1\"\n interactiveMode: Never", "completion": "This yaml code Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. The executed command is passed a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. The executed command is passed a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23027"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",", "completion": " \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23028"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", 
"language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23029"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",", "completion": " \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23030"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "completion": "This json code Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. 
If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23031"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23032"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_explain_23033"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23034"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "completion": "This json code Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23035"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", 
\"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23036"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23037"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. 
The plugin will then be supplied this cluster-specific information in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23038"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "completion": "This json code Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. The plugin will then be supplied this cluster-specific information in", "language": "json", "type": "api-response_explanation", "context": "Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. The plugin will then be supplied this cluster-specific information in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23039"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {", "completion": " \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_completion_23040"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {\n \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23041"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {", "completion": " \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. 
This works whether", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23042"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {\n \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "completion": "This json code Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. This works whether", "language": "json", "type": "api-response_explanation", "context": "Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. 
This works whether", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23043"} {"prompt": "Explain this text code:\n\nPOST /apis/authentication.k8s.io/v1/selfsubjectreviews", "completion": "This text code Authenticating | Response example:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23044"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",", "completion": " \"kind\": \"SelfSubjectReview\"\n}", "language": "json", "type": "api-response", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23045"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\"\n}", "completion": "This json code Authenticating | Response example:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23046"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"name\": \"jane.doe\",\n \"uid\": \"b6c7cfd4-f166-11ec-8ea0-0242ac120002\",\n \"groups\": [\n \"viewers\",", "completion": " \"editors\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"provider_id\": [\"token.company.example\"]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | For convenience, the kubectl auth whoami command is present. 
Executing this command will produce the following output (yet different user attributes will be shown):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23047"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"name\": \"jane.doe\",\n \"uid\": \"b6c7cfd4-f166-11ec-8ea0-0242ac120002\",\n \"groups\": [\n \"viewers\",\n \"editors\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"provider_id\": [\"token.company.example\"]\n }\n }\n }\n}", "completion": "This json code Authenticating | For convenience, the kubectl auth whoami command is present. Executing this command will produce the following output (yet different user attributes will be shown):", "language": "json", "type": "api-response_explanation", "context": "Authenticating | For convenience, the kubectl auth whoami command is present. Executing this command will produce the following output (yet different user attributes will be shown):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23048"} {"prompt": "Explain this text code:\n\nATTRIBUTE VALUE\nUsername jane.doe\nGroups [system:authenticated]", "completion": "This text code Authenticating | Complex example including extra attributes", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Complex example including extra attributes", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23049"} {"prompt": "Complete this text code:\n\nATTRIBUTE VALUE\nUsername jane.doe\nUID b79dbf30-0c6a-11ed-861d-0242ac120002", "completion": "Groups [students teachers system:authenticated]\nExtra: skills [reading learning]\nExtra: subjects [math sports]", "language": "text", "type": "code-example", "context": "Authenticating | By 
providing the output flag, it is also possible to print the JSON or YAML representation of the result:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23050"} {"prompt": "Explain this text code:\n\nATTRIBUTE VALUE\nUsername jane.doe\nUID b79dbf30-0c6a-11ed-861d-0242ac120002\nGroups [students teachers system:authenticated]\nExtra: skills [reading learning]\nExtra: subjects [math sports]", "completion": "This text code Authenticating | By providing the output flag, it is also possible to print the JSON or YAML representation of the result:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | By providing the output flag, it is also possible to print the JSON or YAML representation of the result:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23051"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"username\": \"jane.doe\",\n \"uid\": \"b79dbf30-0c6a-11ed-861d-0242ac120002\",\n \"groups\": [\n \"students\",\n \"teachers\",\n \"system:authenticated\"\n ],", "completion": " \"extra\": {\n \"skills\": [\n \"reading\",\n \"learning\"\n ],\n \"subjects\": [\n \"math\",\n \"sports\"\n ]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23052"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"username\": \"jane.doe\",\n \"uid\": 
\"b79dbf30-0c6a-11ed-861d-0242ac120002\",\n \"groups\": [\n \"students\",\n \"teachers\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"skills\": [\n \"reading\",\n \"learning\"\n ],\n \"subjects\": [\n \"math\",\n \"sports\"\n ]\n }\n }\n }\n}", "completion": "This json code Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "language": "json", "type": "api-response_explanation", "context": "Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23053"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: SelfSubjectReview\nstatus:\n userInfo:\n username: jane.doe\n uid: b79dbf30-0c6a-11ed-861d-0242ac120002\n groups:\n - students", "completion": " - teachers\n - system:authenticated\n extra:\n skills:\n - reading\n - learning\n subjects:\n - math\n - sports", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23054"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: SelfSubjectReview\nstatus:\n userInfo:\n username: jane.doe\n uid: b79dbf30-0c6a-11ed-861d-0242ac120002\n groups:\n - students\n - teachers\n - system:authenticated\n extra:\n skills:\n - reading\n - learning\n subjects:\n - math\n - sports", "completion": "This yaml code Authenticating | This feature is extremely 
useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23055"} {"prompt": "Explain this bash code:\n\nopenssl req -new -key jbeda.pem -out jbeda-csr.pem -subj \"/CN=jbeda/O=app1/O=app2\"", "completion": "This bash code Authenticating | This would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\".", "language": "bash", "type": "shell-command_explanation", "context": "Authenticating | This would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\".", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23056"} {"prompt": "Explain this http code:\n\nAuthorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269", "completion": "This http code Authenticating | To allow for streamlined bootstrapping for new clusters, Kubernetes includes a dynamically-managed Bearer token type called a Bootstrap Token. These tokens are stored as Secrets in the kube-system nam", "language": "http", "type": "code-example_explanation", "context": "Authenticating | To allow for streamlined bootstrapping for new clusters, Kubernetes includes a dynamically-managed Bearer token type called a Bootstrap Token. 
These tokens are stored as Secrets in the kube-system nam", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23057"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9\nkind: Deployment\nmetadata:\n name: nginx-deployment\n namespace: default\nspec:\n replicas: 3", "completion": " template:\n metadata:\n # ...\n spec:\n serviceAccountName: bob-the-bot\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a s", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23058"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9\nkind: Deployment\nmetadata:\n name: nginx-deployment\n namespace: default\nspec:\n replicas: 3\n template:\n metadata:\n # ...\n spec:\n serviceAccountName: bob-the-bot\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a s", "language": "yaml", "type": "deployment-definition_explanation", "context": "Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. 
To manually create a s", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23059"} {"prompt": "Explain this text code:\n\nhttps://accounts.provider.example/.well-known/openid-configuration", "completion": "This text code Authenticating | JWT Authenticator is an authenticator to authenticate Kubernetes users using JWT compliant tokens. The authenticator will attempt to parse a raw ID token, verify it's been signed by the configured iss", "language": "text", "type": "code-example_explanation", "context": "Authenticating | JWT Authenticator is an authenticator to authenticate Kubernetes users using JWT compliant tokens. The authenticator will attempt to parse a raw ID token, verify it's been signed by the configured iss", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23060"} {"prompt": "Complete this json code:\n\n{\n \"iss\": \"https://example.com\", // must match the issuer.url\n \"aud\": [\"my-app\"], // at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs.", "completion": " \"exp\": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)\n \"\": \"user\" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression\n}", "language": "json", "type": "api-response", "context": "Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. 
The configuration file even allows you to specify CEL expres", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23061"} {"prompt": "Explain this json code:\n\n{\n \"iss\": \"https://example.com\", // must match the issuer.url\n \"aud\": [\"my-app\"], // at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs.\n \"exp\": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)\n \"\": \"user\" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression\n}", "completion": "This json code Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. The configuration file even allows you to specify CEL expres", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. The configuration file even allows you to specify CEL expres", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23062"} {"prompt": "Explain this text code:\n\napiserver_authentication_config_controller_automatic_reload_last_timestamp_seconds", "completion": "This text code Authenticating | You must specify the path to the authentication configuration using the --authentication-config command line argument to the API server. If you want to use command line arguments instead of the config", "language": "text", "type": "code-example_explanation", "context": "Authenticating | You must specify the path to the authentication configuration using the --authentication-config command line argument to the API server. 
If you want to use command line arguments instead of the config", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23063"} {"prompt": "Complete this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\n# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.\n# the maximum number of allowed authenticators is 64.\njwt:\n- issuer:\n # url must be unique across all authenticators.\n # url must not conflict with issuer configured in --service-account-issuer.\n url: https://example.com # Same as --oidc-issuer-url.\n # discoveryURL, if specified, overrides the URL used to fetch discovery\n # information instead of using \"{url}/.well-known/openid-configuration\".\n # The exact value specified is used, so \"/.well-known/openid-configuration\"\n # must be included in discoveryURL if needed.\n #\n # The \"issuer\" field in the fetched discovery information must match the \"issuer.url\" field\n # in the AuthenticationConfiguration and will be used to validate the \"iss\" claim in the presented JWT.\n # This is for scenarios where the well-known and jwks endpoints are hosted at a different\n # location than the issuer (such as locally in the cluster).\n # discoveryURL must be different from url if specified and must be unique across all authenticators.\n discoveryURL: https://discovery.example.com/.well-known/openid-configuration\n # PEM encoded CA certificates used to validate the connection when fetching\n # discovery information. 
If not set, the system verifier will be used.\n # Same value as the content of the file referenced by the --oidc-ca-file command line argument.\n certificateAuthority: \n # audiences is the set of acceptable audiences the JWT must be issued to.\n # At least one of the entries must match the \"aud\" claim in presented JWTs.\n audiences:\n - my-app # Same as --oidc-client-id.\n - my-other-app\n # this is required to be set to \"MatchAny\" when multiple audiences are specified.\n audienceMatchPolicy: MatchAny\n # egressSelectorType is an indicator of which egress selection should be used for sending all traffic related\n # to this issuer (discovery, JWKS, distributed claims, etc). If unspecified, no custom dialer is used.\n # The StructuredAuthenticationConfigurationEgressSelector feature gate must be enabled\n # before you can use the egressSelectorType field.\n # When specified, the valid choices are \"controlplane\" and \"cluster\". These correspond to the associated\n # values in the --egress-selector-config-file.\n # - controlplane: for traffic intended to go to the control plane.\n # - cluster: for traffic intended to go to the system being managed by Kubernetes.\n egressSelectorType: \n # rules applied to validate token claims to authenticate users.\n claimValidationRules:\n # Same as --oidc-required-claim key=value.\n - claim: hd\n requiredValue: example.com\n # Instead of claim and requiredValue, you can use expression to validate the claim.\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for validation to succeed.\n - expression: 'claims.hd == \"example.com\"'\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: the hd claim must be set to example.com\n - expression: 'claims.exp - claims.nbf <= 86400'\n message: total token lifetime must not exceed 24 hours\n claimMappings:", "completion": " # username represents an option for the username 
attribute.\n # This is the only required attribute.\n username:\n # Same as --oidc-username-claim. Mutually exclusive with username.expression.\n claim: \"sub\"\n # Same as --oidc-username-prefix. Mutually exclusive with username.expression.\n # if username.claim is set, username.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with username.claim and username.prefix.\n # expression is a CEL expression that evaluates to a string.\n #\n # 1. If username.expression uses 'claims.email', then 'claims.email_verified' must be used in\n # username.expression or extra[*].valueExpression or claimValidationRules[*].expression.\n # An example claim validation rule expression that matches the validation automatically\n # applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true) == true'.\n # By explicitly comparing the value to true, we let type-checking see the result will be a boolean, and\n # to make sure a non-boolean email_verified claim will be caught at runtime.\n # 2. If the username asserted based on username.expression is the empty string, the authentication\n # request will fail.\n expression: 'claims.username + \":external-user\"'\n # groups represents an option for the groups attribute.\n groups:\n # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.\n claim: \"sub\"\n # Same as --oidc-groups-prefix. 
Mutually exclusive with groups.expression.\n # if groups.claim is set, groups.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with groups.claim and groups.prefix.\n # expression is a CEL expression that evaluates to a string or a list of strings.\n expression: 'claims.roles.split(\",\")'\n # uid represents an option for the uid attribute.\n uid:\n # Mutually exclusive with uid.expression.\n claim: 'sub'\n # Mutually exclusive with uid.claim\n # expression is a CEL expression that evaluates to a string.\n expression: 'claims.sub'\n # extra attributes to be added to the UserInfo object. Keys must be domain-prefix path and must be unique.\n extra:\n # key is a string to use as the extra attribute key.\n # key must be a domain-prefix path (e.g. example.org/foo). All characters before the first \"/\" must be a valid\n # subdomain as defined by RFC 1123. All characters trailing the first \"/\" must\n # be valid HTTP Path characters as defined by RFC 3986.\n # k8s.io, kubernetes.io and their subdomains are reserved for Kubernetes use and cannot be used.\n # key must be lowercase and unique across all extra attributes.\n - key: 'example.com/tenant'\n # valueExpression is a CEL expression that evaluates to a string or a list of strings.\n valueExpression: 'claims.tenant'\n # validation rules applied to the final user object.\n userValidationRules:\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for the user to be valid.\n - expression: \"!user.username.startsWith('system:')\"\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: 'username cannot used reserved system: prefix'\n - expression: \"user.groups.all(group, !group.startsWith('system:'))\"\n message: 'groups cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | Claim 
validation rule expression", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23064"} {"prompt": "Explain this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\n# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.\n# the maximum number of allowed authenticators is 64.\njwt:\n- issuer:\n # url must be unique across all authenticators.\n # url must not conflict with issuer configured in --service-account-issuer.\n url: https://example.com # Same as --oidc-issuer-url.\n # discoveryURL, if specified, overrides the URL used to fetch discovery\n # information instead of using \"{url}/.well-known/openid-configuration\".\n # The exact value specified is used, so \"/.well-known/openid-configuration\"\n # must be included in discoveryURL if needed.\n #\n # The \"issuer\" field in the fetched discovery information must match the \"issuer.url\" field\n # in the AuthenticationConfiguration and will be used to validate the \"iss\" claim in the presented JWT.\n # This is for scenarios where the well-known and jwks endpoints are hosted at a different\n # location than the issuer (such as locally in the cluster).\n # discoveryURL must be different from url if specified and must be unique across all authenticators.\n discoveryURL: https://discovery.example.com/.well-known/openid-configuration\n # PEM encoded CA certificates used to validate the connection when fetching\n # discovery information. 
If not set, the system verifier will be used.\n # Same value as the content of the file referenced by the --oidc-ca-file command line argument.\n certificateAuthority: \n # audiences is the set of acceptable audiences the JWT must be issued to.\n # At least one of the entries must match the \"aud\" claim in presented JWTs.\n audiences:\n - my-app # Same as --oidc-client-id.\n - my-other-app\n # this is required to be set to \"MatchAny\" when multiple audiences are specified.\n audienceMatchPolicy: MatchAny\n # egressSelectorType is an indicator of which egress selection should be used for sending all traffic related\n # to this issuer (discovery, JWKS, distributed claims, etc). If unspecified, no custom dialer is used.\n # The StructuredAuthenticationConfigurationEgressSelector feature gate must be enabled\n # before you can use the egressSelectorType field.\n # When specified, the valid choices are \"controlplane\" and \"cluster\". These correspond to the associated\n # values in the --egress-selector-config-file.\n # - controlplane: for traffic intended to go to the control plane.\n # - cluster: for traffic intended to go to the system being managed by Kubernetes.\n egressSelectorType: \n # rules applied to validate token claims to authenticate users.\n claimValidationRules:\n # Same as --oidc-required-claim key=value.\n - claim: hd\n requiredValue: example.com\n # Instead of claim and requiredValue, you can use expression to validate the claim.\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for validation to succeed.\n - expression: 'claims.hd == \"example.com\"'\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: the hd claim must be set to example.com\n - expression: 'claims.exp - claims.nbf <= 86400'\n message: total token lifetime must not exceed 24 hours\n claimMappings:\n # username represents an option for the username attribute.\n # This 
is the only required attribute.\n username:\n # Same as --oidc-username-claim. Mutually exclusive with username.expression.\n claim: \"sub\"\n # Same as --oidc-username-prefix. Mutually exclusive with username.expression.\n # if username.claim is set, username.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with username.claim and username.prefix.\n # expression is a CEL expression that evaluates to a string.\n #\n # 1. If username.expression uses 'claims.email', then 'claims.email_verified' must be used in\n # username.expression or extra[*].valueExpression or claimValidationRules[*].expression.\n # An example claim validation rule expression that matches the validation automatically\n # applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true) == true'.\n # By explicitly comparing the value to true, we let type-checking see the result will be a boolean, and\n # to make sure a non-boolean email_verified claim will be caught at runtime.\n # 2. If the username asserted based on username.expression is the empty string, the authentication\n # request will fail.\n expression: 'claims.username + \":external-user\"'\n # groups represents an option for the groups attribute.\n groups:\n # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.\n claim: \"sub\"\n # Same as --oidc-groups-prefix. 
Mutually exclusive with groups.expression.\n # if groups.claim is set, groups.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with groups.claim and groups.prefix.\n # expression is a CEL expression that evaluates to a string or a list of strings.\n expression: 'claims.roles.split(\",\")'\n # uid represents an option for the uid attribute.\n uid:\n # Mutually exclusive with uid.expression.\n claim: 'sub'\n # Mutually exclusive with uid.claim\n # expression is a CEL expression that evaluates to a string.\n expression: 'claims.sub'\n # extra attributes to be added to the UserInfo object. Keys must be domain-prefix path and must be unique.\n extra:\n # key is a string to use as the extra attribute key.\n # key must be a domain-prefix path (e.g. example.org/foo). All characters before the first \"/\" must be a valid\n # subdomain as defined by RFC 1123. All characters trailing the first \"/\" must\n # be valid HTTP Path characters as defined by RFC 3986.\n # k8s.io, kubernetes.io and their subdomains are reserved for Kubernetes use and cannot be used.\n # key must be lowercase and unique across all extra attributes.\n - key: 'example.com/tenant'\n # valueExpression is a CEL expression that evaluates to a string or a list of strings.\n valueExpression: 'claims.tenant'\n # validation rules applied to the final user object.\n userValidationRules:\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for the user to be valid.\n - expression: \"!user.username.startsWith('system:')\"\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: 'username cannot used reserved system: prefix'\n - expression: \"user.groups.all(group, !group.startsWith('system:'))\"\n message: 'groups cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | Claim validation rule expression", "language": 
"yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | Claim validation rule expression", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23065"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimMappings:\n username:\n expression: 'claims.username + \":external-user\"'", "completion": " groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23066"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimMappings:\n username:\n expression: 'claims.username + \":external-user\"'\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23067"} {"prompt": "Explain this bash code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA", "completion": "This bash code Authenticating | where the token payload is:", "language": "bash", "type": "shell-command_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23068"} {"prompt": "Complete this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",", "completion": " \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_completion_23069"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23070"} {"prompt": "Complete this json code:\n\n{\n \"username\": \"foo:external-user\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",", "completion": " \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23071"} {"prompt": "Explain this json code:\n\n{\n \"username\": \"foo:external-user\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",\n \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "completion": "This json code Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com 
audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "language": "json", "type": "api-response_explanation", "context": "Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23072"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"' # the token below does not have this claim, so validation will fail.\n message: the hd claim must be set to example.com\n claimMappings:", "completion": " username:\n expression: 'claims.username + \":external-user\"'\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23073"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"' # the token below does not have this claim, so validation will fail.\n message: the hd claim must be set to example.com\n claimMappings:\n username:\n expression: 'claims.username + 
\":external-user\"'\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n message: 'username cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23074"} {"prompt": "Explain this bash code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA", "completion": "This bash code Authenticating | where the token payload is:", "language": "bash", "type": "shell-command_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23075"} {"prompt": "Complete this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 
1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",", "completion": " \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23076"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. The API server will return 401 Unauthorized error.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. 
The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23077"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"'\n message: the hd claim must be set to example.com\n claimMappings:", "completion": " username:\n expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation.\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the username will be system:foo and expression will evaluate to false, so validation will fail.\n message: 'username cannot used reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23078"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n url: https://example.com\n audiences:\n - my-app\n claimValidationRules:\n - expression: 'claims.hd == \"example.com\"'\n message: the hd claim must be set to example.com\n claimMappings:\n username:\n expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation.\n groups:\n expression: 'claims.roles.split(\",\")'\n uid:\n expression: 'claims.sub'\n extra:\n - key: 'example.com/tenant'\n valueExpression: 'claims.tenant'\n userValidationRules:\n - expression: \"!user.username.startsWith('system:')\" # the username 
will be system:foo and expression will evaluate to false, so validation will fail.\n message: 'username cannot used reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23079"} {"prompt": "Explain this bash code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0.FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB-F0dCcgy91oCJeK_aBk-8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwt_IVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqa_Q0uaYwgy8mD807h7sBcUMjNd215ff_nFIHss-9zegH8GI1d9fiBf-g6zjkR1j987EP748khpQh9IxPjMJbSgG_uH5x80YFuqgEWwq-aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr_7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg", "completion": "This bash code Authenticating | where the token payload is:", "language": "bash", "type": "shell-command_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23080"} {"prompt": "Complete this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"hd\": \"example.com\",\n \"iat\": 1701113101,\n \"iss\": \"https://example.com\",", "completion": " \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\",\n \"nbf\": 1701113101,\n \"roles\": \"user,admin\",\n 
\"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23081"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"hd\": \"example.com\",\n \"iat\": 1701113101,\n \"iss\": \"https://example.com\",\n \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\",\n \"nbf\": 1701113101,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23082"} {"prompt": "Complete this json code:\n\n{\n \"username\": \"system:foo\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",", "completion": " \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | which will fail user validation because the username starts with system:. 
The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23083"} {"prompt": "Explain this json code:\n\n{\n \"username\": \"system:foo\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",\n \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "completion": "This json code Authenticating | which will fail user validation because the username starts with system:. The API server will return 401 Unauthorized error.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | which will fail user validation because the username starts with system:. The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23084"} {"prompt": "Explain this text code:\n\nStructuredAuthenticationConfigurationEgressSelector", "completion": "This text code Authenticating | Kubernetes does not provide an OpenID Connect Identity Provider. You can use an existing public OpenID Connect Identity Provider or run your own Identity Provider that supports the OpenID Connect prot", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Kubernetes does not provide an OpenID Connect Identity Provider. 
You can use an existing public OpenID Connect Identity Provider or run your own Identity Provider that supports the OpenID Connect prot", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23085"} {"prompt": "Complete this bash code:\n\nkubectl config set-credentials USER_NAME \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=( issuer url ) \\\n --auth-provider-arg=client-id=( your client id ) \\", "completion": " --auth-provider-arg=client-secret=( your client secret ) \\\n --auth-provider-arg=refresh-token=( your refresh token ) \\\n --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\\n --auth-provider-arg=id-token=( your id_token )", "language": "bash", "type": "kubectl-command", "context": "Authenticating | As an example, running the below command after authenticating to your identity provider:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23086"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials USER_NAME \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=( issuer url ) \\\n --auth-provider-arg=client-id=( your client id ) \\\n --auth-provider-arg=client-secret=( your client secret ) \\\n --auth-provider-arg=refresh-token=( your refresh token ) \\\n --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\\n --auth-provider-arg=id-token=( your id_token )", "completion": "This bash code Authenticating | As an example, running the below command after authenticating to your identity provider:", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | As an example, running the below command after authenticating to your identity provider:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23087"} {"prompt": "Complete this bash 
code:\n\nkubectl config set-credentials mmosley \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \\\n --auth-provider-arg=client-id=kubernetes \\", "completion": " --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \\\n --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\\n --auth-provider-arg=idp-certificate-authority=/root/ca.pem \\\n --auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw", "language": "bash", "type": "kubectl-command", "context": "Authenticating | Which would produce the below configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23088"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials mmosley \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \\\n --auth-provider-arg=client-id=kubernetes \\\n --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \\\n 
--auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\\n --auth-provider-arg=idp-certificate-authority=/root/ca.pem \\\n --auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw", "completion": "This bash code Authenticating | Which would produce the below configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | Which would produce the below configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23089"} {"prompt": "Complete this yaml code:\n\nusers:\n- name: mmosley\n user:\n auth-provider:\n config:\n client-id: kubernetes", "completion": " client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5\n id-token: 
eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n idp-certificate-authority: /root/ca.pem\n idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP\n refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq\n name: oidc", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23090"} {"prompt": "Explain this yaml code:\n\nusers:\n- name: mmosley\n user:\n auth-provider:\n config:\n client-id: kubernetes\n client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5\n id-token: 
eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n idp-certificate-authority: /root/ca.pem\n idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP\n refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq\n name: oidc", "completion": "This yaml code Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23091"} {"prompt": "Explain this bash code:\n\nkubectl 
--token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes", "completion": "This bash code Authenticating | Kubernetes webhook authentication is a mechanism to make an HTTP callout for verifying bearer tokens.", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | Kubernetes webhook authentication is a mechanism to make an HTTP callout for verifying bearer tokens.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23092"} {"prompt": "Complete this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authn-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.\n\n# users refers to the API server's webhook configuration.", "completion": "users:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. 
Provide one for the API server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authn-service\n user: name-of-api-server\n name: webhook", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23093"} {"prompt": "Explain this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authn-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.\n\n# users refers to the API server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. 
Provide one for the API server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authn-service\n user: name-of-api-server\n name: webhook", "completion": "This yaml code Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23094"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.", "completion": " # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list 
of the aud", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23095"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.\n # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "completion": "This yaml code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the aud", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the aud", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23096"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.", 
"completion": " # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The remote service is expected to fill the status field of the request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23097"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.\n # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "completion": "This yaml code Authenticating | The remote service is expected to fill the status field of the 
request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The remote service is expected to fill the status field of the request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23098"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs", "completion": " # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23099"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": 
\"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs\n # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "completion": "This yaml code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23100"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it 
can be recorded in logs", "completion": " # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | An unsuccessful request would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23101"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs\n # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "completion": "This yaml code Authenticating | An unsuccessful request would return:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | An unsuccessful request would return:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23102"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,", "completion": " # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. # If no erro", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23103"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,\n # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "completion": "This yaml code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. # If no erro", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. 
# If no erro", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23104"} {"prompt": "Complete this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,", "completion": " # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. It is designed for use in combination with an authenticating proxy that sets these headers.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23105"} {"prompt": "Explain this yaml code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,\n # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "completion": "This yaml code Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. It is designed for use in combination with an authenticating proxy that sets these headers.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. 
It is designed for use in combination with an authenticating proxy that sets these headers.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23106"} {"prompt": "Explain this text code:\n\n--requestheader-username-headers=X-Remote-User\n--requestheader-group-headers=X-Remote-Group\n--requestheader-extra-headers-prefix=X-Remote-Extra-", "completion": "This text code Authenticating | this request:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | this request:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23107"} {"prompt": "Complete this http code:\n\nGET / HTTP/1.1\nX-Remote-User: fido\nX-Remote-Group: dogs", "completion": "X-Remote-Group: dachshunds\nX-Remote-Extra-Acme.com%2Fproject: some-project\nX-Remote-Extra-Scopes: openid\nX-Remote-Extra-Scopes: profile", "language": "http", "type": "code-example", "context": "Authenticating | would result in this user info:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23108"} {"prompt": "Explain this http code:\n\nGET / HTTP/1.1\nX-Remote-User: fido\nX-Remote-Group: dogs\nX-Remote-Group: dachshunds\nX-Remote-Extra-Acme.com%2Fproject: some-project\nX-Remote-Extra-Scopes: openid\nX-Remote-Extra-Scopes: profile", "completion": "This http code Authenticating | would result in this user info:", "language": "http", "type": "code-example_explanation", "context": "Authenticating | would result in this user info:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23109"} {"prompt": "Complete this yaml code:\n\nname: fido\ngroups:\n- dogs\n- dachshunds\nextra:", "completion": " acme.com/project:\n - some-project\n scopes:\n - openid\n - profile", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating 
| In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23110"} {"prompt": "Explain this yaml code:\n\nname: fido\ngroups:\n- dogs\n- dachshunds\nextra:\n acme.com/project:\n - some-project\n scopes:\n - openid\n - profile", "completion": "This yaml code Authenticating | In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23111"} {"prompt": "Complete this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1", "completion": "kind: AuthenticationConfiguration\nanonymous:\n enabled: true\n conditions:\n - path: /livez\n - path: /readyz\n - path: /healthz", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. 
Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23112"} {"prompt": "Explain this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\nanonymous:\n enabled: true\n conditions:\n - path: /livez\n - path: /readyz\n - path: /healthz", "completion": "This yaml code Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23113"} {"prompt": "Explain this http code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Group: developers\nImpersonate-Group: admins", "completion": "This http code Authenticating | An example of the impersonation headers used when impersonating a user with a UID and extra fields:", "language": "http", "type": "code-example_explanation", "context": "Authenticating | An example of the impersonation headers used when impersonating a user with a UID and extra fields:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23114"} {"prompt": "Complete this http code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com\nImpersonate-Extra-acme.com%2Fproject: some-project", "completion": 
"Impersonate-Extra-scopes: view\nImpersonate-Extra-scopes: development\nImpersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b", "language": "http", "type": "code-example", "context": "Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23115"} {"prompt": "Explain this http code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com\nImpersonate-Extra-acme.com%2Fproject: some-project\nImpersonate-Extra-scopes: view\nImpersonate-Extra-scopes: development\nImpersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b", "completion": "This http code Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "language": "http", "type": "code-example_explanation", "context": "Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23116"} {"prompt": "Explain this none code:\n\nError from server (Forbidden): User \"clark\" cannot get nodes at the cluster scope. 
(get nodes mynode)", "completion": "This none code Authenticating | Set the --as and --as-group flag:", "language": "none", "type": "code-example_explanation", "context": "Authenticating | Set the --as and --as-group flag:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23117"} {"prompt": "Explain this bash code:\n\nkubectl drain mynode --as=superman --as-group=system:masters", "completion": "This bash code Authenticating | To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the impersonate verb on the kind of attribute being impersonated (\"user\", \"", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the impersonate verb on the kind of attribute being impersonated (\"user\", \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23118"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: impersonator", "completion": "rules:\n- apiGroups: [\"\"]\n resources: [\"users\", \"groups\", \"serviceaccounts\"]\n verbs: [\"impersonate\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". 
To allow a user to", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23119"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: impersonator\nrules:\n- apiGroups: [\"\"]\n resources: [\"users\", \"groups\", \"serviceaccounts\"]\n verbs: [\"impersonate\"]", "completion": "This yaml code Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23120"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: scopes-and-uid-impersonator", "completion": "rules:\n# Can set \"Impersonate-Extra-scopes\" header and the \"Impersonate-Uid\" header.\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\", \"uids\"]\n verbs: [\"impersonate\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23121"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: scopes-and-uid-impersonator\nrules:\n# Can set \"Impersonate-Extra-scopes\" header and the \"Impersonate-Uid\" 
header.\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\", \"uids\"]\n verbs: [\"impersonate\"]", "completion": "This yaml code Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23122"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: limited-impersonator\nrules:\n# Can impersonate the user \"jane.doe@example.com\"\n- apiGroups: [\"\"]\n resources: [\"users\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"jane.doe@example.com\"]\n\n# Can impersonate the groups \"developers\" and \"admins\"\n- apiGroups: [\"\"]\n resources: [\"groups\"]", "completion": " verbs: [\"impersonate\"]\n resourceNames: [\"developers\",\"admins\"]\n\n# Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"view\", \"development\"]\n\n# Can impersonate the uid \"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"uids\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23123"} {"prompt": "Explain this yaml 
code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: limited-impersonator\nrules:\n# Can impersonate the user \"jane.doe@example.com\"\n- apiGroups: [\"\"]\n resources: [\"users\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"jane.doe@example.com\"]\n\n# Can impersonate the groups \"developers\" and \"admins\"\n- apiGroups: [\"\"]\n resources: [\"groups\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"developers\",\"admins\"]\n\n# Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"view\", \"development\"]\n\n# Can impersonate the uid \"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"uids\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"]", "completion": "This yaml code Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23124"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. 
Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin", "completion": "\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). 
Required.\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23125"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. 
Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). Required.\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "completion": "This yaml code Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. 
command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23126"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1beta1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n", "completion": " On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. 
Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). Optional.\n # Defaults to \"IfAvailable\".\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23127"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. 
Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1beta1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). 
Optional.\n # Defaults to \"IfAvailable\".\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "completion": "This yaml code Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23128"} {"prompt": "Complete this yaml code:\n\n- name: my-user\n user:\n exec:", "completion": " # Path relative to the directory of the kubeconfig\n command: \"./bin/example-client-go-exec-plugin\"\n apiVersion: \"client.authentication.k8s.io/v1\"\n interactiveMode: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. 
The executed command is passed a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23129"} {"prompt": "Explain this yaml code:\n\n- name: my-user\n user:\n exec:\n # Path relative to the directory of the kubeconfig\n command: \"./bin/example-client-go-exec-plugin\"\n apiVersion: \"client.authentication.k8s.io/v1\"\n interactiveMode: Never", "completion": "This yaml code Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. The executed command is passed a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. The executed command is passed a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23130"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",", "completion": " \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23131"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", 
"language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23132"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",", "completion": " \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23133"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "completion": "This json code Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. 
If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23134"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23135"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_explain_23136"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23137"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "completion": "This json code Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23138"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", 
\"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23139"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23140"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. 
The plugin will then be supplied this cluster-specific information in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23141"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "completion": "This json code Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. The plugin will then be supplied this cluster-specific information in", "language": "json", "type": "api-response_explanation", "context": "Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. The plugin will then be supplied this cluster-specific information in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23142"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {", "completion": " \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_completion_23143"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {\n \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23144"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {", "completion": " \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. 
This works whether", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23145"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {\n \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "completion": "This json code Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. This works whether", "language": "json", "type": "api-response_explanation", "context": "Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. 
This works whether", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23146"} {"prompt": "Explain this http code:\n\nPOST /apis/authentication.k8s.io/v1/selfsubjectreviews", "completion": "This http code Authenticating | Response example:", "language": "http", "type": "code-example_explanation", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23147"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",", "completion": " \"kind\": \"SelfSubjectReview\"\n}", "language": "json", "type": "api-response", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23148"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\"\n}", "completion": "This json code Authenticating | Response example:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23149"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"name\": \"jane.doe\",\n \"uid\": \"b6c7cfd4-f166-11ec-8ea0-0242ac120002\",\n \"groups\": [\n \"viewers\",", "completion": " \"editors\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"provider_id\": [\"token.company.example\"]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | For convenience, the kubectl auth whoami command is present. 
Executing this command will produce the following output (yet different user attributes will be shown):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23150"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"name\": \"jane.doe\",\n \"uid\": \"b6c7cfd4-f166-11ec-8ea0-0242ac120002\",\n \"groups\": [\n \"viewers\",\n \"editors\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"provider_id\": [\"token.company.example\"]\n }\n }\n }\n}", "completion": "This json code Authenticating | For convenience, the kubectl auth whoami command is present. Executing this command will produce the following output (yet different user attributes will be shown):", "language": "json", "type": "api-response_explanation", "context": "Authenticating | For convenience, the kubectl auth whoami command is present. Executing this command will produce the following output (yet different user attributes will be shown):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23151"} {"prompt": "Explain this text code:\n\nATTRIBUTE VALUE\nUsername jane.doe\nGroups [system:authenticated]", "completion": "This text code Authenticating | Complex example including extra attributes", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Complex example including extra attributes", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23152"} {"prompt": "Complete this text code:\n\nATTRIBUTE VALUE\nUsername jane.doe\nUID b79dbf30-0c6a-11ed-861d-0242ac120002", "completion": "Groups [students teachers system:authenticated]\nExtra: skills [reading learning]\nExtra: subjects [math sports]", "language": "text", "type": "code-example", "context": "Authenticating | By 
providing the output flag, it is also possible to print the JSON or YAML representation of the result:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23153"} {"prompt": "Explain this text code:\n\nATTRIBUTE VALUE\nUsername jane.doe\nUID b79dbf30-0c6a-11ed-861d-0242ac120002\nGroups [students teachers system:authenticated]\nExtra: skills [reading learning]\nExtra: subjects [math sports]", "completion": "This text code Authenticating | By providing the output flag, it is also possible to print the JSON or YAML representation of the result:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | By providing the output flag, it is also possible to print the JSON or YAML representation of the result:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23154"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"username\": \"jane.doe\",\n \"uid\": \"b79dbf30-0c6a-11ed-861d-0242ac120002\",\n \"groups\": [\n \"students\",\n \"teachers\",\n \"system:authenticated\"\n ],", "completion": " \"extra\": {\n \"skills\": [\n \"reading\",\n \"learning\"\n ],\n \"subjects\": [\n \"math\",\n \"sports\"\n ]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23155"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"username\": \"jane.doe\",\n \"uid\": 
\"b79dbf30-0c6a-11ed-861d-0242ac120002\",\n \"groups\": [\n \"students\",\n \"teachers\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"skills\": [\n \"reading\",\n \"learning\"\n ],\n \"subjects\": [\n \"math\",\n \"sports\"\n ]\n }\n }\n }\n}", "completion": "This json code Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "language": "json", "type": "api-response_explanation", "context": "Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23156"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: SelfSubjectReview\nstatus:\n userInfo:\n username: jane.doe\n uid: b79dbf30-0c6a-11ed-861d-0242ac120002\n groups:\n - students", "completion": " - teachers\n - system:authenticated\n extra:\n skills:\n - reading\n - learning\n subjects:\n - math\n - sports", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23157"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: SelfSubjectReview\nstatus:\n userInfo:\n username: jane.doe\n uid: b79dbf30-0c6a-11ed-861d-0242ac120002\n groups:\n - students\n - teachers\n - system:authenticated\n extra:\n skills:\n - reading\n - learning\n subjects:\n - math\n - sports", "completion": "This yaml code Authenticating | This feature is extremely 
useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23158"} {"prompt": "Explain this text code:\n\nopenssl req -new -key jbeda.pem -out jbeda-csr.pem -subj \"/CN=jbeda/O=app1/O=app2\"", "completion": "This text code Authenticating | This would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\".", "language": "text", "type": "code-example_explanation", "context": "Authenticating | This would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\".", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23159"} {"prompt": "Explain this text code:\n\nAuthorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269", "completion": "This text code Authenticating | To allow for streamlined bootstrapping for new clusters, Kubernetes includes a dynamically-managed Bearer token type called a Bootstrap Token. These tokens are stored as Secrets in the kube-system nam", "language": "text", "type": "code-example_explanation", "context": "Authenticating | To allow for streamlined bootstrapping for new clusters, Kubernetes includes a dynamically-managed Bearer token type called a Bootstrap Token. 
These tokens are stored as Secrets in the kube-system nam", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23160"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9\nkind: Deployment\nmetadata:\n name: nginx-deployment\n namespace: default\nspec:\n replicas: 3", "completion": " template:\n metadata:\n # ...\n spec:\n serviceAccountName: bob-the-bot\n containers:\n - name: nginx\n image: nginx:1.14.2", "language": "yaml", "type": "deployment-definition", "context": "Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a s", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23161"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1 # this apiVersion is relevant as of Kubernetes 1.9\nkind: Deployment\nmetadata:\n name: nginx-deployment\n namespace: default\nspec:\n replicas: 3\n template:\n metadata:\n # ...\n spec:\n serviceAccountName: bob-the-bot\n containers:\n - name: nginx\n image: nginx:1.14.2", "completion": "This yaml code Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a s", "language": "yaml", "type": "deployment-definition_explanation", "context": "Authenticating | Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. 
To manually create a s", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23162"} {"prompt": "Complete this json code:\n\n{\n \"iss\": \"https://example.com\", // must match the issuer.url\n \"aud\": [\"my-app\"], // at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs.", "completion": " \"exp\": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)\n \"\": \"user\" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression\n}", "language": "json", "type": "api-response", "context": "Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. The configuration file even allows you to specify CEL expres", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23163"} {"prompt": "Explain this json code:\n\n{\n \"iss\": \"https://example.com\", // must match the issuer.url\n \"aud\": [\"my-app\"], // at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs.\n \"exp\": 1234567890, // token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)\n \"\": \"user\" // this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression\n}", "completion": "This json code Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. 
The configuration file even allows you to specify CEL expres", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The configuration file approach allows you to configure multiple JWT authenticators, each with a unique issuer.url and issuer.discoveryURL. The configuration file even allows you to specify CEL expres", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23164"} {"prompt": "Complete this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\n# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.\n# the maximum number of allowed authenticators is 64.\njwt:\n- issuer:\n # url must be unique across all authenticators.\n # url must not conflict with issuer configured in --service-account-issuer.\n url: https://example.com # Same as --oidc-issuer-url.\n # discoveryURL, if specified, overrides the URL used to fetch discovery\n # information instead of using \"{url}/.well-known/openid-configuration\".\n # The exact value specified is used, so \"/.well-known/openid-configuration\"\n # must be included in discoveryURL if needed.\n #\n # The \"issuer\" field in the fetched discovery information must match the \"issuer.url\" field\n # in the AuthenticationConfiguration and will be used to validate the \"iss\" claim in the presented JWT.\n # This is for scenarios where the well-known and jwks endpoints are hosted at a different\n # location than the issuer (such as locally in the cluster).\n # discoveryURL must be different from url if specified and must be unique across all authenticators.\n discoveryURL: https://discovery.example.com/.well-known/openid-configuration\n # PEM encoded CA certificates used to validate the connection when fetching\n # discovery information. 
If not set, the system verifier will be used.\n # Same value as the content of the file referenced by the --oidc-ca-file command line argument.\n certificateAuthority: \n # audiences is the set of acceptable audiences the JWT must be issued to.\n # At least one of the entries must match the \"aud\" claim in presented JWTs.\n audiences:\n - my-app # Same as --oidc-client-id.\n - my-other-app\n # this is required to be set to \"MatchAny\" when multiple audiences are specified.\n audienceMatchPolicy: MatchAny\n # egressSelectorType is an indicator of which egress selection should be used for sending all traffic related\n # to this issuer (discovery, JWKS, distributed claims, etc). If unspecified, no custom dialer is used.\n # The StructuredAuthenticationConfigurationEgressSelector feature gate must be enabled\n # before you can use the egressSelectorType field.\n # When specified, the valid choices are \"controlplane\" and \"cluster\". These correspond to the associated\n # values in the --egress-selector-config-file.\n # - controlplane: for traffic intended to go to the control plane.\n # - cluster: for traffic intended to go to the system being managed by Kubernetes.\n egressSelectorType: \n # rules applied to validate token claims to authenticate users.\n claimValidationRules:\n # Same as --oidc-required-claim key=value.\n - claim: hd\n requiredValue: example.com\n # Instead of claim and requiredValue, you can use expression to validate the claim.\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for validation to succeed.\n - expression: 'claims.hd == \"example.com\"'\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: the hd claim must be set to example.com\n - expression: 'claims.exp - claims.nbf <= 86400'\n message: total token lifetime must not exceed 24 hours\n claimMappings:", "completion": " # username represents an option for the username 
attribute.\n # This is the only required attribute.\n username:\n # Same as --oidc-username-claim. Mutually exclusive with username.expression.\n claim: \"sub\"\n # Same as --oidc-username-prefix. Mutually exclusive with username.expression.\n # if username.claim is set, username.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with username.claim and username.prefix.\n # expression is a CEL expression that evaluates to a string.\n #\n # 1. If username.expression uses 'claims.email', then 'claims.email_verified' must be used in\n # username.expression or extra[*].valueExpression or claimValidationRules[*].expression.\n # An example claim validation rule expression that matches the validation automatically\n # applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true) == true'.\n # By explicitly comparing the value to true, we let type-checking see the result will be a boolean, and\n # to make sure a non-boolean email_verified claim will be caught at runtime.\n # 2. If the username asserted based on username.expression is the empty string, the authentication\n # request will fail.\n expression: 'claims.username + \":external-user\"'\n # groups represents an option for the groups attribute.\n groups:\n # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.\n claim: \"sub\"\n # Same as --oidc-groups-prefix. 
Mutually exclusive with groups.expression.\n # if groups.claim is set, groups.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with groups.claim and groups.prefix.\n # expression is a CEL expression that evaluates to a string or a list of strings.\n expression: 'claims.roles.split(\",\")'\n # uid represents an option for the uid attribute.\n uid:\n # Mutually exclusive with uid.expression.\n claim: 'sub'\n # Mutually exclusive with uid.claim\n # expression is a CEL expression that evaluates to a string.\n expression: 'claims.sub'\n # extra attributes to be added to the UserInfo object. Keys must be domain-prefix path and must be unique.\n extra:\n # key is a string to use as the extra attribute key.\n # key must be a domain-prefix path (e.g. example.org/foo). All characters before the first \"/\" must be a valid\n # subdomain as defined by RFC 1123. All characters trailing the first \"/\" must\n # be valid HTTP Path characters as defined by RFC 3986.\n # k8s.io, kubernetes.io and their subdomains are reserved for Kubernetes use and cannot be used.\n # key must be lowercase and unique across all extra attributes.\n - key: 'example.com/tenant'\n # valueExpression is a CEL expression that evaluates to a string or a list of strings.\n valueExpression: 'claims.tenant'\n # validation rules applied to the final user object.\n userValidationRules:\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for the user to be valid.\n - expression: \"!user.username.startsWith('system:')\"\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: 'username cannot use the reserved system: prefix'\n - expression: \"user.groups.all(group, !group.startsWith('system:'))\"\n message: 'groups cannot use the reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | Claim 
validation rule expression", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23165"} {"prompt": "Explain this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\n# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.\n# the maximum number of allowed authenticators is 64.\njwt:\n- issuer:\n # url must be unique across all authenticators.\n # url must not conflict with issuer configured in --service-account-issuer.\n url: https://example.com # Same as --oidc-issuer-url.\n # discoveryURL, if specified, overrides the URL used to fetch discovery\n # information instead of using \"{url}/.well-known/openid-configuration\".\n # The exact value specified is used, so \"/.well-known/openid-configuration\"\n # must be included in discoveryURL if needed.\n #\n # The \"issuer\" field in the fetched discovery information must match the \"issuer.url\" field\n # in the AuthenticationConfiguration and will be used to validate the \"iss\" claim in the presented JWT.\n # This is for scenarios where the well-known and jwks endpoints are hosted at a different\n # location than the issuer (such as locally in the cluster).\n # discoveryURL must be different from url if specified and must be unique across all authenticators.\n discoveryURL: https://discovery.example.com/.well-known/openid-configuration\n # PEM encoded CA certificates used to validate the connection when fetching\n # discovery information. 
If not set, the system verifier will be used.\n # Same value as the content of the file referenced by the --oidc-ca-file command line argument.\n certificateAuthority: \n # audiences is the set of acceptable audiences the JWT must be issued to.\n # At least one of the entries must match the \"aud\" claim in presented JWTs.\n audiences:\n - my-app # Same as --oidc-client-id.\n - my-other-app\n # this is required to be set to \"MatchAny\" when multiple audiences are specified.\n audienceMatchPolicy: MatchAny\n # egressSelectorType is an indicator of which egress selection should be used for sending all traffic related\n # to this issuer (discovery, JWKS, distributed claims, etc). If unspecified, no custom dialer is used.\n # The StructuredAuthenticationConfigurationEgressSelector feature gate must be enabled\n # before you can use the egressSelectorType field.\n # When specified, the valid choices are \"controlplane\" and \"cluster\". These correspond to the associated\n # values in the --egress-selector-config-file.\n # - controlplane: for traffic intended to go to the control plane.\n # - cluster: for traffic intended to go to the system being managed by Kubernetes.\n egressSelectorType: \n # rules applied to validate token claims to authenticate users.\n claimValidationRules:\n # Same as --oidc-required-claim key=value.\n - claim: hd\n requiredValue: example.com\n # Instead of claim and requiredValue, you can use expression to validate the claim.\n # expression is a CEL expression that evaluates to a boolean.\n # all the expressions must evaluate to true for validation to succeed.\n - expression: 'claims.hd == \"example.com\"'\n # Message customizes the error message seen in the API server logs when the validation fails.\n message: the hd claim must be set to example.com\n - expression: 'claims.exp - claims.nbf <= 86400'\n message: total token lifetime must not exceed 24 hours\n claimMappings:\n # username represents an option for the username attribute.\n # This 
is the only required attribute.\n username:\n # Same as --oidc-username-claim. Mutually exclusive with username.expression.\n claim: \"sub\"\n # Same as --oidc-username-prefix. Mutually exclusive with username.expression.\n # if username.claim is set, username.prefix is required.\n # Explicitly set it to \"\" if no prefix is desired.\n prefix: \"\"\n # Mutually exclusive with username.claim and username.prefix.\n # expression is a CEL expression that evaluates to a string.\n #\n # 1. If username.expression uses 'claims.email', then 'claims.email_verified' must be used in\n # username.expression or extra[*].valueExpression or claimValidationRules[*].expression.\n # An example claim validation rule expression that matches the validation automatically\n # applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true) == true'.\n # By explicitly comparing the value to true, we let type-checking see the result will be a boolean, and\n # to make sure a non-boolean email_verified claim will be caught at runtime.\n # 2. If the username asserted based on username.expression is the empty string, the authentication\n # request will fail.\n expression: 'claims.username + \":external-user\"'\n # groups represents an option for the groups attribute.\n groups:\n # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.\n claim: \"sub\"\n # Same as --oidc-groups-prefix. 
Mutually exclusive with groups.expression.\n      # if groups.claim is set, groups.prefix is required.\n      # Explicitly set it to \"\" if no prefix is desired.\n      prefix: \"\"\n      # Mutually exclusive with groups.claim and groups.prefix.\n      # expression is a CEL expression that evaluates to a string or a list of strings.\n      expression: 'claims.roles.split(\",\")'\n    # uid represents an option for the uid attribute.\n    uid:\n      # Mutually exclusive with uid.expression.\n      claim: 'sub'\n      # Mutually exclusive with uid.claim\n      # expression is a CEL expression that evaluates to a string.\n      expression: 'claims.sub'\n    # extra attributes to be added to the UserInfo object. Keys must be domain-prefixed paths and must be unique.\n    extra:\n      # key is a string to use as the extra attribute key.\n      # key must be a domain-prefixed path (e.g. example.org/foo). All characters before the first \"/\" must be a valid\n      # subdomain as defined by RFC 1123. All characters trailing the first \"/\" must\n      # be valid HTTP Path characters as defined by RFC 3986.\n      # k8s.io, kubernetes.io and their subdomains are reserved for Kubernetes use and cannot be used.\n      # key must be lowercase and unique across all extra attributes.\n    - key: 'example.com/tenant'\n      # valueExpression is a CEL expression that evaluates to a string or a list of strings.\n      valueExpression: 'claims.tenant'\n  # validation rules applied to the final user object.\n  userValidationRules:\n  # expression is a CEL expression that evaluates to a boolean.\n  # all the expressions must evaluate to true for the user to be valid.\n  - expression: \"!user.username.startsWith('system:')\"\n    # Message customizes the error message seen in the API server logs when the validation fails.\n    message: 'username cannot use reserved system: prefix'\n  - expression: \"user.groups.all(group, !group.startsWith('system:'))\"\n    message: 'groups cannot use reserved system: prefix'", "completion": "This yaml code Authenticating | Claim validation rule expression", "language": 
"yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | Claim validation rule expression", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23166"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n    url: https://example.com\n    audiences:\n    - my-app\n  claimMappings:\n    username:\n      expression: 'claims.username + \":external-user\"'", "completion": "    groups:\n      expression: 'claims.roles.split(\",\")'\n    uid:\n      expression: 'claims.sub'\n    extra:\n    - key: 'example.com/tenant'\n      valueExpression: 'claims.tenant'\n  userValidationRules:\n  - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n    message: 'username cannot use reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23167"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n    url: https://example.com\n    audiences:\n    - my-app\n  claimMappings:\n    username:\n      expression: 'claims.username + \":external-user\"'\n    groups:\n      expression: 'claims.roles.split(\",\")'\n    uid:\n      expression: 'claims.sub'\n    extra:\n    - key: 'example.com/tenant'\n      valueExpression: 'claims.tenant'\n  userValidationRules:\n  - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n    message: 'username cannot use reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23168"} {"prompt": "Explain this text code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA", "completion": "This text code Authenticating | where the token payload is:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23169"} {"prompt": "Complete this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",", "completion": " \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_completion_23170"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object and successfully authenticate the user.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23171"} {"prompt": "Complete this json code:\n\n{\n \"username\": \"foo:external-user\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",", "completion": " \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23172"} {"prompt": "Explain this json code:\n\n{\n \"username\": \"foo:external-user\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",\n \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "completion": "This json code Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com 
audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "language": "json", "type": "api-response_explanation", "context": "Authenticating | apiVersion: apiserver.config.k8s.io/v1 kind: AuthenticationConfiguration jwt: - issuer: url: https://example.com audiences: - my-app claimValidationRules: - expression: 'claims.hd == \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23173"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n    url: https://example.com\n    audiences:\n    - my-app\n  claimValidationRules:\n  - expression: 'claims.hd == \"example.com\"' # the token below does not have this claim, so validation will fail.\n    message: the hd claim must be set to example.com\n  claimMappings:", "completion": "    username:\n      expression: 'claims.username + \":external-user\"'\n    groups:\n      expression: 'claims.roles.split(\",\")'\n    uid:\n      expression: 'claims.sub'\n    extra:\n    - key: 'example.com/tenant'\n      valueExpression: 'claims.tenant'\n  userValidationRules:\n  - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n    message: 'username cannot use reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23174"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n    url: https://example.com\n    audiences:\n    - my-app\n  claimValidationRules:\n  - expression: 'claims.hd == \"example.com\"' # the token below does not have this claim, so validation will fail.\n    message: the hd claim must be set to example.com\n  claimMappings:\n    username:\n      expression: 'claims.username + 
\":external-user\"'\n    groups:\n      expression: 'claims.roles.split(\",\")'\n    uid:\n      expression: 'claims.sub'\n    extra:\n    - key: 'example.com/tenant'\n      valueExpression: 'claims.tenant'\n  userValidationRules:\n  - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n    message: 'username cannot use reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23175"} {"prompt": "Explain this text code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA", "completion": "This text code Authenticating | where the token payload is:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23176"} {"prompt": "Complete this json code:\n\n{\n    \"aud\": \"kubernetes\",\n    \"exp\": 1703232949,\n    \"iat\": 
1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",", "completion": " \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23177"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"iat\": 1701107233,\n \"iss\": \"https://example.com\",\n \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n \"nbf\": 1701107233,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. The API server will return 401 Unauthorized error.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will fail to authenticate because the hd claim is not set to example.com. 
The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23178"} {"prompt": "Complete this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n    url: https://example.com\n    audiences:\n    - my-app\n  claimValidationRules:\n  - expression: 'claims.hd == \"example.com\"'\n    message: the hd claim must be set to example.com\n  claimMappings:", "completion": "    username:\n      expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation.\n    groups:\n      expression: 'claims.roles.split(\",\")'\n    uid:\n      expression: 'claims.sub'\n    extra:\n    - key: 'example.com/tenant'\n      valueExpression: 'claims.tenant'\n  userValidationRules:\n  - expression: \"!user.username.startsWith('system:')\" # the username will be system:foo and expression will evaluate to false, so validation will fail.\n    message: 'username cannot use reserved system: prefix'", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23179"} {"prompt": "Explain this yaml code:\n\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\njwt:\n- issuer:\n    url: https://example.com\n    audiences:\n    - my-app\n  claimValidationRules:\n  - expression: 'claims.hd == \"example.com\"'\n    message: the hd claim must be set to example.com\n  claimMappings:\n    username:\n      expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation.\n    groups:\n      expression: 'claims.roles.split(\",\")'\n    uid:\n      expression: 'claims.sub'\n    extra:\n    - key: 'example.com/tenant'\n      valueExpression: 'claims.tenant'\n  userValidationRules:\n  - expression: \"!user.username.startsWith('system:')\" # the username 
will be system:foo and expression will evaluate to false, so validation will fail.\n    message: 'username cannot use reserved system: prefix'", "completion": "This yaml code Authenticating | where the token payload is:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23180"} {"prompt": "Explain this text code:\n\nTOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0.FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB-F0dCcgy91oCJeK_aBk-8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwt_IVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqa_Q0uaYwgy8mD807h7sBcUMjNd215ff_nFIHss-9zegH8GI1d9fiBf-g6zjkR1j987EP748khpQh9IxPjMJbSgG_uH5x80YFuqgEWwq-aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr_7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg", "completion": "This text code Authenticating | where the token payload is:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | where the token payload is:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23181"} {"prompt": "Complete this json code:\n\n{\n    \"aud\": \"kubernetes\",\n    \"exp\": 1703232949,\n    \"hd\": \"example.com\",\n    \"iat\": 1701113101,\n    \"iss\": \"https://example.com\",", "completion": "    \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\",\n    \"nbf\": 1701113101,\n    \"roles\": \"user,admin\",\n    
\"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "language": "json", "type": "api-response", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23182"} {"prompt": "Explain this json code:\n\n{\n \"aud\": \"kubernetes\",\n \"exp\": 1703232949,\n \"hd\": \"example.com\",\n \"iat\": 1701113101,\n \"iss\": \"https://example.com\",\n \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\",\n \"nbf\": 1701113101,\n \"roles\": \"user,admin\",\n \"sub\": \"auth\",\n \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n \"username\": \"foo\"\n }", "completion": "This json code Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The token with the above AuthenticationConfiguration will produce the following UserInfo object:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23183"} {"prompt": "Complete this json code:\n\n{\n \"username\": \"system:foo\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",", "completion": " \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | which will fail user validation because the username starts with system:. 
The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23184"} {"prompt": "Explain this json code:\n\n{\n \"username\": \"system:foo\",\n \"uid\": \"auth\",\n \"groups\": [\n \"user\",\n \"admin\"\n ],\n \"extra\": {\n \"example.com/tenant\": [\"72f988bf-86f1-41af-91ab-2d7cd011db4a\"]\n }\n}", "completion": "This json code Authenticating | which will fail user validation because the username starts with system:. The API server will return 401 Unauthorized error.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | which will fail user validation because the username starts with system:. The API server will return 401 Unauthorized error.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23185"} {"prompt": "Complete this bash code:\n\nkubectl config set-credentials USER_NAME \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=( issuer url ) \\\n --auth-provider-arg=client-id=( your client id ) \\", "completion": " --auth-provider-arg=client-secret=( your client secret ) \\\n --auth-provider-arg=refresh-token=( your refresh token ) \\\n --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\\n --auth-provider-arg=id-token=( your id_token )", "language": "bash", "type": "kubectl-command", "context": "Authenticating | As an example, running the below command after authenticating to your identity provider:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23186"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials USER_NAME \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=( issuer url ) \\\n --auth-provider-arg=client-id=( your client id ) \\\n --auth-provider-arg=client-secret=( your client secret ) \\\n 
--auth-provider-arg=refresh-token=( your refresh token ) \\\n --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\\n --auth-provider-arg=id-token=( your id_token )", "completion": "This bash code Authenticating | As an example, running the below command after authenticating to your identity provider:", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | As an example, running the below command after authenticating to your identity provider:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23187"} {"prompt": "Complete this bash code:\n\nkubectl config set-credentials mmosley \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \\\n --auth-provider-arg=client-id=kubernetes \\", "completion": " --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \\\n --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\\n --auth-provider-arg=idp-certificate-authority=/root/ca.pem \\\n 
--auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw", "language": "bash", "type": "kubectl-command", "context": "Authenticating | Which would produce the below configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23188"} {"prompt": "Explain this bash code:\n\nkubectl config set-credentials mmosley \\\n --auth-provider=oidc \\\n --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP \\\n --auth-provider-arg=client-id=kubernetes \\\n --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 \\\n --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\\n --auth-provider-arg=idp-certificate-authority=/root/ca.pem \\\n 
--auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw", "completion": "This bash code Authenticating | Which would produce the below configuration:", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | Which would produce the below configuration:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23189"} {"prompt": "Complete this text code:\n\nusers:\n- name: mmosley\n user:\n auth-provider:\n config:\n client-id: kubernetes", "completion": " client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5\n id-token: 
eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n idp-certificate-authority: /root/ca.pem\n idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP\n refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq\n name: oidc", "language": "text", "type": "code-example", "context": "Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23190"} {"prompt": "Explain this text code:\n\nusers:\n- name: mmosley\n user:\n auth-provider:\n config:\n client-id: kubernetes\n client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5\n id-token: 
eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n idp-certificate-authority: /root/ca.pem\n idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP\n refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq\n name: oidc", "completion": "This text code Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret storing the new values for the refresh_token and id_token in your .kube/config.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23191"} {"prompt": "Explain this bash code:\n\nkubectl 
--token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes", "completion": "This bash code Authenticating | Kubernetes webhook authentication is a mechanism to make an HTTP callout for verifying bearer tokens.", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | Kubernetes webhook authentication is a mechanism to make an HTTP callout for verifying bearer tokens.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23192"} {"prompt": "Complete this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authn-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.\n\n# users refers to the API server's webhook configuration.", "completion": "users:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. 
Provide one for the API server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authn-service\n user: name-of-api-server\n name: webhook", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23193"} {"prompt": "Explain this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authn-service\n cluster:\n certificate-authority: /path/to/ca.pem # CA for verifying the remote service.\n server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.\n\n# users refers to the API server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. 
Provide one for the API server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authn-service\n user: name-of-api-server\n name: webhook", "completion": "This yaml code Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | When a client attempts to authenticate with the API server using a bearer token as discussed above, the authentication webhook POSTs a JSON-serialized TokenReview object containing the token to the re", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23194"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.", "completion": " # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the 
aud", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23195"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.\n # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the aud", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { # Opaque bearer token sent to the API server \"token\": \"014fbff9a07c...\", # Optional list of the aud", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23196"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.", "completion": " # 
Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | The remote service is expected to fill the status field of the request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23197"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"spec\": {\n # Opaque bearer token sent to the API server\n \"token\": \"014fbff9a07c...\",\n\n # Optional list of the audience identifiers for the server the token was presented to.\n # Audience-aware token authenticators (for example, OIDC token authenticators)\n # should verify the token was intended for at least one of the audiences in this list,\n # and return the intersection of this list and the valid audiences for the token in the response status.\n # This ensures the token is valid to authenticate to the server it was presented to.\n # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\", \"https://myserver.internal.example.com\"]\n }\n}", "completion": "This json code Authenticating | The remote service is expected to fill the status field of the request to indicate the 
success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The remote service is expected to fill the status field of the request to indicate the success of the login. The response body's spec field is ignored and may be omitted. The remote service must retur", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23198"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs", "completion": " # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23199"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": 
\"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs\n # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { # Required \"username\": \"janedoe@example.com\", # Opt", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23200"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs", "completion": " # or 
API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | An unsuccessful request would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23201"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": true,\n \"user\": {\n # Required\n \"username\": \"janedoe@example.com\",\n # Optional\n \"uid\": \"42\",\n # Optional group memberships\n \"groups\": [\"developers\", \"qa\"],\n # Optional additional information provided by the authenticator.\n # This should not contain confidential data, as it can be recorded in logs\n # or API objects, and is made available to admission webhooks.\n \"extra\": {\n \"extrafield1\": [\n \"extravalue1\",\n \"extravalue2\"\n ]\n }\n },\n # Optional list audience-aware token authenticators can return,\n # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n \"audiences\": [\"https://myserver.example.com\"]\n }\n}", "completion": "This json code Authenticating | An unsuccessful request would return:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | An unsuccessful request would return:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_explain_23202"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,", "completion": " # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. # If no erro", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23203"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,\n # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. # If no erro", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false, # Optionally include details about why authentication failed. 
# If no erro", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23204"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,", "completion": " # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. It is designed for use in combination with an authenticating proxy that sets these headers.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23205"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1beta1\",\n \"kind\": \"TokenReview\",\n \"status\": {\n \"authenticated\": false,\n # Optionally include details about why authentication failed.\n # If no error is provided, the API will return a generic Unauthorized message.\n # The error field is ignored when authenticated=true.\n \"error\": \"Credentials are expired\"\n }\n}", "completion": "This json code Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. It is designed for use in combination with an authenticating proxy that sets these headers.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | The API server can be configured to identify users from request header values, such as X-Remote-User. 
It is designed for use in combination with an authenticating proxy that sets these headers.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23206"} {"prompt": "Complete this text code:\n\nGET / HTTP/1.1\nX-Remote-User: fido\nX-Remote-Group: dogs", "completion": "X-Remote-Group: dachshunds\nX-Remote-Extra-Acme.com%2Fproject: some-project\nX-Remote-Extra-Scopes: openid\nX-Remote-Extra-Scopes: profile", "language": "text", "type": "code-example", "context": "Authenticating | would result in this user info:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23207"} {"prompt": "Explain this text code:\n\nGET / HTTP/1.1\nX-Remote-User: fido\nX-Remote-Group: dogs\nX-Remote-Group: dachshunds\nX-Remote-Extra-Acme.com%2Fproject: some-project\nX-Remote-Extra-Scopes: openid\nX-Remote-Extra-Scopes: profile", "completion": "This text code Authenticating | would result in this user info:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | would result in this user info:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23208"} {"prompt": "Complete this text code:\n\nname: fido\ngroups:\n- dogs\n- dachshunds\nextra:", "completion": " acme.com/project:\n - some-project\n scopes:\n - openid\n - profile", "language": "text", "type": "code-example", "context": "Authenticating | In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23209"} {"prompt": "Explain this text code:\n\nname: fido\ngroups:\n- dogs\n- dachshunds\nextra:\n acme.com/project:\n - some-project\n scopes:\n - openid\n - 
profile", "completion": "This text code Authenticating | In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "language": "text", "type": "code-example_explanation", "context": "Authenticating | In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23210"} {"prompt": "Complete this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1", "completion": "kind: AuthenticationConfiguration\nanonymous:\n enabled: true\n conditions:\n - path: /livez\n - path: /readyz\n - path: /healthz", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23211"} {"prompt": "Explain this yaml code:\n\n---\n#\n# CAUTION: this is an example configuration.\n# Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io/v1\nkind: AuthenticationConfiguration\nanonymous:\n enabled: true\n conditions:\n - path: /livez\n - path: /readyz\n - path: /healthz", "completion": "This yaml code Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. 
Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | In the configuration above only the /livez, /readyz and /healthz endpoints are reachable by anonymous requests. Any other endpoints will not be reachable even if it is allowed by RBAC configuration.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23212"} {"prompt": "Explain this text code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Group: developers\nImpersonate-Group: admins", "completion": "This text code Authenticating | An example of the impersonation headers used when impersonating a user with a UID and extra fields:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | An example of the impersonation headers used when impersonating a user with a UID and extra fields:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23213"} {"prompt": "Complete this text code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com\nImpersonate-Extra-acme.com%2Fproject: some-project", "completion": "Impersonate-Extra-scopes: view\nImpersonate-Extra-scopes: development\nImpersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b", "language": "text", "type": "code-example", "context": "Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23214"} {"prompt": "Explain this text code:\n\nImpersonate-User: jane.doe@example.com\nImpersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com\nImpersonate-Extra-acme.com%2Fproject: 
some-project\nImpersonate-Extra-scopes: view\nImpersonate-Extra-scopes: development\nImpersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b", "completion": "This text code Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "language": "text", "type": "code-example_explanation", "context": "Authenticating | When using kubectl set the --as command line argument to configure the Impersonate-User header, you can also set the --as-group flag to configure the Impersonate-Group header.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23215"} {"prompt": "Explain this bash code:\n\nkubectl drain mynode --as=superman --as-group=system:masters", "completion": "This bash code Authenticating | To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the impersonate verb on the kind of attribute being impersonated (\"user\", \"", "language": "bash", "type": "kubectl-command_explanation", "context": "Authenticating | To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the impersonate verb on the kind of attribute being impersonated (\"user\", \"", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23216"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: impersonator", "completion": "rules:\n- apiGroups: [\"\"]\n resources: [\"users\", \"groups\", \"serviceaccounts\"]\n verbs: [\"impersonate\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. 
Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23217"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: impersonator\nrules:\n- apiGroups: [\"\"]\n resources: [\"users\", \"groups\", \"serviceaccounts\"]\n verbs: [\"impersonate\"]", "completion": "This yaml code Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | For impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" apiGroup. Extra fields are evaluated as sub-resources of the resource \"userextras\". To allow a user to", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23218"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: scopes-and-uid-impersonator", "completion": "rules:\n# Can set \"Impersonate-Extra-scopes\" header and the \"Impersonate-Uid\" header.\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\", \"uids\"]\n verbs: [\"impersonate\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23219"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: scopes-and-uid-impersonator\nrules:\n# 
Can set \"Impersonate-Extra-scopes\" header and the \"Impersonate-Uid\" header.\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\", \"uids\"]\n verbs: [\"impersonate\"]", "completion": "This yaml code Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The values of impersonation headers can also be restricted by limiting the set of resourceNames a resource can take.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23220"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: limited-impersonator\nrules:\n# Can impersonate the user \"jane.doe@example.com\"\n- apiGroups: [\"\"]\n resources: [\"users\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"jane.doe@example.com\"]\n\n# Can impersonate the groups \"developers\" and \"admins\"\n- apiGroups: [\"\"]\n resources: [\"groups\"]", "completion": " verbs: [\"impersonate\"]\n resourceNames: [\"developers\",\"admins\"]\n\n# Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"view\", \"development\"]\n\n# Can impersonate the uid \"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"uids\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_completion_23221"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: limited-impersonator\nrules:\n# Can impersonate the user \"jane.doe@example.com\"\n- apiGroups: [\"\"]\n resources: [\"users\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"jane.doe@example.com\"]\n\n# Can impersonate the groups \"developers\" and \"admins\"\n- apiGroups: [\"\"]\n resources: [\"groups\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"developers\",\"admins\"]\n\n# Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"userextras/scopes\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"view\", \"development\"]\n\n# Can impersonate the uid \"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"\n- apiGroups: [\"authentication.k8s.io\"]\n resources: [\"uids\"]\n verbs: [\"impersonate\"]\n resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"]", "completion": "This yaml code Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | k8s.io/client-go and tools using it such as kubectl and kubelet are able to execute an external command to receive user credentials.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23222"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. 
Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin", "completion": "\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). 
Required.\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23223"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. 
Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). Required.\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "completion": "This yaml code Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. 
command: \"example-client-go-exec-plugin\" # API version to use when decoding the", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23224"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1beta1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n", "completion": " On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. 
Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). Optional.\n # Defaults to \"IfAvailable\".\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23225"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n user:\n exec:\n # Command to execute. Required.\n command: \"example-client-go-exec-plugin\"\n\n # API version to use when decoding the ExecCredentials resource. 
Required.\n #\n # The API version returned by the plugin MUST match the version listed here.\n #\n # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),\n # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n apiVersion: \"client.authentication.k8s.io/v1beta1\"\n\n # Environment variables to set when executing the plugin. Optional.\n env:\n - name: \"FOO\"\n value: \"bar\"\n\n # Arguments to pass when executing the plugin. Optional.\n args:\n - \"arg1\"\n - \"arg2\"\n\n # Text shown to the user when the executable doesn't seem to be present. Optional.\n installHint: |\n example-client-go-exec-plugin is required to authenticate\n to the current cluster. It can be installed:\n\n On macOS: brew install example-client-go-exec-plugin\n\n On Ubuntu: apt-get install example-client-go-exec-plugin\n\n On Fedora: dnf install example-client-go-exec-plugin\n\n ... \n\n # Whether or not to provide cluster information, which could potentially contain\n # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n # environment variable.\n provideClusterInfo: true\n\n # The contract between the exec plugin and the standard input I/O stream. If the\n # contract cannot be satisfied, this plugin will not be run and an error will be\n # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n # or \"Always\" (this exec plugin requires standard input to function). 
Optional.\n # Defaults to \"IfAvailable\".\n interactiveMode: Never\nclusters:\n- name: my-cluster\n cluster:\n server: \"https://172.17.4.100:6443\"\n certificate-authority: \"/etc/kubernetes/ca.pem\"\n extensions:\n - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config\n extension:\n arbitrary: config\n this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n context:\n cluster: my-cluster\n user: my-user\ncurrent-context: my-cluster", "completion": "This yaml code Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | Relative command paths are interpreted as relative to the directory of the config file. If KUBECONFIG is set to /home/jane/kubeconfig and the exec command is ./bin/example-client-go-exec-plugin, the b", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23226"} {"prompt": "Complete this yaml code:\n\n- name: my-user\n user:\n exec:", "completion": " # Path relative to the directory of the kubeconfig\n command: \"./bin/example-client-go-exec-plugin\"\n apiVersion: \"client.authentication.k8s.io/v1\"\n interactiveMode: Never", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. 
The executed command is passed a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23227"} {"prompt": "Explain this yaml code:\n\n- name: my-user\n user:\n exec:\n # Path relative to the directory of the kubeconfig\n command: \"./bin/example-client-go-exec-plugin\"\n apiVersion: \"client.authentication.k8s.io/v1\"\n interactiveMode: Never", "completion": "This yaml code Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. The executed command is passed a", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | The executed command prints an ExecCredential object to stdout. k8s.io/client-go authenticates against the Kubernetes API using the returned credentials in the status. The executed command is passed a", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23228"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",", "completion": " \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23229"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", 
"language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23230"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",", "completion": " \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23231"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\"\n }\n}", "completion": "This json code Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. 
If the plugin returns a different certificate and key on a subsequent call, k8s.io/client-go will close", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23232"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23233"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\", \"cli", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_explain_23234"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23235"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n}", "completion": "This json code Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Optionally, the response can include the expiry of the credential formatted as a RFC 3339 timestamp.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23236"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", 
\"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23237"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } }", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23238"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {", "completion": " \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. 
The plugin will then be supplied this cluster-specific information in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23239"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"status\": {\n \"token\": \"my-bearer-token\",\n \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n }\n}", "completion": "This json code Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. The plugin will then be supplied this cluster-specific information in", "language": "json", "type": "api-response_explanation", "context": "Authenticating | To enable the exec plugin to obtain cluster-specific information, set provideClusterInfo on the user.exec field in the kubeconfig. The plugin will then be supplied this cluster-specific information in", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23240"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {", "completion": " \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": 
"k8s_00731_code_completion_23241"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {\n \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "completion": "This json code Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "language": "json", "type": "api-response_explanation", "context": "Authenticating | { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"spec\": { \"cluster\": { \"server\": \"https://172.17.4.100:6443\", \"certificate-authority-data\": \"LS0", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23242"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {", "completion": " \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. 
This works whether", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23243"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"client.authentication.k8s.io/v1beta1\",\n \"kind\": \"ExecCredential\",\n \"spec\": {\n \"cluster\": {\n \"server\": \"https://172.17.4.100:6443\",\n \"certificate-authority-data\": \"LS0t...\",\n \"config\": {\n \"arbitrary\": \"config\",\n \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n }\n },\n \"interactive\": true\n }\n}", "completion": "This json code Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. This works whether", "language": "json", "type": "api-response_explanation", "context": "Authenticating | If your cluster has the API enabled, you can use the SelfSubjectReview API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. 
This works whether", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23244"} {"prompt": "Explain this text code:\n\nPOST /apis/authentication.k8s.io/v1/selfsubjectreviews", "completion": "This text code Authenticating | Response example:", "language": "text", "type": "code-example_explanation", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23245"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",", "completion": " \"kind\": \"SelfSubjectReview\"\n}", "language": "json", "type": "api-response", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23246"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\"\n}", "completion": "This json code Authenticating | Response example:", "language": "json", "type": "api-response_explanation", "context": "Authenticating | Response example:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23247"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"name\": \"jane.doe\",\n \"uid\": \"b6c7cfd4-f166-11ec-8ea0-0242ac120002\",\n \"groups\": [\n \"viewers\",", "completion": " \"editors\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"provider_id\": [\"token.company.example\"]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | For convenience, the kubectl auth whoami command is present. 
Executing this command will produce the following output (yet different user attributes will be shown):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23248"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"name\": \"jane.doe\",\n \"uid\": \"b6c7cfd4-f166-11ec-8ea0-0242ac120002\",\n \"groups\": [\n \"viewers\",\n \"editors\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"provider_id\": [\"token.company.example\"]\n }\n }\n }\n}", "completion": "This json code Authenticating | For convenience, the kubectl auth whoami command is present. Executing this command will produce the following output (yet different user attributes will be shown):", "language": "json", "type": "api-response_explanation", "context": "Authenticating | For convenience, the kubectl auth whoami command is present. Executing this command will produce the following output (yet different user attributes will be shown):", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23249"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"username\": \"jane.doe\",\n \"uid\": \"b79dbf30-0c6a-11ed-861d-0242ac120002\",\n \"groups\": [\n \"students\",\n \"teachers\",\n \"system:authenticated\"\n ],", "completion": " \"extra\": {\n \"skills\": [\n \"reading\",\n \"learning\"\n ],\n \"subjects\": [\n \"math\",\n \"sports\"\n ]\n }\n }\n }\n}", "language": "json", "type": "api-response", "context": "Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "source": 
"https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23250"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authentication.k8s.io/v1\",\n \"kind\": \"SelfSubjectReview\",\n \"status\": {\n \"userInfo\": {\n \"username\": \"jane.doe\",\n \"uid\": \"b79dbf30-0c6a-11ed-861d-0242ac120002\",\n \"groups\": [\n \"students\",\n \"teachers\",\n \"system:authenticated\"\n ],\n \"extra\": {\n \"skills\": [\n \"reading\",\n \"learning\"\n ],\n \"subjects\": [\n \"math\",\n \"sports\"\n ]\n }\n }\n }\n}", "completion": "This json code Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "language": "json", "type": "api-response_explanation", "context": "Authenticating | apiVersion: authentication.k8s.io/v1 kind: SelfSubjectReview status: userInfo: username: jane.doe uid: b79dbf30-0c6a-11ed-861d-0242ac120002 groups: - students - teachers - sy", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23251"} {"prompt": "Complete this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: SelfSubjectReview\nstatus:\n userInfo:\n username: jane.doe\n uid: b79dbf30-0c6a-11ed-861d-0242ac120002\n groups:\n - students", "completion": " - teachers\n - system:authenticated\n extra:\n skills:\n - reading\n - learning\n subjects:\n - math\n - sports", "language": "yaml", "type": "kubernetes-manifest", "context": "Authenticating | This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_completion_23252"} {"prompt": "Explain this yaml code:\n\napiVersion: authentication.k8s.io/v1\nkind: 
SelfSubjectReview\nstatus:\n userInfo:\n username: jane.doe\n uid: b79dbf30-0c6a-11ed-861d-0242ac120002\n groups:\n - students\n - teachers\n - system:authenticated\n extra:\n skills:\n - reading\n - learning\n subjects:\n - math\n - sports", "completion": "This yaml code Authenticating | This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Authenticating | This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use webhook token authentication or authenticating proxy.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/authentication/", "id": "k8s_00731_code_explain_23253"} {"prompt": "Complete this yaml code:\n\napiVersion: windows.k8s.io/v1\nkind: GMSACredentialSpec\nmetadata:\n name: gmsa-WebApp1 # This is an arbitrary name but it will be used as a reference\ncredspec:\n ActiveDirectoryConfig:\n GroupManagedServiceAccounts:\n - Name: WebApp1 # Username of the GMSA account\n Scope: CONTOSO # NETBIOS Domain Name\n - Name: WebApp1 # Username of the GMSA account", "completion": " Scope: contoso.com # DNS Domain Name\n CmsPlugins:\n - ActiveDirectory\n DomainJoinConfig:\n DnsName: contoso.com # DNS Domain Name\n DnsTreeName: contoso.com # DNS Domain Name Root\n Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID of the Domain\n MachineAccountName: WebApp1 # Username of the GMSA account\n NetBiosName: CONTOSO # NETBIOS Domain Name\n Sid: S-1-5-21-2126449477-2524075714-3094792973 # SID of the Domain", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23254"} {"prompt": "Explain this yaml code:\n\napiVersion: windows.k8s.io/v1\nkind: GMSACredentialSpec\nmetadata:\n name: gmsa-WebApp1 # This is an arbitrary name but it will be used as a reference\ncredspec:\n ActiveDirectoryConfig:\n GroupManagedServiceAccounts:\n - Name: WebApp1 # Username of the GMSA account\n Scope: CONTOSO # NETBIOS Domain Name\n - Name: WebApp1 # Username of the GMSA account\n Scope: contoso.com # DNS Domain Name\n CmsPlugins:\n - ActiveDirectory\n DomainJoinConfig:\n DnsName: contoso.com # DNS Domain Name\n DnsTreeName: contoso.com # DNS Domain Name Root\n Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID of the Domain\n MachineAccountName: WebApp1 # Username of the GMSA account\n NetBiosName: CONTOSO # NETBIOS Domain Name\n Sid: S-1-5-21-2126449477-2524075714-3094792973 # SID of the Domain", "completion": "This yaml code Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23255"} {"prompt": "Complete this yaml code:\n\n# Create the Role to read the credspec\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: webapp1-role", "completion": "rules:\n- apiGroups: [\"windows.k8s.io\"]\n resources: [\"gmsacredentialspecs\"]\n verbs: [\"use\"]\n resourceNames: [\"gmsa-WebApp1\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure GMSA for Windows 
Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role create above. This authorizes the service account to use the desired GMSA credential spec resource. The foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23256"} {"prompt": "Explain this yaml code:\n\n# Create the Role to read the credspec\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: webapp1-role\nrules:\n- apiGroups: [\"windows.k8s.io\"]\n resources: [\"gmsacredentialspecs\"]\n verbs: [\"use\"]\n resourceNames: [\"gmsa-WebApp1\"]", "completion": "This yaml code Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role create above. This authorizes the service account to use the desired GMSA credential spec resource. The foll", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role create above. This authorizes the service account to use the desired GMSA credential spec resource. 
The foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23257"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: allow-default-svc-account-read-on-gmsa-WebApp1\n namespace: default\nsubjects:", "completion": "- kind: ServiceAccount\n name: default\n namespace: default\nroleRef:\n kind: ClusterRole\n name: webapp1-role\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "service-definition", "context": "Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. This configures all containers in", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23258"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: allow-default-svc-account-read-on-gmsa-WebApp1\n namespace: default\nsubjects:\n- kind: ServiceAccount\n name: default\n namespace: default\nroleRef:\n kind: ClusterRole\n name: webapp1-role\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. This configures all containers in", "language": "yaml", "type": "service-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. 
This configures all containers in", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23259"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:", "completion": " metadata:\n labels:\n run: with-creds\n spec:\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-webapp1\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23260"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:\n metadata:\n labels:\n run: with-creds\n spec:\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-webapp1\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. 
For example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23261"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:", "completion": " metadata:\n labels:\n run: with-creds\n spec:\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-Webapp1\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23262"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:\n metadata:\n labels:\n run: with-creds\n spec:\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-Webapp1\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure GMSA for Windows Pods and containers | As Pod specs 
with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23263"} {"prompt": "Explain this text code:\n\nreg add \"HKLM\\SYSTEM\\CurrentControlSet\\Services\\hns\\State\" /v EnableCompartmentNamespace /t REG_DWORD /d 1", "completion": "This text code Configure GMSA for Windows Pods and containers | Running Pods will then need to be recreated to pick up the behavior changes. More information on how this registry key is used can be found here", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | Running Pods will then need to be recreated to pick up the behavior changes. 
More information on how this registry key is used can be found here", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23264"} {"prompt": "Explain this bash code:\n\nkubectl exec -it iis-auth-7776966999-n5nzr powershell.exe", "completion": "This bash code Configure GMSA for Windows Pods and containers | nltest.exe /parentdomain results in the following error:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure GMSA for Windows Pods and containers | nltest.exe /parentdomain results in the following error:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23265"} {"prompt": "Explain this text code:\n\nGetting parent domain failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE", "completion": "This text code Configure GMSA for Windows Pods and containers | If your Pod did get the credspec correctly, then next check communication with the domain. First, from inside of your Pod, quickly do an nslookup to find the root of your domain.", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | If your Pod did get the credspec correctly, then next check communication with the domain. First, from inside of your Pod, quickly do an nslookup to find the root of your domain.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23266"} {"prompt": "Explain this text code:\n\nI_NetLogonControl failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE", "completion": "This text code Configure GMSA for Windows Pods and containers | This tells us that for some reason, the Pod was unable to logon to the domain using the account specified in the credspec. 
You can try to repair the secure channel by running the following:", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | This tells us that for some reason, the Pod was unable to logon to the domain using the account specified in the credspec. You can try to repair the secure channel by running the following:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23267"} {"prompt": "Complete this text code:\n\nFlags: 30 HAS_IP HAS_TIMESERV\nTrusted DC Name \\\\dc10.domain.example", "completion": "Trusted DC Connection Status Status = 0 0x0 NERR_Success\nThe command completed successfully", "language": "text", "type": "code-example", "context": "Configure GMSA for Windows Pods and containers | If the above corrects the error, you can automate the step by adding the following lifecycle hook to your Pod spec. If it did not correct the error, you will need to examine your credspec again and co", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23268"} {"prompt": "Explain this text code:\n\nFlags: 30 HAS_IP HAS_TIMESERV\nTrusted DC Name \\\\dc10.domain.example\nTrusted DC Connection Status Status = 0 0x0 NERR_Success\nThe command completed successfully", "completion": "This text code Configure GMSA for Windows Pods and containers | If the above corrects the error, you can automate the step by adding the following lifecycle hook to your Pod spec. If it did not correct the error, you will need to examine your credspec again and co", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | If the above corrects the error, you can automate the step by adding the following lifecycle hook to your Pod spec. 
If it did not correct the error, you will need to examine your credspec again and co", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23269"} {"prompt": "Complete this text code:\n\nimage: registry.domain.example/iis-auth:1809v1\n lifecycle:\n postStart:", "completion": " exec:\n command: [\"powershell.exe\",\"-command\",\"do { Restart-Service -Name netlogon } while ( $($Result = (nltest.exe /query); if ($Result -like '*0x0 NERR_Success*') {return $true} else {return $false}) -eq $false)\"]\n imagePullPolicy: IfNotPresent", "language": "text", "type": "code-example", "context": "Configure GMSA for Windows Pods and containers | If you add the lifecycle section show above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23270"} {"prompt": "Explain this text code:\n\nimage: registry.domain.example/iis-auth:1809v1\n lifecycle:\n postStart:\n exec:\n command: [\"powershell.exe\",\"-command\",\"do { Restart-Service -Name netlogon } while ( $($Result = (nltest.exe /query); if ($Result -like '*0x0 NERR_Success*') {return $true} else {return $false}) -eq $false)\"]\n imagePullPolicy: IfNotPresent", "completion": "This text code Configure GMSA for Windows Pods and containers | If you add the lifecycle section show above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | If you add the lifecycle section show above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23271"} {"prompt": "Explain this text code:\n\nNew-CredentialSpec -Name WebApp1 -AccountName WebApp1 -Domain $(Get-ADDomain -Current LocalComputer)", "completion": "This text code Configure GMSA for Windows Pods and containers | Use Get-CredentialSpec to show the path of the JSON file.", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | Use Get-CredentialSpec to show the path of the JSON file.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23272"} {"prompt": "Complete this yaml code:\n\napiVersion: windows.k8s.io/v1\nkind: GMSACredentialSpec\nmetadata:\n name: gmsa-WebApp1 # This is an arbitrary name but it will be used as a reference\ncredspec:\n ActiveDirectoryConfig:\n GroupManagedServiceAccounts:\n - Name: WebApp1 # Username of the GMSA account\n Scope: CONTOSO # NETBIOS Domain Name\n - Name: WebApp1 # Username of the GMSA account", "completion": " Scope: contoso.com # DNS Domain Name\n CmsPlugins:\n - ActiveDirectory\n DomainJoinConfig:\n DnsName: contoso.com # DNS Domain Name\n DnsTreeName: contoso.com # DNS Domain Name Root\n Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID of the Domain\n MachineAccountName: WebApp1 # Username of the GMSA account\n NetBiosName: CONTOSO # NETBIOS Domain Name\n Sid: S-1-5-21-2126449477-2524075714-3094792973 # SID of the Domain", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23273"} {"prompt": "Explain this yaml code:\n\napiVersion: 
windows.k8s.io/v1\nkind: GMSACredentialSpec\nmetadata:\n name: gmsa-WebApp1 # This is an arbitrary name but it will be used as a reference\ncredspec:\n ActiveDirectoryConfig:\n GroupManagedServiceAccounts:\n - Name: WebApp1 # Username of the GMSA account\n Scope: CONTOSO # NETBIOS Domain Name\n - Name: WebApp1 # Username of the GMSA account\n Scope: contoso.com # DNS Domain Name\n CmsPlugins:\n - ActiveDirectory\n DomainJoinConfig:\n DnsName: contoso.com # DNS Domain Name\n DnsTreeName: contoso.com # DNS Domain Name Root\n Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID of the Domain\n MachineAccountName: WebApp1 # Username of the GMSA account\n NetBiosName: CONTOSO # NETBIOS Domain Name\n Sid: S-1-5-21-2126449477-2524075714-3094792973 # SID of the Domain", "completion": "This yaml code Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23274"} {"prompt": "Complete this yaml code:\n\n# Create the Role to read the credspec\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: webapp1-role", "completion": "rules:\n- apiGroups: [\"windows.k8s.io\"]\n resources: [\"gmsacredentialspecs\"]\n verbs: [\"use\"]\n resourceNames: [\"gmsa-WebApp1\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role create above. 
This authorizes the service account to use the desired GMSA credential spec resource. The foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23275"} {"prompt": "Explain this yaml code:\n\n# Create the Role to read the credspec\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: webapp1-role\nrules:\n- apiGroups: [\"windows.k8s.io\"]\n resources: [\"gmsacredentialspecs\"]\n verbs: [\"use\"]\n resourceNames: [\"gmsa-WebApp1\"]", "completion": "This yaml code Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role create above. This authorizes the service account to use the desired GMSA credential spec resource. The foll", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role create above. This authorizes the service account to use the desired GMSA credential spec resource. The foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23276"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: allow-default-svc-account-read-on-gmsa-WebApp1\n namespace: default\nsubjects:", "completion": "- kind: ServiceAccount\n name: default\n namespace: default\nroleRef:\n kind: ClusterRole\n name: webapp1-role\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "service-definition", "context": "Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. 
This configures all containers in", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23277"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: allow-default-svc-account-read-on-gmsa-WebApp1\n namespace: default\nsubjects:\n- kind: ServiceAccount\n name: default\n namespace: default\nroleRef:\n kind: ClusterRole\n name: webapp1-role\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. This configures all containers in", "language": "yaml", "type": "service-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. This configures all containers in", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23278"} {"prompt": "Explain this text code:\n\nsecurityContext.windowsOptions.gmsaCredentialSpecName", "completion": "This text code Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. For example:", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. 
For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23279"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:", "completion": " metadata:\n labels:\n run: with-creds\n spec:\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-webapp1\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23280"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:\n metadata:\n labels:\n run: with-creds\n spec:\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-webapp1\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. 
For example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23281"} {"prompt": "Explain this text code:\n\nsecurityContext.windowsOptions.gmsaCredentialSpecName", "completion": "This text code Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23282"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:", "completion": " metadata:\n labels:\n run: with-creds\n spec:\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-Webapp1\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23283"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:\n metadata:\n labels:\n run: with-creds\n spec:\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-Webapp1\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23284"} {"prompt": "Explain this cmd code:\n\nreg add \"HKLM\\SYSTEM\\CurrentControlSet\\Services\\hns\\State\" /v EnableCompartmentNamespace /t REG_DWORD /d 1", "completion": "This cmd code Configure GMSA for Windows Pods and containers | Running Pods will then need to be recreated to pick up the behavior changes. More information on how this registry key is used can be found here", "language": "cmd", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | Running Pods will then need to be recreated to pick up the behavior changes. 
More information on how this registry key is used can be found here", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23285"} {"prompt": "Explain this PowerShell code:\n\nkubectl exec -it iis-auth-7776966999-n5nzr powershell.exe", "completion": "This PowerShell code Configure GMSA for Windows Pods and containers | nltest.exe /parentdomain results in the following error:", "language": "PowerShell", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | nltest.exe /parentdomain results in the following error:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23286"} {"prompt": "Explain this output code:\n\nGetting parent domain failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE", "completion": "This output code Configure GMSA for Windows Pods and containers | If your Pod did get the credspec correctly, then next check communication with the domain. First, from inside of your Pod, quickly do an nslookup to find the root of your domain.", "language": "output", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | If your Pod did get the credspec correctly, then next check communication with the domain. First, from inside of your Pod, quickly do an nslookup to find the root of your domain.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23287"} {"prompt": "Explain this output code:\n\nI_NetLogonControl failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE", "completion": "This output code Configure GMSA for Windows Pods and containers | This tells us that for some reason, the Pod was unable to logon to the domain using the account specified in the credspec. 
You can try to repair the secure channel by running the following:", "language": "output", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | This tells us that for some reason, the Pod was unable to logon to the domain using the account specified in the credspec. You can try to repair the secure channel by running the following:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23288"} {"prompt": "Complete this output code:\n\nFlags: 30 HAS_IP HAS_TIMESERV\nTrusted DC Name \\\\dc10.domain.example", "completion": "Trusted DC Connection Status Status = 0 0x0 NERR_Success\nThe command completed successfully", "language": "output", "type": "code-example", "context": "Configure GMSA for Windows Pods and containers | If the above corrects the error, you can automate the step by adding the following lifecycle hook to your Pod spec. If it did not correct the error, you will need to examine your credspec again and co", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23289"} {"prompt": "Explain this output code:\n\nFlags: 30 HAS_IP HAS_TIMESERV\nTrusted DC Name \\\\dc10.domain.example\nTrusted DC Connection Status Status = 0 0x0 NERR_Success\nThe command completed successfully", "completion": "This output code Configure GMSA for Windows Pods and containers | If the above corrects the error, you can automate the step by adding the following lifecycle hook to your Pod spec. If it did not correct the error, you will need to examine your credspec again and co", "language": "output", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | If the above corrects the error, you can automate the step by adding the following lifecycle hook to your Pod spec. 
If it did not correct the error, you will need to examine your credspec again and co", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23290"} {"prompt": "Complete this yaml code:\n\nimage: registry.domain.example/iis-auth:1809v1\n lifecycle:\n postStart:", "completion": " exec:\n command: [\"powershell.exe\",\"-command\",\"do { Restart-Service -Name netlogon } while ( $($Result = (nltest.exe /query); if ($Result -like '*0x0 NERR_Success*') {return $true} else {return $false}) -eq $false)\"]\n imagePullPolicy: IfNotPresent", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure GMSA for Windows Pods and containers | If you add the lifecycle section shown above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23291"} {"prompt": "Explain this yaml code:\n\nimage: registry.domain.example/iis-auth:1809v1\n lifecycle:\n postStart:\n exec:\n command: [\"powershell.exe\",\"-command\",\"do { Restart-Service -Name netlogon } while ( $($Result = (nltest.exe /query); if ($Result -like '*0x0 NERR_Success*') {return $true} else {return $false}) -eq $false)\"]\n imagePullPolicy: IfNotPresent", "completion": "This yaml code Configure GMSA for Windows Pods and containers | If you add the lifecycle section shown above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure GMSA for Windows Pods and containers | If you add the lifecycle section shown above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.",
"source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23292"} {"prompt": "Complete this yaml code:\n\napiVersion: windows.k8s.io/v1\nkind: GMSACredentialSpec\nmetadata:\n name: gmsa-WebApp1 # This is an arbitrary name but it will be used as a reference\ncredspec:\n ActiveDirectoryConfig:\n GroupManagedServiceAccounts:\n - Name: WebApp1 # Username of the GMSA account\n Scope: CONTOSO # NETBIOS Domain Name\n - Name: WebApp1 # Username of the GMSA account", "completion": " Scope: contoso.com # DNS Domain Name\n CmsPlugins:\n - ActiveDirectory\n DomainJoinConfig:\n DnsName: contoso.com # DNS Domain Name\n DnsTreeName: contoso.com # DNS Domain Name Root\n Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID of the Domain\n MachineAccountName: WebApp1 # Username of the GMSA account\n NetBiosName: CONTOSO # NETBIOS Domain Name\n Sid: S-1-5-21-2126449477-2524075714-3094792973 # SID of the Domain", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23293"} {"prompt": "Explain this yaml code:\n\napiVersion: windows.k8s.io/v1\nkind: GMSACredentialSpec\nmetadata:\n name: gmsa-WebApp1 # This is an arbitrary name but it will be used as a reference\ncredspec:\n ActiveDirectoryConfig:\n GroupManagedServiceAccounts:\n - Name: WebApp1 # Username of the GMSA account\n Scope: CONTOSO # NETBIOS Domain Name\n - Name: WebApp1 # Username of the GMSA account\n Scope: contoso.com # DNS Domain Name\n CmsPlugins:\n - ActiveDirectory\n DomainJoinConfig:\n DnsName: contoso.com # DNS Domain Name\n DnsTreeName: contoso.com # DNS Domain Name Root\n Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID of the 
Domain\n MachineAccountName: WebApp1 # Username of the GMSA account\n NetBiosName: CONTOSO # NETBIOS Domain Name\n Sid: S-1-5-21-2126449477-2524075714-3094792973 # SID of the Domain", "completion": "This yaml code Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure GMSA for Windows Pods and containers | The above credential spec resource may be saved as gmsa-Webapp1-credspec.yaml and applied to the cluster using: kubectl apply -f gmsa-Webapp1-credspec.yml", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23294"} {"prompt": "Complete this yaml code:\n\n# Create the Role to read the credspec\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: webapp1-role", "completion": "rules:\n- apiGroups: [\"windows.k8s.io\"]\n resources: [\"gmsacredentialspecs\"]\n verbs: [\"use\"]\n resourceNames: [\"gmsa-WebApp1\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role created above. This authorizes the service account to use the desired GMSA credential spec resource. 
The foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23295"} {"prompt": "Explain this yaml code:\n\n# Create the Role to read the credspec\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: webapp1-role\nrules:\n- apiGroups: [\"windows.k8s.io\"]\n resources: [\"gmsacredentialspecs\"]\n verbs: [\"use\"]\n resourceNames: [\"gmsa-WebApp1\"]", "completion": "This yaml code Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role created above. This authorizes the service account to use the desired GMSA credential spec resource. The foll", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure GMSA for Windows Pods and containers | A service account (that Pods will be configured with) needs to be bound to the cluster role created above. This authorizes the service account to use the desired GMSA credential spec resource. The foll", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23296"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: allow-default-svc-account-read-on-gmsa-WebApp1\n namespace: default\nsubjects:", "completion": "- kind: ServiceAccount\n name: default\n namespace: default\nroleRef:\n kind: ClusterRole\n name: webapp1-role\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "service-definition", "context": "Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. 
This configures all containers in", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23297"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: allow-default-svc-account-read-on-gmsa-WebApp1\n namespace: default\nsubjects:\n- kind: ServiceAccount\n name: default\n namespace: default\nroleRef:\n kind: ClusterRole\n name: webapp1-role\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. This configures all containers in", "language": "yaml", "type": "service-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | The Pod spec field securityContext.windowsOptions.gmsaCredentialSpecName is used to specify references to desired GMSA credential spec custom resources in Pod specs. 
This configures all containers in", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23298"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:", "completion": " metadata:\n labels:\n run: with-creds\n spec:\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-webapp1\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23299"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:\n metadata:\n labels:\n run: with-creds\n spec:\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-webapp1\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. 
For example:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | Individual containers in a Pod spec can also specify the desired GMSA credspec using a per-container securityContext.windowsOptions.gmsaCredentialSpecName field. For example:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23300"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:", "completion": " metadata:\n labels:\n run: with-creds\n spec:\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-Webapp1\n nodeSelector:\n kubernetes.io/os: windows", "language": "yaml", "type": "deployment-definition", "context": "Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23301"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n run: with-creds\n name: with-creds\n namespace: default\nspec:\n replicas: 1\n selector:\n matchLabels:\n run: with-creds\n template:\n metadata:\n labels:\n run: with-creds\n spec:\n containers:\n - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\n imagePullPolicy: Always\n name: iis\n securityContext:\n windowsOptions:\n gmsaCredentialSpecName: gmsa-Webapp1\n nodeSelector:\n kubernetes.io/os: windows", "completion": "This yaml code Configure GMSA for Windows Pods and containers | As Pod specs 
with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure GMSA for Windows Pods and containers | As Pod specs with GMSA fields populated (as described above) are applied in a cluster, the following sequence of events take place:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23302"} {"prompt": "Explain this text code:\n\nreg add \"HKLM\\SYSTEM\\CurrentControlSet\\Services\\hns\\State\" /v EnableCompartmentNamespace /t REG_DWORD /d 1", "completion": "This text code Configure GMSA for Windows Pods and containers | Running Pods will then need to be recreated to pick up the behavior changes. More information on how this registry key is used can be found here", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | Running Pods will then need to be recreated to pick up the behavior changes. 
More information on how this registry key is used can be found here", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23303"} {"prompt": "Explain this bash code:\n\nkubectl exec -it iis-auth-7776966999-n5nzr powershell.exe", "completion": "This bash code Configure GMSA for Windows Pods and containers | nltest.exe /parentdomain results in the following error:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure GMSA for Windows Pods and containers | nltest.exe /parentdomain results in the following error:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23304"} {"prompt": "Complete this text code:\n\nimage: registry.domain.example/iis-auth:1809v1\n lifecycle:\n postStart:", "completion": " exec:\n command: [\"powershell.exe\",\"-command\",\"do { Restart-Service -Name netlogon } while ( $($Result = (nltest.exe /query); if ($Result -like '*0x0 NERR_Success*') {return $true} else {return $false}) -eq $false)\"]\n imagePullPolicy: IfNotPresent", "language": "text", "type": "code-example", "context": "Configure GMSA for Windows Pods and containers | If you add the lifecycle section show above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_completion_23305"} {"prompt": "Explain this text code:\n\nimage: registry.domain.example/iis-auth:1809v1\n lifecycle:\n postStart:\n exec:\n command: [\"powershell.exe\",\"-command\",\"do { Restart-Service -Name netlogon } while ( $($Result = (nltest.exe /query); if ($Result -like '*0x0 NERR_Success*') {return $true} else {return $false}) -eq $false)\"]\n imagePullPolicy: IfNotPresent", "completion": "This text code Configure GMSA for Windows Pods and containers | 
If you add the lifecycle section shown above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "language": "text", "type": "code-example_explanation", "context": "Configure GMSA for Windows Pods and containers | If you add the lifecycle section shown above to your Pod spec, the Pod will execute the commands listed to restart the netlogon service until the nltest.exe /query command exits without error.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/", "id": "k8s_00732_code_explain_23306"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com", "completion": " selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23307"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "completion": "This yaml code Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. Your workload", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23308"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1", "completion": " # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23309"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1\n # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "completion": "This yaml code Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. 
There's one", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23310"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats", "completion": "spec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "language": "yaml", "type": "pod-definition", "context": "Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23311"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats\nspec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "completion": "This yaml code Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "language": "yaml", "type": "pod-definition_explanation", "context": "Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23312"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:", "completion": " devices:\n requests:\n - name: 
req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23313"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "completion": "This yaml code Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23314"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:", "completion": " selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23315"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "completion": "This yaml code Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23316"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:", "completion": " memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23317"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:\n memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "completion": "This yaml code Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": 
"Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23318"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:", "completion": " - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23319"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:\n - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "completion": "This yaml code Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": 
"k8s_00734_code_explain_23320"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:", "completion": " requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23321"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "completion": "This yaml code Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23322"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:", "completion": " devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. 
If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23323"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:\n devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "completion": "This yaml code Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23324"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.", "completion": " # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23325"} {"prompt": "Explain this yaml code:\n\napiVersion: 
resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.\n # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "completion": "This yaml code Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23326"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1", "completion": " devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23327"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n 
nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1\n devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "completion": "This yaml code Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23328"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com", "completion": " selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23329"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "completion": "This yaml code Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. Your workload", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23330"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1", "completion": " # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23331"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1\n # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "completion": "This yaml code Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. 
There's one", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23332"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats", "completion": "spec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "language": "yaml", "type": "pod-definition", "context": "Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23333"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats\nspec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "completion": "This yaml code Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "language": "yaml", "type": "pod-definition_explanation", "context": "Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23334"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:", "completion": " devices:\n requests:\n - name: 
req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23335"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "completion": "This yaml code Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23336"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:", "completion": " selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23337"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "completion": "This yaml code Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23338"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:", "completion": " memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23339"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:\n memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "completion": "This yaml code Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": 
"Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23340"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:", "completion": " - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23341"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:\n - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "completion": "This yaml code Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": 
"k8s_00734_code_explain_23342"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:", "completion": " requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23343"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "completion": "This yaml code Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23344"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:", "completion": " devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. 
If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23345"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:\n devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "completion": "This yaml code Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23346"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.", "completion": " # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23347"} {"prompt": "Explain this yaml code:\n\napiVersion: 
resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.\n # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "completion": "This yaml code Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23348"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1", "completion": " devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23349"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n 
nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1\n devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "completion": "This yaml code Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23350"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com", "completion": " selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23351"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: prioritized-list-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n firstAvailable:\n - name: large-black\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"black\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"large\" \n - name: small-white\n deviceClassName: resource.example.com\n selectors:\n - cel:\n expression: |-\n device.attributes[\"resource-driver.example.com\"].color == \"white\" &&\n device.attributes[\"resource-driver.example.com\"].size == \"small\" \n count: 2", "completion": "This yaml code Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. Your workload", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The decision is made on a per-Pod basis, so if the Pod is a member of a ReplicaSet or similar grouping, you cannot rely on all the members of the group having the same subrequest chosen. 
Your workload", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23352"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1", "completion": " # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23353"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: cat-slice\nspec:\n driver: \"resource-driver.example.com\"\n pool:\n generation: 1\n name: \"black-cat-pool\"\n resourceSliceCount: 1\n # The allNodes field defines whether any node in the cluster can access the device.\n allNodes: true\n devices:\n - name: \"large-black-cat\"\n attributes:\n color:\n string: \"black\"\n size:\n string: \"large\"\n cat:\n boolean: true", "completion": "This yaml code Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. 
There's one", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | This ResourceSlice is managed by the resource-driver.example.com driver in the black-cat-pool pool. The allNodes: true field indicates that any node in the cluster can access the devices. There's one", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23354"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats", "completion": "spec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "language": "yaml", "type": "pod-definition", "context": "Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23355"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod-with-cats\nspec:\n nodeSelector:\n kubernetes.io/hostname: name-of-the-intended-node\n ...", "completion": "This yaml code Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "language": "yaml", "type": "pod-definition_explanation", "context": "Dynamic Resource Allocation | You may also be able to mutate the incoming Pod, at admission time, to unset the .spec.nodeName field and to use a node selector instead.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23356"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:", "completion": " devices:\n requests:\n - name: 
req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23357"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: large-black-cat-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n deviceClassName: resource.example.com\n allocationMode: All\n adminAccess: true", "completion": "This yaml code Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | If this feature is disabled, the adminAccess field will be removed automatically when creating such a ResourceClaim.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23358"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:", "completion": " selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23359"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel:\n expression: device.driver == 'gpu.example.com' && device.attributes['gpu.example.com'].type\n == 'gpu'\n extendedResourceName: example.com/gpu", "completion": "This yaml code Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. Using the extended resource name prefix deviceclass.resource.kubernetes.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | In addition, users can use a special extended resource to allocate devices without having to explicitly create a ResourceClaim. 
Using the extended resource name prefix deviceclass.resource.kubernetes.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23360"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:", "completion": " memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23361"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n sharedCounters:\n - name: gpu-1-counters\n counters:\n memory:\n value: 8Gi\n devices:\n - name: device-1\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi\n - name: device-2\n consumesCounters:\n - counterSet: gpu-1-counters\n counters:\n memory:\n value: 6Gi", "completion": "This yaml code Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": 
"Dynamic Resource Allocation | Partitionable devices is an alpha feature and only enabled when the DRAPartitionableDevices feature gate is enabled in the kube-apiserver and kube-scheduler.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23362"} {"prompt": "Complete this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:", "completion": " - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23363"} {"prompt": "Explain this yaml code:\n\nkind: ResourceSlice\napiVersion: resource.k8s.io/v1\nmetadata:\n name: resourceslice\nspec:\n nodeName: worker-1\n pool:\n name: pool\n generation: 1\n resourceSliceCount: 1\n driver: dra.example.com\n devices:\n - name: eth1\n allowMultipleAllocations: true\n attributes:\n name:\n string: \"eth1\"\n capacity:\n bandwidth:\n requestPolicy:\n default: \"1M\"\n validRange:\n min: \"1M\"\n step: \"8\"\n value: \"10G\"", "completion": "This yaml code Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The consumable capacity can be requested as shown in the below example.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": 
"k8s_00734_code_explain_23364"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:", "completion": " requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23365"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaimTemplate\nmetadata:\n name: bandwidth-claim-template\nspec:\n spec:\n devices:\n requests:\n - name: req-0\n exactly:\n - deviceClassName: resource.example.com\n capacity:\n requests:\n bandwidth: 1G", "completion": "This yaml code Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | The allocation result will include the consumed capacity and the identifier of the share.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23366"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:", "completion": " devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. 
If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23367"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\n...\nstatus:\n allocation:\n devices:\n results:\n - consumedCapacity:\n bandwidth: 1G\n device: eth1\n shareID: \"a671734a-e8e5-11e4-8fde-42010af09327\"", "completion": "This yaml code Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | In this example, a multiply-allocatable device was chosen. However, any resource.example.com device with at least the requested 1G bandwidth could have met the requirement. If a non-multiply-allocatab", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23368"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.", "completion": " # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23369"} {"prompt": "Explain this yaml code:\n\napiVersion: 
resource.k8s.io/v1alpha3\nkind: DeviceTaintRule\nmetadata:\n name: example\nspec:\n # The entire hardware installation for this\n # particular driver is broken.\n # Evict all pods and don't schedule new ones.\n deviceSelector:\n driver: dra.example.com\n taint:\n key: dra.example.com/unhealthy\n value: Broken\n effect: NoExecute", "completion": "This yaml code Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | Device Binding Conditions allow the Kubernetes scheduler to delay Pod binding until external resources, such as fabric-attached GPUs or reprogrammable FPGAs, are confirmed to be ready.", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23370"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1", "completion": " devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "language": "yaml", "type": "kubernetes-manifest", "context": "Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_completion_23371"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceSlice\nmetadata:\n name: gpu-slice\nspec:\n driver: dra.example.com\n 
nodeSelector:\n accelerator-type: high-performance\n pool:\n name: gpu-pool\n generation: 1\n resourceSliceCount: 1\n devices:\n - name: gpu-1\n attributes:\n vendor:\n string: \"example\"\n model:\n string: \"example-gpu\"\n bindsToNode: true\n bindingConditions:\n - dra.example.com/is-prepared\n bindingFailureConditions:\n - dra.example.com/preparing-failed", "completion": "This yaml code Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Dynamic Resource Allocation | This example ResourceSlice has the following properties:", "source": "https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/", "id": "k8s_00734_code_explain_23372"} {"prompt": "Complete this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authz-service\n cluster:\n # CA for verifying the remote service.\n certificate-authority: /path/to/ca.pem\n # URL of remote service to query. Must use 'https'. May not include parameters.\n server: https://authz.example.com/authorize\n", "completion": "# users refers to the API Server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. Provide one for the API Server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authz-service\n user: name-of-api-server\n name: webhook", "language": "yaml", "type": "kubernetes-manifest", "context": "Webhook Mode | When faced with an authorization decision, the API Server POSTs a JSON- serialized authorization.k8s.io/v1beta1 SubjectAccessReview object describing the action. 
This object contains fields describing", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23373"} {"prompt": "Explain this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authz-service\n cluster:\n # CA for verifying the remote service.\n certificate-authority: /path/to/ca.pem\n # URL of remote service to query. Must use 'https'. May not include parameters.\n server: https://authz.example.com/authorize\n\n# users refers to the API Server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. Provide one for the API Server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authz-service\n user: name-of-api-server\n name: webhook", "completion": "This yaml code Webhook Mode | When faced with an authorization decision, the API Server POSTs a JSON- serialized authorization.k8s.io/v1beta1 SubjectAccessReview object describing the action. This object contains fields describing", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Webhook Mode | When faced with an authorization decision, the API Server POSTs a JSON- serialized authorization.k8s.io/v1beta1 SubjectAccessReview object describing the action. 
This object contains fields describing", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23374"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"kittensandponies\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",", "completion": " \"resource\": \"pods\"\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored and may be omitted. A permissive respon", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23375"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"kittensandponies\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "completion": "This json code Webhook Mode | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored and may be omitted. A permissive respon", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored and may be omitted. 
A permissive respon", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23376"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",", "completion": " \"status\": {\n \"allowed\": true\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | For disallowing access there are two methods.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23377"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {\n \"allowed\": true\n }\n}", "completion": "This json code Webhook Mode | For disallowing access there are two methods.", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | For disallowing access there are two methods.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23378"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {", "completion": " \"allowed\": false,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | The second method denies immediately, short-circuiting evaluation by other configured authorizers. 
This should only be used by webhooks that have detailed knowledge of the full authorizer configuratio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23379"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {\n \"allowed\": false,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "completion": "This json code Webhook Mode | The second method denies immediately, short-circuiting evaluation by other configured authorizers. This should only be used by webhooks that have detailed knowledge of the full authorizer configuratio", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | The second method denies immediately, short-circuiting evaluation by other configured authorizers. This should only be used by webhooks that have detailed knowledge of the full authorizer configuratio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23380"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {", "completion": " \"allowed\": false,\n \"denied\": true,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | Access to non-resource paths are sent as:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23381"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {\n \"allowed\": false,\n \"denied\": true,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "completion": "This json code Webhook Mode | Access to non-resource paths are sent as:", "language": 
"json", "type": "api-response_explanation", "context": "Webhook Mode | Access to non-resource paths are sent as:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23382"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"nonResourceAttributes\": {\n \"path\": \"/debug\",\n \"verb\": \"get\"", "completion": " },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | When calling out to an authorization webhook, Kubernetes passes label and field selectors in the request to the authorization webhook. The authorization webhook can make authorization decisions inform", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23383"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"nonResourceAttributes\": {\n \"path\": \"/debug\",\n \"verb\": \"get\"\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "completion": "This json code Webhook Mode | When calling out to an authorization webhook, Kubernetes passes label and field selectors in the request to the authorization webhook. The authorization webhook can make authorization decisions inform", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | When calling out to an authorization webhook, Kubernetes passes label and field selectors in the request to the authorization webhook. 
The authorization webhook can make authorization decisions inform", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23384"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"verb\": \"list\",\n \"group\": \"\",\n \"resource\": \"pods\",\n \"fieldSelector\": {\n \"requirements\": [\n {\"key\":\"spec.nodeName\", \"operator\":\"In\", \"values\":[\"mynode\"]}\n ]\n },", "completion": " \"labelSelector\": {\n \"requirements\": [\n {\"key\":\"example.com/mykey\", \"operator\":\"In\", \"values\":[\"myvalue\"]}\n ]\n }\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | Non-resource paths include: /api, /apis, /metrics, /logs, /debug, /healthz, /livez, /openapi/v2, /readyz, and /version. Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23385"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"verb\": \"list\",\n \"group\": \"\",\n \"resource\": \"pods\",\n \"fieldSelector\": {\n \"requirements\": [\n {\"key\":\"spec.nodeName\", \"operator\":\"In\", \"values\":[\"mynode\"]}\n ]\n },\n \"labelSelector\": {\n \"requirements\": [\n {\"key\":\"example.com/mykey\", \"operator\":\"In\", \"values\":[\"myvalue\"]}\n ]\n }\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "completion": "This json code Webhook Mode | Non-resource paths include: /api, /apis, /metrics, /logs, /debug, /healthz, /livez, /openapi/v2, /readyz, and /version. 
Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | Non-resource paths include: /api, /apis, /metrics, /logs, /debug, /healthz, /livez, /openapi/v2, /readyz, and /version. Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23386"}
Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | Non-resource paths include: /api, /apis, /metrics, /logs, /debug, /healthz, /livez, /openapi/v2, /readyz, and /version. Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23400"} {"prompt": "Complete this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authz-service\n cluster:\n # CA for verifying the remote service.\n certificate-authority: /path/to/ca.pem\n # URL of remote service to query. Must use 'https'. May not include parameters.\n server: https://authz.example.com/authorize\n", "completion": "# users refers to the API Server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. Provide one for the API Server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authz-service\n user: name-of-api-server\n name: webhook", "language": "yaml", "type": "kubernetes-manifest", "context": "Webhook Mode | When faced with an authorization decision, the API Server POSTs a JSON- serialized authorization.k8s.io/v1beta1 SubjectAccessReview object describing the action. 
This object contains fields describing", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23401"} {"prompt": "Explain this yaml code:\n\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n - name: name-of-remote-authz-service\n cluster:\n # CA for verifying the remote service.\n certificate-authority: /path/to/ca.pem\n # URL of remote service to query. Must use 'https'. May not include parameters.\n server: https://authz.example.com/authorize\n\n# users refers to the API Server's webhook configuration.\nusers:\n - name: name-of-api-server\n user:\n client-certificate: /path/to/cert.pem # cert for the webhook plugin to use\n client-key: /path/to/key.pem # key matching the cert\n\n# kubeconfig files require a context. Provide one for the API Server.\ncurrent-context: webhook\ncontexts:\n- context:\n cluster: name-of-remote-authz-service\n user: name-of-api-server\n name: webhook", "completion": "This yaml code Webhook Mode | When faced with an authorization decision, the API Server POSTs a JSON- serialized authorization.k8s.io/v1beta1 SubjectAccessReview object describing the action. This object contains fields describing", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Webhook Mode | When faced with an authorization decision, the API Server POSTs a JSON- serialized authorization.k8s.io/v1beta1 SubjectAccessReview object describing the action. 
This object contains fields describing", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23402"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"kittensandponies\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",", "completion": " \"resource\": \"pods\"\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored and may be omitted. A permissive respon", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23403"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"kittensandponies\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "completion": "This json code Webhook Mode | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored and may be omitted. A permissive respon", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | The remote service is expected to fill the status field of the request and respond to either allow or disallow access. The response body's spec field is ignored and may be omitted. 
A permissive respon", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23404"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",", "completion": " \"status\": {\n \"allowed\": true\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | For disallowing access there are two methods.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23405"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {\n \"allowed\": true\n }\n}", "completion": "This json code Webhook Mode | For disallowing access there are two methods.", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | For disallowing access there are two methods.", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23406"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {", "completion": " \"allowed\": false,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | The second method denies immediately, short-circuiting evaluation by other configured authorizers. 
This should only be used by webhooks that have detailed knowledge of the full authorizer configuratio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23407"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {\n \"allowed\": false,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "completion": "This json code Webhook Mode | The second method denies immediately, short-circuiting evaluation by other configured authorizers. This should only be used by webhooks that have detailed knowledge of the full authorizer configuratio", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | The second method denies immediately, short-circuiting evaluation by other configured authorizers. This should only be used by webhooks that have detailed knowledge of the full authorizer configuratio", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23408"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {", "completion": " \"allowed\": false,\n \"denied\": true,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | Access to non-resource paths are sent as:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23409"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"status\": {\n \"allowed\": false,\n \"denied\": true,\n \"reason\": \"user does not have read access to the namespace\"\n }\n}", "completion": "This json code Webhook Mode | Access to non-resource paths are sent as:", "language": 
"json", "type": "api-response_explanation", "context": "Webhook Mode | Access to non-resource paths are sent as:", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23410"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"nonResourceAttributes\": {\n \"path\": \"/debug\",\n \"verb\": \"get\"", "completion": " },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | When calling out to an authorization webhook, Kubernetes passes label and field selectors in the request to the authorization webhook. The authorization webhook can make authorization decisions inform", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23411"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"nonResourceAttributes\": {\n \"path\": \"/debug\",\n \"verb\": \"get\"\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "completion": "This json code Webhook Mode | When calling out to an authorization webhook, Kubernetes passes label and field selectors in the request to the authorization webhook. The authorization webhook can make authorization decisions inform", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | When calling out to an authorization webhook, Kubernetes passes label and field selectors in the request to the authorization webhook. 
The authorization webhook can make authorization decisions inform", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23412"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"verb\": \"list\",\n \"group\": \"\",\n \"resource\": \"pods\",\n \"fieldSelector\": {\n \"requirements\": [\n {\"key\":\"spec.nodeName\", \"operator\":\"In\", \"values\":[\"mynode\"]}\n ]\n },", "completion": " \"labelSelector\": {\n \"requirements\": [\n {\"key\":\"example.com/mykey\", \"operator\":\"In\", \"values\":[\"myvalue\"]}\n ]\n }\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "language": "json", "type": "api-response", "context": "Webhook Mode | Non-resource paths include: /api, /apis, /metrics, /logs, /debug, /healthz, /livez, /openapi/v2, /readyz, and /version. Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_completion_23413"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"verb\": \"list\",\n \"group\": \"\",\n \"resource\": \"pods\",\n \"fieldSelector\": {\n \"requirements\": [\n {\"key\":\"spec.nodeName\", \"operator\":\"In\", \"values\":[\"mynode\"]}\n ]\n },\n \"labelSelector\": {\n \"requirements\": [\n {\"key\":\"example.com/mykey\", \"operator\":\"In\", \"values\":[\"myvalue\"]}\n ]\n }\n },\n \"user\": \"jane\",\n \"group\": [\n \"group1\",\n \"group2\"\n ]\n }\n}", "completion": "This json code Webhook Mode | Non-resource paths include: /api, /apis, /metrics, /logs, /debug, /healthz, /livez, /openapi/v2, /readyz, and /version. 
Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "language": "json", "type": "api-response_explanation", "context": "Webhook Mode | Non-resource paths include: /api, /apis, /metrics, /logs, /debug, /healthz, /livez, /openapi/v2, /readyz, and /version. Clients require access to /api, /api/*, /apis, /apis/*, and /version to discover", "source": "https://kubernetes.io/docs/reference/access-authn-authz/webhook/", "id": "k8s_00735_code_explain_23414"} {"prompt": "Explain this text code:\n\n# Delete the minikube user\n kubectl config delete-user minikube", "completion": "This text code kubectl config delete-user | help for delete-user", "language": "text", "type": "code-example_explanation", "context": "kubectl config delete-user | help for delete-user", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-user/", "id": "k8s_00739_code_explain_23415"} {"prompt": "Explain this text code:\n\n# Delete the minikube user\n kubectl config delete-user minikube", "completion": "This text code kubectl config delete-user | help for delete-user", "language": "text", "type": "code-example_explanation", "context": "kubectl config delete-user | help for delete-user", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_delete-user/", "id": "k8s_00739_code_explain_23416"} {"prompt": "Explain this text code:\n\napiserver_requested_deprecated_apis{removed_release=\"1.22\"} * on(group,version,resource,subresource) group_right() apiserver_request_total", "completion": "This text code Kubernetes Deprecation Policy | As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. 
Unlike whole resources, the v2 APIs may choose a different representation", "language": "text", "type": "code-example_explanation", "context": "Kubernetes Deprecation Policy | As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. Unlike whole resources, the v2 APIs may choose a different representation", "source": "https://kubernetes.io/docs/reference/using-api/deprecation-policy/", "id": "k8s_00740_code_explain_23417"} {"prompt": "Explain this promql code:\n\napiserver_requested_deprecated_apis{removed_release=\"1.22\"} * on(group,version,resource,subresource) group_right() apiserver_request_total", "completion": "This promql code Kubernetes Deprecation Policy | As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. Unlike whole resources, the v2 APIs may choose a different representation", "language": "promql", "type": "code-example_explanation", "context": "Kubernetes Deprecation Policy | As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. Unlike whole resources, the v2 APIs may choose a different representation", "source": "https://kubernetes.io/docs/reference/using-api/deprecation-policy/", "id": "k8s_00740_code_explain_23418"} {"prompt": "Explain this text code:\n\napiserver_requested_deprecated_apis{removed_release=\"1.22\"} * on(group,version,resource,subresource) group_right() apiserver_request_total", "completion": "This text code Kubernetes Deprecation Policy | As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. 
Unlike whole resources, the v2 APIs may choose a different representation", "language": "text", "type": "code-example_explanation", "context": "Kubernetes Deprecation Policy | As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. Unlike whole resources, the v2 APIs may choose a different representation", "source": "https://kubernetes.io/docs/reference/using-api/deprecation-policy/", "id": "k8s_00740_code_explain_23419"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23420"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23421"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu.yaml --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | View detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure 
Memory and CPU Quotas for a Namespace | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23422"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23423"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr", "completion": " image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23424"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23425"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | Verify that the Pod is running and that its (only) container is healthy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Verify that the Pod is running and that its (only) container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23426"} {"prompt": "Explain this bash code:\n\nkubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | Once again, view detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Once again, view detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23427"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | The output shows the quota along with how much of the quota has been used. You can see that the memory and CPU requests and limits for your Pod do not exceed the quota.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The output shows the quota along with how much of the quota has been used. 
You can see that the memory and CPU requests and limits for your Pod do not exceed the quota.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23428"} {"prompt": "Complete this text code:\n\nstatus:\n hard:\n limits.cpu: \"2\"\n limits.memory: 2Gi\n requests.cpu: \"1\"", "completion": " requests.memory: 1Gi\n used:\n limits.cpu: 800m\n limits.memory: 800Mi\n requests.cpu: 400m\n requests.memory: 600Mi", "language": "text", "type": "code-example", "context": "Configure Memory and CPU Quotas for a Namespace | If you have the jq tool, you can also query (using JSONPath) for just the used values, and pretty-print that part of the output. For example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23429"} {"prompt": "Explain this text code:\n\nstatus:\n hard:\n limits.cpu: \"2\"\n limits.memory: 2Gi\n requests.cpu: \"1\"\n requests.memory: 1Gi\n used:\n limits.cpu: 800m\n limits.memory: 800Mi\n requests.cpu: 400m\n requests.memory: 600Mi", "completion": "This text code Configure Memory and CPU Quotas for a Namespace | If you have the jq tool, you can also query (using JSONPath) for just the used values, and pretty-print that part of the output. For example:", "language": "text", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | If you have the jq tool, you can also query (using JSONPath) for just the used values, and pretty-print that part of the output. 
For example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23430"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o jsonpath='{ .status.used }' | jq .", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | Here is a manifest for a second Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Here is a manifest for a second Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23431"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr", "completion": " image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23432"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr\n image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. 
Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23433"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod-2.yaml --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | The second Pod does not get created. The output shows that creating the second Pod would cause the memory request total to exceed the memory request quota.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The second Pod does not get created. The output shows that creating the second Pod would cause the memory request total to exceed the memory request quota.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23434"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/quota-mem-cpu-pod-2.yaml\":\npods \"quota-mem-cpu-demo-2\" is forbidden: exceeded quota: mem-cpu-demo,\nrequested: requests.memory=700Mi,used: requests.memory=600Mi, limited: requests.memory=1Gi", "completion": "This text code Configure Memory and CPU Quotas for a Namespace | As you have seen in this exercise, you can use a ResourceQuota to restrict the memory request total for all Pods running in a namespace. 
You can also restrict the totals for memory limit, cpu request,", "language": "text", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | As you have seen in this exercise, you can use a ResourceQuota to restrict the memory request total for all Pods running in a namespace. You can also restrict the totals for memory limit, cpu request,", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23435"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23436"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23437"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu.yaml --namespace=quota-mem-cpu-example", "completion": "This shell code Configure Memory and CPU Quotas for a Namespace | View detailed information about the ResourceQuota:", "language": "shell", "type": 
"code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23438"} {"prompt": "Explain this shell code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml", "completion": "This shell code Configure Memory and CPU Quotas for a Namespace | The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:", "language": "shell", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23439"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr", "completion": " image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23440"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for 
a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23441"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example", "completion": "This shell code Configure Memory and CPU Quotas for a Namespace | Verify that the Pod is running and that its (only) container is healthy:", "language": "shell", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Verify that the Pod is running and that its (only) container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23442"} {"prompt": "Explain this shell code:\n\nkubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example", "completion": "This shell code Configure Memory and CPU Quotas for a Namespace | Once again, view detailed information about the ResourceQuota:", "language": "shell", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Once again, view detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23443"} {"prompt": "Explain this shell code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml", "completion": "This shell code Configure Memory and CPU Quotas for a Namespace | The output shows the quota along with how much of the quota has been used. You can see that the memory and CPU requests and limits for your Pod do not exceed the quota.", "language": "shell", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The output shows the quota along with how much of the quota has been used. 
You can see that the memory and CPU requests and limits for your Pod do not exceed the quota.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23444"} {"prompt": "Complete this text code:\n\nstatus:\n hard:\n limits.cpu: \"2\"\n limits.memory: 2Gi\n requests.cpu: \"1\"", "completion": " requests.memory: 1Gi\n used:\n limits.cpu: 800m\n limits.memory: 800Mi\n requests.cpu: 400m\n requests.memory: 600Mi", "language": "text", "type": "code-example", "context": "Configure Memory and CPU Quotas for a Namespace | If you have the jq tool, you can also query (using JSONPath) for just the used values, and pretty-print that part of the output. For example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23445"} {"prompt": "Explain this text code:\n\nstatus:\n hard:\n limits.cpu: \"2\"\n limits.memory: 2Gi\n requests.cpu: \"1\"\n requests.memory: 1Gi\n used:\n limits.cpu: 800m\n limits.memory: 800Mi\n requests.cpu: 400m\n requests.memory: 600Mi", "completion": "This text code Configure Memory and CPU Quotas for a Namespace | If you have the jq tool, you can also query (using JSONPath) for just the used values, and pretty-print that part of the output. For example:", "language": "text", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | If you have the jq tool, you can also query (using JSONPath) for just the used values, and pretty-print that part of the output. 
For example:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23446"} {"prompt": "Explain this shell code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o jsonpath='{ .status.used }' | jq .", "completion": "This shell code Configure Memory and CPU Quotas for a Namespace | Here is a manifest for a second Pod:", "language": "shell", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Here is a manifest for a second Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23447"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr", "completion": " image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23448"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr\n image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. 
Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23449"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod-2.yaml --namespace=quota-mem-cpu-example", "completion": "This shell code Configure Memory and CPU Quotas for a Namespace | The second Pod does not get created. The output shows that creating the second Pod would cause the memory request total to exceed the memory request quota.", "language": "shell", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The second Pod does not get created. The output shows that creating the second Pod would cause the memory request total to exceed the memory request quota.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23450"} {"prompt": "Explain this text code:\n\nError from server (Forbidden): error when creating \"examples/admin/resource/quota-mem-cpu-pod-2.yaml\":\npods \"quota-mem-cpu-demo-2\" is forbidden: exceeded quota: mem-cpu-demo,\nrequested: requests.memory=700Mi,used: requests.memory=600Mi, limited: requests.memory=1Gi", "completion": "This text code Configure Memory and CPU Quotas for a Namespace | As you have seen in this exercise, you can use a ResourceQuota to restrict the memory request total for all Pods running in a namespace. 
You can also restrict the totals for memory limit, cpu request,", "language": "text", "type": "code-example_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | As you have seen in this exercise, you can use a ResourceQuota to restrict the memory request total for all Pods running in a namespace. You can also restrict the totals for memory limit, cpu request,", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23451"} {"prompt": "Complete this yaml code:\n\nadmin/resource/quota-mem-cpu.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo", "completion": "spec:\n hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23452"} {"prompt": "Explain this yaml code:\n\nadmin/resource/quota-mem-cpu.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23453"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo\nspec:", "completion": " hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "language": "yaml", "type": 
"kubernetes-manifest", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23454"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: mem-cpu-demo\nspec:\n hard:\n requests.cpu: \"1\"\n requests.memory: 1Gi\n limits.cpu: \"2\"\n limits.memory: 2Gi", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23455"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu.yaml --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | View detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23456"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23457"} {"prompt": "Complete this yaml code:\n\nadmin/resource/quota-mem-cpu-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr", "completion": " image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23458"} {"prompt": "Explain this yaml code:\n\nadmin/resource/quota-mem-cpu-pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23459"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr", "completion": " image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": 
"k8s_00741_code_completion_23460"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo\nspec:\n containers:\n - name: quota-mem-cpu-demo-ctr\n image: nginx\n resources:\n limits:\n memory: \"800Mi\"\n cpu: \"800m\"\n requests:\n memory: \"600Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23461"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | Verify that the Pod is running and that its (only) container is healthy:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Verify that the Pod is running and that its (only) container is healthy:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23462"} {"prompt": "Explain this bash code:\n\nkubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | Once again, view detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Once again, view detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23463"} {"prompt": "Explain this bash code:\n\nkubectl get 
resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | The output shows the quota along with how much of the quota has been used. You can see that the memory and CPU requests and limits for your Pod do not exceed the quota.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The output shows the quota along with how much of the quota has been used. You can see that the memory and CPU requests and limits for your Pod do not exceed the quota.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23464"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o jsonpath='{ .status.used }' | jq .", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | Here is a manifest for a second Pod:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | Here is a manifest for a second Pod:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23465"} {"prompt": "Complete this yaml code:\n\nadmin/resource/quota-mem-cpu-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr", "completion": " image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. 
Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23466"} {"prompt": "Explain this yaml code:\n\nadmin/resource/quota-mem-cpu-pod-2.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr\n image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23467"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr", "completion": " image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "language": "yaml", "type": "pod-definition", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. 
Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_completion_23468"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: quota-mem-cpu-demo-2\nspec:\n containers:\n - name: quota-mem-cpu-demo-2-ctr\n image: redis\n resources:\n limits:\n memory: \"1Gi\"\n cpu: \"800m\"\n requests:\n memory: \"700Mi\"\n cpu: \"400m\"", "completion": "This yaml code Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory request exceeds the memory request quota: 600 MiB + 700 Mi", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23469"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod-2.yaml --namespace=quota-mem-cpu-example", "completion": "This bash code Configure Memory and CPU Quotas for a Namespace | The second Pod does not get created. The output shows that creating the second Pod would cause the memory request total to exceed the memory request quota.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure Memory and CPU Quotas for a Namespace | The second Pod does not get created. 
The output shows that creating the second Pod would cause the memory request total to exceed the memory request quota.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/", "id": "k8s_00741_code_explain_23470"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23471"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | Use kubectl apply and kubectl describe secret... 
to create a token for the default service account with grep/cut:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23472"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f - </dev/null; do\n echo \"waiting for token...\" >&2", "completion": " sleep 1\ndone", "language": "text", "type": "code-example", "context": "Accessing Clusters | Capture and use the generated token:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23475"} {"prompt": "Explain this text code:\n\nwhile ! kubectl describe secret default-token | grep -E '^token' >/dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone", "completion": "This text code Accessing Clusters | Capture and use the generated token:", "language": "text", "type": "code-example_explanation", "context": "Accessing Clusters | Capture and use the generated token:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23476"} {"prompt": "Complete this text code:\n\nAPISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d \":\" | tr -d \" \")\nTOKEN=$(kubectl describe secret default-token | grep -E '^token' | cut -f2 -d':' | tr -d \" \")", "completion": "\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "text", "type": "code-example", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23477"} {"prompt": "Explain this text code:\n\nAPISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d \":\" | tr -d \" \")\nTOKEN=$(kubectl describe secret default-token | grep -E '^token' | cut -f2 -d':' | tr -d \" \")\n\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This text code 
Accessing Clusters | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23478"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | Using jsonpath:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23479"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | Using jsonpath:", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | Using jsonpath:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23480"} {"prompt": "Complete this text code:\n\nAPISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)", "completion": "\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "text", "type": "code-example", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23481"} {"prompt": "Explain this text code:\n\nAPISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')\nTOKEN=$(kubectl get 
secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This text code Accessing Clusters | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23482"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23483"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. 
(T", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23484"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23485"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23486"} {"prompt": "Complete this shell code:\n\nkubectl apply -f - </dev/null; do\n echo \"waiting for token...\" >&2", "completion": " sleep 1\ndone", "language": "shell", "type": "code-example", "context": "Accessing Clusters | Capture and use the generated token:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23489"} {"prompt": "Explain this shell code:\n\nwhile ! 
kubectl describe secret default-token | grep -E '^token' >/dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone", "completion": "This shell code Accessing Clusters | Capture and use the generated token:", "language": "shell", "type": "code-example_explanation", "context": "Accessing Clusters | Capture and use the generated token:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23490"} {"prompt": "Complete this shell code:\n\nAPISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d \":\" | tr -d \" \")\nTOKEN=$(kubectl describe secret default-token | grep -E '^token' | cut -f2 -d':' | tr -d \" \")", "completion": "\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "shell", "type": "code-example", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23491"} {"prompt": "Explain this shell code:\n\nAPISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d \":\" | tr -d \" \")\nTOKEN=$(kubectl describe secret default-token | grep -E '^token' | cut -f2 -d':' | tr -d \" \")\n\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This shell code Accessing Clusters | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23492"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | 
Using jsonpath:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23493"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | Using jsonpath:", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | Using jsonpath:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23494"} {"prompt": "Complete this shell code:\n\nAPISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)", "completion": "\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "shell", "type": "code-example", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23495"} {"prompt": "Explain this shell code:\n\nAPISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This shell code Accessing Clusters | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23496"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", 
"completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23497"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23498"} {"prompt": "Explain this text code:\n\ngo get k8s.io/client-go@kubernetes-", "completion": "This text code Accessing Clusters | The Go client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the apiserver. See this example.", "language": "text", "type": "code-example_explanation", "context": "Accessing Clusters | The Go client can use the same kubeconfig file as the kubectl CLI does to locate and authenticate to the apiserver. 
See this example.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23499"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23500"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | Use kubectl apply and kubectl describe secret... to create a token for the default service account with grep/cut:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23501"} {"prompt": "Complete this yaml code:\n\nkubectl apply -f - </dev/null; do\n echo \"waiting for token...\" >&2", "completion": " sleep 1\ndone", "language": "text", "type": "code-example", "context": "Accessing Clusters | Capture and use the generated token:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23504"} {"prompt": "Explain this text code:\n\nwhile ! 
kubectl describe secret default-token | grep -E '^token' >/dev/null; do\n echo \"waiting for token...\" >&2\n sleep 1\ndone", "completion": "This text code Accessing Clusters | Capture and use the generated token:", "language": "text", "type": "code-example_explanation", "context": "Accessing Clusters | Capture and use the generated token:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23505"} {"prompt": "Complete this text code:\n\nAPISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d \":\" | tr -d \" \")\nTOKEN=$(kubectl describe secret default-token | grep -E '^token' | cut -f2 -d':' | tr -d \" \")", "completion": "\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "text", "type": "code-example", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23506"} {"prompt": "Explain this text code:\n\nAPISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d \":\" | tr -d \" \")\nTOKEN=$(kubectl describe secret default-token | grep -E '^token' | cut -f2 -d':' | tr -d \" \")\n\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This text code Accessing Clusters | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23507"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", "completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | Using 
jsonpath:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23508"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | Using jsonpath:", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | Using jsonpath:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23509"} {"prompt": "Complete this text code:\n\nAPISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)", "completion": "\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "language": "text", "type": "code-example", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23510"} {"prompt": "Explain this text code:\n\nAPISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')\nTOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)\n\ncurl $APISERVER/api --header \"Authorization: Bearer $TOKEN\" --insecure", "completion": "This text code Accessing Clusters | The output is similar to this:", "language": "text", "type": "code-example_explanation", "context": "Accessing Clusters | The output is similar to this:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23511"} {"prompt": "Complete this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [", 
"completion": " {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "language": "json", "type": "api-response", "context": "Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_completion_23512"} {"prompt": "Explain this json code:\n\n{\n \"kind\": \"APIVersions\",\n \"versions\": [\n \"v1\"\n ],\n \"serverAddressByClientCIDRs\": [\n {\n \"clientCIDR\": \"0.0.0.0/0\",\n \"serverAddress\": \"10.0.1.149:443\"\n }\n ]\n}", "completion": "This json code Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. (T", "language": "json", "type": "api-response_explanation", "context": "Accessing Clusters | The above examples use the --insecure flag. This leaves it subject to MITM attacks. When kubectl accesses the cluster it uses a stored root certificate and client certificates to access the server. 
(T", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/", "id": "k8s_00744_code_explain_23513"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:", "completion": " name: pod-demo\nspec:\n hard:\n pods: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23514"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pod-demo\nspec:\n hard:\n pods: \"2\"", "completion": "This yaml code Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23515"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-pod.yaml --namespace=quota-pod-example", "completion": "This bash code Configure a Pod Quota for a Namespace | View detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23516"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota pod-demo --namespace=quota-pod-example --output=yaml", "completion": "This bash code Configure a Pod Quota for a Namespace | The output shows that the namespace has a quota of two Pods, and that currently there are no Pods; that is, none of the quota is used.", "language": "bash", "type": 
"kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | The output shows that the namespace has a quota of two Pods, and that currently there are no Pods; that is, none of the quota is used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23517"} {"prompt": "Complete this text code:\n\nspec:\n hard:\n pods: \"2\"\nstatus:", "completion": " hard:\n pods: \"2\"\n used:\n pods: \"0\"", "language": "text", "type": "code-example", "context": "Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23518"} {"prompt": "Explain this text code:\n\nspec:\n hard:\n pods: \"2\"\nstatus:\n hard:\n pods: \"2\"\n used:\n pods: \"0\"", "completion": "This text code Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23519"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo", "completion": " replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": 
"k8s_00745_code_completion_23520"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo\n replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "completion": "This yaml code Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23521"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-pod-deployment.yaml --namespace=quota-pod-example", "completion": "This bash code Configure a Pod Quota for a Namespace | View detailed information about the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | View detailed information about the Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23522"} {"prompt": "Explain this bash code:\n\nkubectl get deployment pod-quota-demo --namespace=quota-pod-example --output=yaml", "completion": "This bash code Configure a Pod Quota for a Namespace | The output shows that even though the Deployment specifies three replicas, only two Pods were created because of the quota you defined earlier:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | The output shows that even though the Deployment specifies three 
replicas, only two Pods were created because of the quota you defined earlier:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23523"} {"prompt": "Complete this text code:\n\nspec:\n ...\n replicas: 3\n...\nstatus:", "completion": " availableReplicas: 2\n...\nlastUpdateTime: 2021-04-02T20:57:05Z\n message: 'unable to create pods: pods \"pod-quota-demo-1650323038-\" is forbidden:\n exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'", "language": "text", "type": "code-example", "context": "Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. For example, you might decide to limit how many", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23524"} {"prompt": "Explain this text code:\n\nspec:\n ...\n replicas: 3\n...\nstatus:\n availableReplicas: 2\n...\nlastUpdateTime: 2021-04-02T20:57:05Z\n message: 'unable to create pods: pods \"pod-quota-demo-1650323038-\" is forbidden:\n exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'", "completion": "This text code Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. For example, you might decide to limit how many", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. 
For example, you might decide to limit how many", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23525"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:", "completion": " name: pod-demo\nspec:\n hard:\n pods: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23526"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pod-demo\nspec:\n hard:\n pods: \"2\"", "completion": "This yaml code Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23527"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-pod.yaml --namespace=quota-pod-example", "completion": "This shell code Configure a Pod Quota for a Namespace | View detailed information about the ResourceQuota:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod Quota for a Namespace | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23528"} {"prompt": "Explain this shell code:\n\nkubectl get resourcequota pod-demo --namespace=quota-pod-example --output=yaml", "completion": "This shell code Configure a Pod Quota for a Namespace | The output shows that the namespace has a quota of two Pods, and that currently there are no Pods; that 
is, none of the quota is used.", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod Quota for a Namespace | The output shows that the namespace has a quota of two Pods, and that currently there are no Pods; that is, none of the quota is used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23529"} {"prompt": "Complete this yaml code:\n\nspec:\n hard:\n pods: \"2\"\nstatus:", "completion": " hard:\n pods: \"2\"\n used:\n pods: \"0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23530"} {"prompt": "Explain this yaml code:\n\nspec:\n hard:\n pods: \"2\"\nstatus:\n hard:\n pods: \"2\"\n used:\n pods: \"0\"", "completion": "This yaml code Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23531"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo", "completion": " replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23532"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo\n replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "completion": "This yaml code Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23533"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-pod-deployment.yaml --namespace=quota-pod-example", "completion": "This shell code Configure a Pod Quota for a Namespace | View detailed information about the Deployment:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Pod Quota for a Namespace | View detailed information about the Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23534"} {"prompt": "Explain this shell code:\n\nkubectl get deployment pod-quota-demo --namespace=quota-pod-example --output=yaml", "completion": "This shell code Configure a Pod Quota for a Namespace | The output shows that even though the Deployment specifies three replicas, only two Pods were created because of the quota you defined earlier:", "language": "shell", "type": "code-example_explanation", "context": "Configure a 
Pod Quota for a Namespace | The output shows that even though the Deployment specifies three replicas, only two Pods were created because of the quota you defined earlier:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23535"} {"prompt": "Complete this yaml code:\n\nspec:\n ...\n replicas: 3\n...\nstatus:", "completion": " availableReplicas: 2\n...\nlastUpdateTime: 2021-04-02T20:57:05Z\n message: 'unable to create pods: pods \"pod-quota-demo-1650323038-\" is forbidden:\n exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. For example, you might decide to limit how many", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23536"} {"prompt": "Explain this yaml code:\n\nspec:\n ...\n replicas: 3\n...\nstatus:\n availableReplicas: 2\n...\nlastUpdateTime: 2021-04-02T20:57:05Z\n message: 'unable to create pods: pods \"pod-quota-demo-1650323038-\" is forbidden:\n exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'", "completion": "This yaml code Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. For example, you might decide to limit how many", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. 
For example, you might decide to limit how many", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23537"} {"prompt": "Complete this yaml code:\n\nadmin/resource/quota-pod.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:", "completion": " name: pod-demo\nspec:\n hard:\n pods: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23538"} {"prompt": "Explain this yaml code:\n\nadmin/resource/quota-pod.yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pod-demo\nspec:\n hard:\n pods: \"2\"", "completion": "This yaml code Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23539"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:", "completion": " name: pod-demo\nspec:\n hard:\n pods: \"2\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23540"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: pod-demo\nspec:\n hard:\n pods: \"2\"", "completion": "This yaml code Configure a Pod Quota for a Namespace | Create the ResourceQuota:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Pod Quota for a Namespace | Create the ResourceQuota:", 
"source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23541"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-pod.yaml --namespace=quota-pod-example", "completion": "This bash code Configure a Pod Quota for a Namespace | View detailed information about the ResourceQuota:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | View detailed information about the ResourceQuota:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23542"} {"prompt": "Explain this bash code:\n\nkubectl get resourcequota pod-demo --namespace=quota-pod-example --output=yaml", "completion": "This bash code Configure a Pod Quota for a Namespace | The output shows that the namespace has a quota of two Pods, and that currently there are no Pods; that is, none of the quota is used.", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | The output shows that the namespace has a quota of two Pods, and that currently there are no Pods; that is, none of the quota is used.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23543"} {"prompt": "Complete this text code:\n\nspec:\n hard:\n pods: \"2\"\nstatus:", "completion": " hard:\n pods: \"2\"\n used:\n pods: \"0\"", "language": "text", "type": "code-example", "context": "Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23544"} {"prompt": "Explain this text code:\n\nspec:\n hard:\n pods: \"2\"\nstatus:\n hard:\n pods: \"2\"\n used:\n pods: \"0\"", "completion": "This 
text code Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod Quota for a Namespace | Here is an example manifest for a Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23545"} {"prompt": "Complete this yaml code:\n\nadmin/resource/quota-pod-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo", "completion": " replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23546"} {"prompt": "Explain this yaml code:\n\nadmin/resource/quota-pod-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo\n replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "completion": "This yaml code Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", 
"id": "k8s_00745_code_explain_23547"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo", "completion": " replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "language": "yaml", "type": "deployment-definition", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23548"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pod-quota-demo\nspec:\n selector:\n matchLabels:\n purpose: quota-demo\n replicas: 3\n template:\n metadata:\n labels:\n purpose: quota-demo\n spec:\n containers:\n - name: pod-quota-demo\n image: nginx", "completion": "This yaml code Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "language": "yaml", "type": "deployment-definition_explanation", "context": "Configure a Pod Quota for a Namespace | In that manifest, replicas: 3 tells Kubernetes to attempt to create three new Pods, all running the same application.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23549"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/admin/resource/quota-pod-deployment.yaml --namespace=quota-pod-example", "completion": "This bash code Configure a Pod Quota for a Namespace | View detailed information about the Deployment:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | View detailed 
information about the Deployment:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23550"} {"prompt": "Explain this bash code:\n\nkubectl get deployment pod-quota-demo --namespace=quota-pod-example --output=yaml", "completion": "This bash code Configure a Pod Quota for a Namespace | The output shows that even though the Deployment specifies three replicas, only two Pods were created because of the quota you defined earlier:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Pod Quota for a Namespace | The output shows that even though the Deployment specifies three replicas, only two Pods were created because of the quota you defined earlier:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23551"} {"prompt": "Complete this text code:\n\nspec:\n ...\n replicas: 3\n...\nstatus:", "completion": " availableReplicas: 2\n...\nlastUpdateTime: 2021-04-02T20:57:05Z\n message: 'unable to create pods: pods \"pod-quota-demo-1650323038-\" is forbidden:\n exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'", "language": "text", "type": "code-example", "context": "Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. 
For example, you might decide to limit how many", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_completion_23552"} {"prompt": "Explain this text code:\n\nspec:\n ...\n replicas: 3\n...\nstatus:\n availableReplicas: 2\n...\nlastUpdateTime: 2021-04-02T20:57:05Z\n message: 'unable to create pods: pods \"pod-quota-demo-1650323038-\" is forbidden:\n exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'", "completion": "This text code Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. For example, you might decide to limit how many", "language": "text", "type": "code-example_explanation", "context": "Configure a Pod Quota for a Namespace | In this task you have defined a ResourceQuota that limited the total number of Pods, but you could also limit the total number of other kinds of object. For example, you might decide to limit how many", "source": "https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/", "id": "k8s_00745_code_explain_23553"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n\n[", "completion": " {\n \"op\": \"add\",\n \"path\": \"/status/capacity/example.com~1dongle\",\n \"value\": \"4\"\n }\n]", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Note that Kubernetes does not need to know what a dongle is or what a dongle is for. 
The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23554"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n\n[\n {\n \"op\": \"add\",\n \"path\": \"/status/capacity/example.com~1dongle\",\n \"value\": \"4\"\n }\n]", "completion": "This text code Advertise Extended Resources for a Node | Note that Kubernetes does not need to know what a dongle is or what a dongle is for. The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles.", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Note that Kubernetes does not need to know what a dongle is or what a dongle is for. The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23555"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\", "completion": " --data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1dongle\", \"value\": \"4\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23556"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\\n --data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1dongle\", \"value\": \"4\"}]' \\\n 
http://localhost:8001/api/v1/nodes//status", "completion": "This text code Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23557"} {"prompt": "Complete this text code:\n\n\"capacity\": {\n \"cpu\": \"2\",", "completion": " \"memory\": \"2049008Ki\",\n \"example.com/dongle\": \"4\",", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Describe your Node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23558"} {"prompt": "Explain this text code:\n\n\"capacity\": {\n \"cpu\": \"2\",\n \"memory\": \"2049008Ki\",\n \"example.com/dongle\": \"4\",", "completion": "This text code Advertise Extended Resources for a Node | Describe your Node:", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Describe your Node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23559"} {"prompt": "Complete this text code:\n\nCapacity:\n cpu: 2", "completion": " memory: 2049008Ki\n example.com/dongle: 4", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. 
See Assign Extended Resources to a Container.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23560"} {"prompt": "Explain this text code:\n\nCapacity:\n cpu: 2\n memory: 2049008Ki\n example.com/dongle: 4", "completion": "This text code Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. See Assign Extended Resources to a Container.", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. See Assign Extended Resources to a Container.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23561"} {"prompt": "Explain this text code:\n\nCapacity:\n ...\n example.com/special-storage: 800Gi", "completion": "This text code Advertise Extended Resources for a Node | Then a Container could request any number of bytes of special storage, up to 800Gi.", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Then a Container could request any number of bytes of special storage, up to 800Gi.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23562"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n", "completion": "[\n {\n \"op\": \"remove\",\n \"path\": \"/status/capacity/example.com~1dongle\"\n }\n]", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Start a proxy, so that you can easily send requests to the Kubernetes API server:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id":
"k8s_00747_code_completion_23563"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n\n[\n {\n \"op\": \"remove\",\n \"path\": \"/status/capacity/example.com~1dongle\"\n }\n]", "completion": "This text code Advertise Extended Resources for a Node | Start a proxy, so that you can easily send requests to the Kubernetes API server:", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Start a proxy, so that you can easily send requests to the Kubernetes API server:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23564"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\", "completion": " --data '[{\"op\": \"remove\", \"path\": \"/status/capacity/example.com~1dongle\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23565"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\\n --data '[{\"op\": \"remove\", \"path\": \"/status/capacity/example.com~1dongle\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "completion": "This text code Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id":
"k8s_00747_code_explain_23566"} {"prompt": "Explain this bash code:\n\nkubectl describe node | grep dongle", "completion": "This bash code Advertise Extended Resources for a Node | (you should not see any output)", "language": "bash", "type": "kubectl-command_explanation", "context": "Advertise Extended Resources for a Node | (you should not see any output)", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23567"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n\n[", "completion": " {\n \"op\": \"add\",\n \"path\": \"/status/capacity/example.com~1dongle\",\n \"value\": \"4\"\n }\n]", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Note that Kubernetes does not need to know what a dongle is or what a dongle is for. The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23568"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n\n[\n {\n \"op\": \"add\",\n \"path\": \"/status/capacity/example.com~1dongle\",\n \"value\": \"4\"\n }\n]", "completion": "This text code Advertise Extended Resources for a Node | Note that Kubernetes does not need to know what a dongle is or what a dongle is for. The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles.", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Note that Kubernetes does not need to know what a dongle is or what a dongle is for. 
The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23569"} {"prompt": "Complete this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\", "completion": " --data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1dongle\", \"value\": \"4\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "language": "shell", "type": "code-example", "context": "Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23570"} {"prompt": "Explain this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\\n --data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1dongle\", \"value\": \"4\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "completion": "This shell code Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "language": "shell", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23571"} {"prompt": "Complete this text code:\n\n\"capacity\": {\n \"cpu\": \"2\",", "completion": " \"memory\": \"2049008Ki\",\n \"example.com/dongle\": \"4\",", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Describe your Node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23572"} {"prompt": "Explain this text code:\n\n\"capacity\": {\n \"cpu\": \"2\",\n 
\"memory\": \"2049008Ki\",\n \"example.com/dongle\": \"4\",", "completion": "This text code Advertise Extended Resources for a Node | Describe your Node:", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Describe your Node:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23573"} {"prompt": "Complete this yaml code:\n\nCapacity:\n cpu: 2", "completion": " memory: 2049008Ki\n example.com/dongle: 4", "language": "yaml", "type": "kubernetes-manifest", "context": "Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. See Assign Extended Resources to a Container.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23574"} {"prompt": "Explain this yaml code:\n\nCapacity:\n cpu: 2\n memory: 2049008Ki\n example.com/dongle: 4", "completion": "This yaml code Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. See Assign Extended Resources to a Container.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. 
See Assign Extended Resources to a Container.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23575"} {"prompt": "Explain this yaml code:\n\nCapacity:\n ...\n example.com/special-storage: 800Gi", "completion": "This yaml code Advertise Extended Resources for a Node | Then a Container could request any number of bytes of special storage, up to 800Gi.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Advertise Extended Resources for a Node | Then a Container could request any number of bytes of special storage, up to 800Gi.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23576"} {"prompt": "Complete this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n", "completion": "[\n {\n \"op\": \"remove\",\n \"path\": \"/status/capacity/example.com~1dongle\"\n }\n]", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Start a proxy, so that you can easily send requests to the Kubernetes API server:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23577"} {"prompt": "Explain this text code:\n\nPATCH /api/v1/nodes//status HTTP/1.1\nAccept: application/json\nContent-Type: application/json-patch+json\nHost: k8s-master:8080\n\n[\n {\n \"op\": \"remove\",\n \"path\": \"/status/capacity/example.com~1dongle\"\n }\n]", "completion": "This text code Advertise Extended Resources for a Node | Start a proxy, so that you can easily send requests to the Kubernetes API server:", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Start a proxy, so that you can easily send requests to the Kubernetes API server:", "source":
"https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23578"} {"prompt": "Complete this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\", "completion": " --data '[{\"op\": \"remove\", \"path\": \"/status/capacity/example.com~1dongle\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "language": "shell", "type": "code-example", "context": "Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23579"} {"prompt": "Explain this shell code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\\n --data '[{\"op\": \"remove\", \"path\": \"/status/capacity/example.com~1dongle\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "completion": "This shell code Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "language": "shell", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23580"} {"prompt": "Explain this bash code:\n\nkubectl describe node | grep dongle", "completion": "This bash code Advertise Extended Resources for a Node | (you should not see any output)", "language": "bash", "type": "kubectl-command_explanation", "context": "Advertise Extended Resources for a Node | (you should not see any output)", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23581"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\", "completion": " --data '[{\"op\": \"add\", 
\"path\": \"/status/capacity/example.com~1dongle\", \"value\": \"4\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23582"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\\n --data '[{\"op\": \"add\", \"path\": \"/status/capacity/example.com~1dongle\", \"value\": \"4\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "completion": "This text code Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | The output shows that the Node has a capacity of 4 dongles:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23583"} {"prompt": "Complete this text code:\n\nCapacity:\n cpu: 2", "completion": " memory: 2049008Ki\n example.com/dongle: 4", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. See Assign Extended Resources to a Container.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23584"} {"prompt": "Explain this text code:\n\nCapacity:\n cpu: 2\n memory: 2049008Ki\n example.com/dongle: 4", "completion": "This text code Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. 
See Assign Extended Resources to a Container.", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Now, application developers can create Pods that request a certain number of dongles. See Assign Extended Resources to a Container.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23585"} {"prompt": "Explain this text code:\n\nCapacity:\n ...\n example.com/special-storage: 800Gi", "completion": "This text code Advertise Extended Resources for a Node | Then a Container could request any number of bytes of special storage, up to 800Gi.", "language": "text", "type": "code-example_explanation", "context": "Advertise Extended Resources for a Node | Then a Container could request any number of bytes of special storage, up to 800Gi.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23586"} {"prompt": "Complete this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\", "completion": " --data '[{\"op\": \"remove\", \"path\": \"/status/capacity/example.com~1dongle\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "language": "text", "type": "code-example", "context": "Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_completion_23587"} {"prompt": "Explain this text code:\n\ncurl --header \"Content-Type: application/json-patch+json\" \\\n --request PATCH \\\n --data '[{\"op\": \"remove\", \"path\": \"/status/capacity/example.com~1dongle\"}]' \\\n http://localhost:8001/api/v1/nodes//status", "completion": "This text code Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "language": "text", "type": "code-example_explanation", 
"context": "Advertise Extended Resources for a Node | Verify that the dongle advertisement has been removed:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/", "id": "k8s_00747_code_explain_23588"} {"prompt": "Explain this text code:\n\nThis command is helpful to get yourself aware of the current user attributes,\n especially when dynamic authentication, e.g., token webhook, auth proxy, or OIDC provider,\n is enabled in the Kubernetes cluster.", "completion": "This text code kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/", "id": "k8s_00748_code_explain_23589"} {"prompt": "Complete this text code:\n\n# Get your subject attributes\n kubectl auth whoami", "completion": " \n # Get your subject attributes in JSON format\n kubectl auth whoami -o json", "language": "text", "type": "code-example", "context": "kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/", "id": "k8s_00748_code_completion_23590"} {"prompt": "Explain this text code:\n\n# Get your subject attributes\n kubectl auth whoami\n \n # Get your subject attributes in JSON format\n kubectl auth whoami -o json", "completion": "This text code kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/", "id": "k8s_00748_code_explain_23591"} {"prompt": "Explain this text code:\n\nThis command is helpful to get yourself aware of the current user attributes,\n especially when dynamic authentication, e.g., token webhook, auth proxy, or OIDC provider,\n is enabled in the Kubernetes cluster.", "completion": "This text code kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/", "id": "k8s_00748_code_explain_23592"} {"prompt": "Complete this text code:\n\n# Get your subject attributes\n kubectl auth whoami", "completion": " \n # Get your subject attributes in JSON format\n kubectl auth whoami -o json", "language": "text", "type": "code-example", "context": "kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/", "id": "k8s_00748_code_completion_23593"} {"prompt": "Explain this text code:\n\n# Get your subject attributes\n kubectl auth whoami\n \n # Get your subject attributes in JSON format\n kubectl auth whoami -o json", "completion": "This text code kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl auth whoami | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_whoami/", "id": "k8s_00748_code_explain_23594"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",", "completion": " \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "language": "json", "type": "api-response", "context": "Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/controlling-access/", "id": "k8s_00749_code_completion_23595"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"abac.authorization.kubernetes.io/v1beta1\",\n \"kind\": \"Policy\",\n \"spec\": {\n \"user\": \"bob\",\n \"namespace\": \"projectCaribou\",\n \"resource\": \"pods\",\n \"readonly\": true\n }\n}", "completion": "This json code Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read 
objects in the projectCaribou namespace:", "language": "json", "type": "api-response_explanation", "context": "Controlling Access to the Kubernetes API | If Bob makes the following request, the request is authorized because he is allowed to read objects in the projectCaribou namespace:", "source": "https://kubernetes.io/docs/concepts/security/controlling-access/", "id": "k8s_00749_code_explain_23596"} {"prompt": "Complete this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",", "completion": " \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "language": "json", "type": "api-response", "context": "Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/controlling-access/", "id": "k8s_00749_code_completion_23597"} {"prompt": "Explain this json code:\n\n{\n \"apiVersion\": \"authorization.k8s.io/v1beta1\",\n \"kind\": \"SubjectAccessReview\",\n \"spec\": {\n \"resourceAttributes\": {\n \"namespace\": \"projectCaribou\",\n \"verb\": \"get\",\n \"group\": \"unicorn.example.org\",\n \"resource\": \"pods\"\n }\n }\n}", "completion": "This json code Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. If Bob makes a request to read (get) objects in a different namespace su", "language": "json", "type": "api-response_explanation", "context": "Controlling Access to the Kubernetes API | If Bob makes a request to write (create or update) to the objects in the projectCaribou namespace, his authorization is denied. 
If Bob makes a request to read (get) objects in a different namespace su", "source": "https://kubernetes.io/docs/concepts/security/controlling-access/", "id": "k8s_00749_code_explain_23598"} {"prompt": "Explain this text code:\n\ncurl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz", "completion": "This text code Use Cilium for NetworkPolicy | Then extract the downloaded file to your /usr/local/bin directory with the following command:", "language": "text", "type": "code-example_explanation", "context": "Use Cilium for NetworkPolicy | Then extract the downloaded file to your /usr/local/bin directory with the following command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23607"} {"prompt": "Explain this text code:\n\nsudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin\nrm cilium-linux-amd64.tar.gz", "completion": "This text code Use Cilium for NetworkPolicy | After running the above commands, you can now install Cilium with the following command:", "language": "text", "type": "code-example_explanation", "context": "Use Cilium for NetworkPolicy | After running the above commands, you can now install Cilium with the following command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23608"} {"prompt": "Explain this bash code:\n\nkubectl get pods --namespace=kube-system -l k8s-app=cilium", "completion": "This bash code Use Cilium for NetworkPolicy | You'll see a list of Pods similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Use Cilium for NetworkPolicy | You'll see a list of Pods similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23609"} {"prompt": "Explain this text code:\n\nNAME 
READY STATUS RESTARTS AGE\ncilium-kkdhz 1/1 Running 0 3m23s\n...", "completion": "This text code Use Cilium for NetworkPolicy | A cilium Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF.", "language": "text", "type": "code-example_explanation", "context": "Use Cilium for NetworkPolicy | A cilium Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23610"} {"prompt": "Explain this shell code:\n\ncurl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz", "completion": "This shell code Use Cilium for NetworkPolicy | Then extract the downloaded file to your /usr/local/bin directory with the following command:", "language": "shell", "type": "code-example_explanation", "context": "Use Cilium for NetworkPolicy | Then extract the downloaded file to your /usr/local/bin directory with the following command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23611"} {"prompt": "Explain this shell code:\n\nsudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin\nrm cilium-linux-amd64.tar.gz", "completion": "This shell code Use Cilium for NetworkPolicy | After running the above commands, you can now install Cilium with the following command:", "language": "shell", "type": "code-example_explanation", "context": "Use Cilium for NetworkPolicy | After running the above commands, you can now install Cilium with the following command:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23612"} {"prompt": "Explain this shell code:\n\nkubectl get pods --namespace=kube-system -l 
k8s-app=cilium", "completion": "This shell code Use Cilium for NetworkPolicy | You'll see a list of Pods similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Use Cilium for NetworkPolicy | You'll see a list of Pods similar to this:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23613"} {"prompt": "Explain this console code:\n\nNAME              READY   STATUS    RESTARTS   AGE\ncilium-kkdhz      1/1     Running   0          3m23s\n...", "completion": "This console code Use Cilium for NetworkPolicy | A cilium Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF.", "language": "console", "type": "code-example_explanation", "context": "Use Cilium for NetworkPolicy | A cilium Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF.", "source": "https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/", "id": "k8s_00750_code_explain_23614"} {"prompt": "Explain this bash code:\n\nkubectl rollout status (TYPE NAME | TYPE/NAME) [flags]", "completion": "This bash code kubectl rollout status | Filename, directory, or URL to files identifying the resource to get from a server.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl rollout status | Filename, directory, or URL to files identifying the resource to get from a server.", 
"source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status/", "id": "k8s_00751_code_explain_23619"} {"prompt": "Explain this text code:\n\n# Watch the rollout status of a deployment\n kubectl rollout status deployment/nginx", "completion": "This text code kubectl rollout status | Filename, directory, or URL to files identifying the resource to get from a server.", "language": "text", "type": "code-example_explanation", "context": "kubectl rollout status | Filename, directory, or URL to files identifying the resource to get from a server.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status/", "id": "k8s_00751_code_explain_23620"} {"prompt": "Explain this text code:\n\n# Approve CSR 'csr-sqgzp'\n kubectl certificate approve csr-sqgzp", "completion": "This text code kubectl certificate approve | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl certificate approve | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_approve/", "id": "k8s_00756_code_explain_23624"} {"prompt": "Complete this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object", "completion": " properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": "yaml", "type": "kubernetes-manifest", "context": "Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_completion_23625"} {"prompt": "Explain this yaml code:\n\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code Custom Resources | Field selectors can then be used to get only resources with a color of blue:", 
"language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_explain_23626"} {"prompt": "Explain this bash code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This bash code Custom Resources | The output should be:", "language": "bash", "type": "kubectl-command_explanation", "context": "Custom Resources | The output should be:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_explain_23627"} {"prompt": "Explain this text code:\n\nNAME COLOR SIZE\nexample1 blue S\nexample2 blue M", "completion": "This text code Custom Resources |", "language": "text", "type": "code-example_explanation", "context": "Custom Resources |", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_explain_23628"} {"prompt": "Explain this text code:\n\nContent-Type: application/strategic-merge-patch+json", "completion": "This text code Custom Resources | When you create a custom resource, either via a CRD or an AA, you get many features for your API, compared to implementing it outside the Kubernetes platform:", "language": "text", "type": "code-example_explanation", "context": "Custom Resources | When you create a custom resource, either via a CRD or an AA, you get many features for your API, compared to implementing it outside the Kubernetes platform:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_explain_23629"} {"prompt": "Explain this text code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml", "completion": "This text code Custom Resources | Field selectors can then be used to get only 
resources with a color of blue:", "language": "text", "type": "code-example_explanation", "context": "Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_explain_23630"} {"prompt": "Explain this shell code:\n\nkubectl get shirts.stable.example.com --field-selector spec.color=blue", "completion": "This shell code Custom Resources | The output should be:", "language": "shell", "type": "code-example_explanation", "context": "Custom Resources | The output should be:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_explain_23633"} {"prompt": "Complete this yaml code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:", "completion": " type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "language": 
"yaml", "type": "kubernetes-manifest", "context": "Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_completion_23635"} {"prompt": "Explain this yaml code:\n\ncustomresourcedefinition/shirt-resource-definition.yaml\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: shirts.stable.example.com\nspec:\n group: stable.example.com\n scope: Namespaced\n names:\n plural: shirts\n singular: shirt\n kind: Shirt\n versions:\n - name: v1\n served: true\n storage: true\n schema:\n openAPIV3Schema:\n type: object\n properties:\n spec:\n type: object\n properties:\n color:\n type: string\n size:\n type: string\n selectableFields:\n - jsonPath: .spec.color\n - jsonPath: .spec.size\n additionalPrinterColumns:\n - jsonPath: .spec.color\n name: Color\n type: string\n - jsonPath: .spec.size\n name: Size\n type: string", "completion": "This yaml code Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Custom Resources | Field selectors can then be used to get only resources with a color of blue:", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/", "id": "k8s_00757_code_explain_23636"} 
{"prompt": "Explain this bash code:\n\nkubectl delete pods -l app.kubernetes.io/name=MyApp", "completion": "This bash code Delete a StatefulSet | Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods ha", "language": "bash", "type": "kubectl-command_explanation", "context": "Delete a StatefulSet | Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods ha", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_explain_23640"} {"prompt": "Complete this text code:\n\ngrace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}')\nkubectl delete statefulset -l app.kubernetes.io/name=MyApp", "completion": "sleep $grace\nkubectl delete pvc -l app.kubernetes.io/name=MyApp", "language": "text", "type": "code-example", "context": "Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_completion_23641"} {"prompt": "Explain this text code:\n\ngrace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}')\nkubectl delete statefulset -l app.kubernetes.io/name=MyApp\nsleep $grace\nkubectl delete pvc -l app.kubernetes.io/name=MyApp", "completion": "This text code Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "language": "text", "type": "code-example_explanation", "context": "Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "source": 
"https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_explain_23642"} {"prompt": "Explain this shell code:\n\nkubectl delete pods -l app.kubernetes.io/name=MyApp", "completion": "This shell code Delete a StatefulSet | Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods ha", "language": "shell", "type": "code-example_explanation", "context": "Delete a StatefulSet | Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods ha", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_explain_23643"} {"prompt": "Complete this shell code:\n\ngrace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')\nkubectl delete statefulset -l app.kubernetes.io/name=MyApp", "completion": "sleep $grace\nkubectl delete pvc -l app.kubernetes.io/name=MyApp", "language": "shell", "type": "code-example", "context": "Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_completion_23644"} {"prompt": "Explain this shell code:\n\ngrace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')\nkubectl delete statefulset -l app.kubernetes.io/name=MyApp\nsleep $grace\nkubectl delete pvc -l app.kubernetes.io/name=MyApp", "completion": "This shell code Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "language": "shell", "type": "code-example_explanation", "context": "Delete a StatefulSet | In the example 
above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_explain_23645"} {"prompt": "Explain this bash code:\n\nkubectl delete pods -l app.kubernetes.io/name=MyApp", "completion": "This bash code Delete a StatefulSet | Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods ha", "language": "bash", "type": "kubectl-command_explanation", "context": "Delete a StatefulSet | Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods ha", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_explain_23646"} {"prompt": "Complete this text code:\n\ngrace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')\nkubectl delete statefulset -l app.kubernetes.io/name=MyApp", "completion": "sleep $grace\nkubectl delete pvc -l app.kubernetes.io/name=MyApp", "language": "text", "type": "code-example", "context": "Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_completion_23647"} {"prompt": "Explain this text code:\n\ngrace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')\nkubectl delete statefulset -l app.kubernetes.io/name=MyApp\nsleep $grace\nkubectl delete pvc -l app.kubernetes.io/name=MyApp", "completion": "This text code Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", 
"language": "text", "type": "code-example_explanation", "context": "Delete a StatefulSet | In the example above, the Pods have the label app.kubernetes.io/name=MyApp; substitute your own label as appropriate.", "source": "https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/", "id": "k8s_00758_code_explain_23648"} {"prompt": "Explain this bash code:\n\nkubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)", "completion": "This bash code kubectl scale | Select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl scale | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_scale/", "id": "k8s_00760_code_explain_23649"} {"prompt": "Complete this text code:\n\n# Scale a replica set named 'foo' to 3\n kubectl scale --replicas=3 rs/foo\n \n # Scale a resource identified by type and name specified in \"foo.yaml\" to 3\n kubectl scale --replicas=3 -f foo.yaml\n \n # If the deployment named mysql's current size is 2, scale mysql to 3", "completion": " kubectl scale --current-replicas=2 --replicas=3 deployment/mysql\n \n # Scale multiple replication controllers\n kubectl scale --replicas=5 rc/example1 rc/example2 rc/example3\n \n # Scale stateful set named 'web' to 3\n kubectl scale --replicas=3 statefulset/web", "language": "text", "type": "code-example", "context": "kubectl scale | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_scale/", "id": "k8s_00760_code_completion_23650"} {"prompt": "Explain this text code:\n\n# Scale a replica set named 'foo' to 3\n kubectl scale --replicas=3 rs/foo\n \n # Scale a resource identified by type and name specified in \"foo.yaml\" to 3\n kubectl scale --replicas=3 -f foo.yaml\n \n # If 
the deployment named mysql's current size is 2, scale mysql to 3\n kubectl scale --current-replicas=2 --replicas=3 deployment/mysql\n \n # Scale multiple replication controllers\n kubectl scale --replicas=5 rc/example1 rc/example2 rc/example3\n \n # Scale stateful set named 'web' to 3\n kubectl scale --replicas=3 statefulset/web", "completion": "This text code kubectl scale | Select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl scale | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_scale/", "id": "k8s_00760_code_explain_23651"} {"prompt": "Explain this bash code:\n\nkubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)", "completion": "This bash code kubectl scale | Select all resources in the namespace of the specified resource types", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl scale | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_scale/", "id": "k8s_00760_code_explain_23652"} {"prompt": "Complete this text code:\n\n# Scale a replica set named 'foo' to 3\n kubectl scale --replicas=3 rs/foo\n \n # Scale a resource identified by type and name specified in \"foo.yaml\" to 3\n kubectl scale --replicas=3 -f foo.yaml\n \n # If the deployment named mysql's current size is 2, scale mysql to 3", "completion": " kubectl scale --current-replicas=2 --replicas=3 deployment/mysql\n \n # Scale multiple replication controllers\n kubectl scale --replicas=5 rc/example1 rc/example2 rc/example3\n \n # Scale stateful set named 'web' to 3\n kubectl scale --replicas=3 statefulset/web", "language": "text", "type": "code-example", "context": "kubectl scale | Select all resources in the namespace of the 
specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_scale/", "id": "k8s_00760_code_completion_23653"} {"prompt": "Explain this text code:\n\n# Scale a replica set named 'foo' to 3\n kubectl scale --replicas=3 rs/foo\n \n # Scale a resource identified by type and name specified in \"foo.yaml\" to 3\n kubectl scale --replicas=3 -f foo.yaml\n \n # If the deployment named mysql's current size is 2, scale mysql to 3\n kubectl scale --current-replicas=2 --replicas=3 deployment/mysql\n \n # Scale multiple replication controllers\n kubectl scale --replicas=5 rc/example1 rc/example2 rc/example3\n \n # Scale stateful set named 'web' to 3\n kubectl scale --replicas=3 statefulset/web", "completion": "This text code kubectl scale | Select all resources in the namespace of the specified resource types", "language": "text", "type": "code-example_explanation", "context": "kubectl scale | Select all resources in the namespace of the specified resource types", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_scale/", "id": "k8s_00760_code_explain_23654"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "language": "text", "type": "code-example", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_completion_23655"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\\nsudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && 
\\nsudo apt-mark hold kubeadm", "completion": "This text code Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23656"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading Linux nodes | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23657"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading Linux nodes | For worker nodes this upgrades the local kubelet configuration:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For worker nodes this upgrades the local kubelet configuration:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23658"} {"prompt": "Explain this text code:\n\n# execute this command on a control plane node\n# replace <node-to-drain> with the name of your node you are draining\nkubectl drain <node-to-drain> --ignore-daemonsets", "completion": "This text 
code Upgrading Linux nodes | Upgrade the kubelet and kubectl:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Upgrade the kubelet and kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23659"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "language": "text", "type": "code-example", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_completion_23660"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\\nsudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "completion": "This text code Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23661"} {"prompt": 
"Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading Linux nodes | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23662"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading Linux nodes | Restart the kubelet:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Restart the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23663"} {"prompt": "Explain this text code:\n\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet", "completion": "This text code Upgrading Linux nodes | Bring the node back online by marking it schedulable:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23664"} {"prompt": "Explain this text code:\n\n# execute this command on a control plane node\n# replace <node-to-uncordon> with the name of your node\nkubectl uncordon <node-to-uncordon>", "completion": "This text code Upgrading Linux nodes |", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23665"} {"prompt": "Complete this shell code:\n\n# replace x in 
1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "language": "shell", "type": "code-example", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_completion_23666"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\\nsudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "completion": "This shell code Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23667"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This shell code Upgrading Linux nodes | For systems with DNF5:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", 
"id": "k8s_00761_code_explain_23668"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This shell code Upgrading Linux nodes | For worker nodes this upgrades the local kubelet configuration:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For worker nodes this upgrades the local kubelet configuration:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23669"} {"prompt": "Explain this shell code:\n\n# execute this command on a control plane node\n# replace <node-to-drain> with the name of your node you are draining\nkubectl drain <node-to-drain> --ignore-daemonsets", "completion": "This shell code Upgrading Linux nodes | Upgrade the kubelet and kubectl:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Upgrade the kubelet and kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23670"} {"prompt": "Complete this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "language": "shell", "type": "code-example", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_completion_23671"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && 
\\\nsudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "completion": "This shell code Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23672"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This shell code Upgrading Linux nodes | For systems with DNF5:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23673"} {"prompt": "Explain this shell code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This shell code Upgrading Linux nodes | Restart the kubelet:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Restart the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23674"} {"prompt": "Explain this shell code:\n\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet", "completion": "This shell code 
Upgrading Linux nodes | Bring the node back online by marking it schedulable:", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23675"} {"prompt": "Explain this shell code:\n\n# execute this command on a control plane node\n# replace <node-to-uncordon> with the name of your node\nkubectl uncordon <node-to-uncordon>", "completion": "This shell code Upgrading Linux nodes |", "language": "shell", "type": "code-example_explanation", "context": "Upgrading Linux nodes |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23676"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "language": "text", "type": "code-example", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_completion_23677"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubeadm && \\\nsudo apt-get update && sudo apt-get install -y kubeadm='1.34.x-*' && \\\nsudo apt-mark hold kubeadm", "completion": "This text code Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "language": "text", "type": 
"code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1.34.x-* with the la", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23678"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading Linux nodes | For systems with DNF5:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23679"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubeadm-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading Linux nodes | For worker nodes this upgrades the local kubelet configuration:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For worker nodes this upgrades the local kubelet configuration:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23680"} {"prompt": "Explain this text code:\n\n# execute this command on a control plane node\n# replace <node-to-drain> with the name of your node you are draining\nkubectl drain <node-to-drain> --ignore-daemonsets", "completion": "This text code Upgrading Linux nodes | Upgrade the kubelet and kubectl:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Upgrade the kubelet and kubectl:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": 
"k8s_00761_code_explain_23681"} {"prompt": "Complete this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\", "completion": "sudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "language": "text", "type": "code-example", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_completion_23682"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo apt-mark unhold kubelet kubectl && \\\nsudo apt-get update && sudo apt-get install -y kubelet='1.34.x-*' kubectl='1.34.x-*' && \\\nsudo apt-mark hold kubelet kubectl", "completion": "This text code Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF:# replace x in 1.34.x-* with the latest patch version sudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes For systems with DNF5:# replace x in 1", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23683"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --disableexcludes=kubernetes", "completion": "This text code Upgrading Linux nodes | For systems with DNF5:", "language": "text", "type": 
"code-example_explanation", "context": "Upgrading Linux nodes | For systems with DNF5:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23684"} {"prompt": "Explain this text code:\n\n# replace x in 1.34.x-* with the latest patch version\nsudo yum install -y kubelet-'1.34.x-*' kubectl-'1.34.x-*' --setopt=disable_excludes=kubernetes", "completion": "This text code Upgrading Linux nodes | Restart the kubelet:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Restart the kubelet:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23685"} {"prompt": "Explain this text code:\n\nsudo systemctl daemon-reload\nsudo systemctl restart kubelet", "completion": "This text code Upgrading Linux nodes | Bring the node back online by marking it schedulable:", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes | Bring the node back online by marking it schedulable:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23686"} {"prompt": "Explain this text code:\n\n# execute this command on a control plane node\n# replace <node-to-uncordon> with the name of your node\nkubectl uncordon <node-to-uncordon>", "completion": "This text code Upgrading Linux nodes |", "language": "text", "type": "code-example_explanation", "context": "Upgrading Linux nodes |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/", "id": "k8s_00761_code_explain_23687"} {"prompt": "Complete this text code:\n\n# Show metrics for all nodes\n kubectl top node", "completion": " \n # Show metrics for a given node\n kubectl top node NODE_NAME", "language": "text", "type": "code-example", "context": "kubectl top node | help for node", "source": 
"https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/", "id": "k8s_00762_code_completion_23688"} {"prompt": "Explain this text code:\n\n# Show metrics for all nodes\n kubectl top node\n \n # Show metrics for a given node\n kubectl top node NODE_NAME", "completion": "This text code kubectl top node | help for node", "language": "text", "type": "code-example_explanation", "context": "kubectl top node | help for node", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/", "id": "k8s_00762_code_explain_23689"} {"prompt": "Complete this text code:\n\n# Show metrics for all nodes\n kubectl top node", "completion": " \n # Show metrics for a given node\n kubectl top node NODE_NAME", "language": "text", "type": "code-example", "context": "kubectl top node | help for node", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/", "id": "k8s_00762_code_completion_23690"} {"prompt": "Explain this text code:\n\n# Show metrics for all nodes\n kubectl top node\n \n # Show metrics for a given node\n kubectl top node NODE_NAME", "completion": "This text code kubectl top node | help for node", "language": "text", "type": "code-example_explanation", "context": "kubectl top node | help for node", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_node/", "id": "k8s_00762_code_explain_23691"} {"prompt": "Explain this bash code:\n\nkubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu=CPU] [--memory=MEMORY]", "completion": "This bash code kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_autoscale/", "id": "k8s_00763_code_explain_23692"} {"prompt": "Complete this text code:\n\n# Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used\n kubectl autoscale deployment foo --min=2 --max=10\n \n # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%\n kubectl autoscale rc foo --max=5 --cpu=80%", "completion": " \n # Auto scale a deployment \"bar\", with the number of pods between 3 and 6, target average CPU of 500m and memory of 200Mi\n kubectl autoscale deployment bar --min=3 --max=6 --cpu=500m --memory=200Mi\n \n # Auto scale a deployment \"bar\", with the number of pods between 2 and 8, target CPU utilization 60% and memory utilization 70%\n kubectl autoscale deployment bar --min=3 --max=6 --cpu=60% --memory=70%", "language": "text", "type": "code-example", "context": "kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_autoscale/", "id": "k8s_00763_code_completion_23693"} {"prompt": "Explain this text code:\n\n# Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used\n kubectl autoscale deployment foo --min=2 --max=10\n \n # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%\n kubectl autoscale rc foo --max=5 --cpu=80%\n \n # Auto scale a deployment \"bar\", with the number of pods between 3 and 6, target average CPU of 500m and memory of 200Mi\n kubectl autoscale deployment bar --min=3 --max=6 --cpu=500m --memory=200Mi\n \n # Auto scale a deployment \"bar\", with the number of pods between 2 and 8, target CPU utilization 60% and memory utilization 70%\n kubectl autoscale deployment bar --min=2 --max=8 --cpu=60% --memory=70%", "completion": "This text code kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_autoscale/", "id": "k8s_00763_code_explain_23694"} {"prompt": "Explain this bash code:\n\nkubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu=CPU] [--memory=MEMORY]", "completion": "This bash code kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "language": "bash", "type": "kubectl-command_explanation", "context": "kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_autoscale/", "id": "k8s_00763_code_explain_23695"} {"prompt": "Complete this text code:\n\n# Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used\n kubectl autoscale deployment foo --min=2 --max=10\n \n # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%\n kubectl autoscale rc foo --max=5 --cpu=80%", "completion": " \n # Auto scale a deployment \"bar\", with the number of pods between 3 and 6, target average CPU of 500m and memory of 200Mi\n kubectl autoscale deployment bar --min=3 --max=6 --cpu=500m --memory=200Mi\n \n # Auto scale a deployment \"bar\", with the number of pods between 2 and 8, target CPU utilization 60% and memory utilization 70%\n kubectl autoscale deployment bar --min=2 --max=8 --cpu=60% --memory=70%", "language": "text", "type": "code-example", "context": "kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_autoscale/", "id": "k8s_00763_code_completion_23696"} {"prompt": "Explain this text code:\n\n# Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used\n kubectl autoscale deployment foo --min=2 --max=10\n \n # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%\n kubectl autoscale rc foo --max=5 --cpu=80%\n \n # Auto scale a deployment \"bar\", with the number of pods between 3 and 6, target average CPU of 500m and memory of 200Mi\n kubectl autoscale deployment bar --min=3 --max=6 --cpu=500m --memory=200Mi\n \n # Auto scale a deployment \"bar\", with the number of pods between 2 and 8, target CPU utilization 60% and memory utilization 70%\n kubectl autoscale deployment bar --min=2 --max=8 --cpu=60% --memory=70%", "completion": "This text code kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.", "language": "text", "type": "code-example_explanation", "context": "kubectl autoscale | If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.", "source": "https://kubernetes.io/docs/reference/kubectl/generated/kubectl_autoscale/", "id": "k8s_00763_code_explain_23697"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\naddress: \"192.168.0.8\"\nport: 20250\nserializeImagePulls: false", "completion": "evictionHard:\n memory.available: \"100Mi\"\n nodefs.available: \"10%\"\n nodefs.inodesFree: \"5%\"\n imagefs.available: \"15%\"\n imagefs.inodesFree: \"5%\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_completion_23698"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\naddress: \"192.168.0.8\"\nport: 20250\nserializeImagePulls: false\nevictionHard:\n memory.available: \"100Mi\"\n nodefs.available: \"10%\"\n nodefs.inodesFree: \"5%\"\n imagefs.available: \"15%\"\n imagefs.inodesFree: \"5%\"", "completion": "This yaml code Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23699"} {"prompt": "Explain this text code:\n\ncurl -X GET http://127.0.0.1:8001/api/v1/nodes//proxy/configz | jq .", "completion": "This text code Set Kubelet Parameters Via A Configuration File |", "language": "text", "type": "code-example_explanation", "context": "Set Kubelet Parameters Via A Configuration File |", "source": 
"https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23700"} {"prompt": "Complete this json code:\n\n{\n \"kubeletconfig\": {\n \"enableServer\": true,\n \"staticPodPath\": \"/var/run/kubernetes/static-pods\",\n \"syncFrequency\": \"1m0s\",\n \"fileCheckFrequency\": \"20s\",\n \"httpCheckFrequency\": \"20s\",\n \"address\": \"192.168.1.16\",\n \"port\": 10250,\n \"readOnlyPort\": 10255,\n \"tlsCertFile\": \"/var/lib/kubelet/pki/kubelet.crt\",\n \"tlsPrivateKeyFile\": \"/var/lib/kubelet/pki/kubelet.key\",\n \"rotateCertificates\": true,\n \"authentication\": {\n \"x509\": {\n \"clientCAFile\": \"/var/run/kubernetes/client-ca.crt\"\n },\n \"webhook\": {\n \"enabled\": true,\n \"cacheTTL\": \"2m0s\"\n },\n \"anonymous\": {\n \"enabled\": true\n }\n },\n \"authorization\": {\n \"mode\": \"AlwaysAllow\",\n \"webhook\": {\n \"cacheAuthorizedTTL\": \"5m0s\",\n \"cacheUnauthorizedTTL\": \"30s\"\n }\n },\n \"registryPullQPS\": 5,\n \"registryBurst\": 10,\n \"eventRecordQPS\": 50,\n \"eventBurst\": 100,\n \"enableDebuggingHandlers\": true,\n \"healthzPort\": 10248,\n \"healthzBindAddress\": \"127.0.0.1\",\n \"oomScoreAdj\": -999,\n \"clusterDomain\": \"cluster.local\",\n \"clusterDNS\": [\n \"10.0.0.10\"\n ],\n \"streamingConnectionIdleTimeout\": \"4h0m0s\",\n \"nodeStatusUpdateFrequency\": \"10s\",\n \"nodeStatusReportFrequency\": \"5m0s\",\n \"nodeLeaseDurationSeconds\": 40,\n \"imageMinimumGCAge\": \"2m0s\",\n \"imageMaximumGCAge\": \"0s\",\n \"imageGCHighThresholdPercent\": 85,\n \"imageGCLowThresholdPercent\": 80,\n \"volumeStatsAggPeriod\": \"1m0s\",\n \"cgroupsPerQOS\": true,\n \"cgroupDriver\": \"systemd\",\n \"cpuManagerPolicy\": \"none\",\n \"cpuManagerReconcilePeriod\": \"10s\",\n \"memoryManagerPolicy\": \"None\",\n \"topologyManagerPolicy\": \"none\",\n \"topologyManagerScope\": \"container\",", "completion": " \"runtimeRequestTimeout\": \"2m0s\",\n \"hairpinMode\": \"promiscuous-bridge\",\n \"maxPods\": 
110,\n \"podPidsLimit\": -1,\n \"resolvConf\": \"/run/systemd/resolve/resolv.conf\",\n \"cpuCFSQuota\": true,\n \"cpuCFSQuotaPeriod\": \"100ms\",\n \"nodeStatusMaxImages\": 50,\n \"maxOpenFiles\": 1000000,\n \"contentType\": \"application/vnd.kubernetes.protobuf\",\n \"kubeAPIQPS\": 50,\n \"kubeAPIBurst\": 100,\n \"serializeImagePulls\": true,\n \"evictionHard\": {\n \"imagefs.available\": \"15%\",\n \"memory.available\": \"100Mi\",\n \"nodefs.available\": \"10%\",\n \"nodefs.inodesFree\": \"5%\",\n \"imagefs.inodesFree\": \"5%\"\n },\n \"evictionPressureTransitionPeriod\": \"1m0s\",\n \"mergeDefaultEvictionSettings\": false,\n \"enableControllerAttachDetach\": true,\n \"makeIPTablesUtilChains\": true,\n \"iptablesMasqueradeBit\": 14,\n \"iptablesDropBit\": 15,\n \"featureGates\": {\n \"AllAlpha\": false\n },\n \"failSwapOn\": false,\n \"memorySwap\": {},\n \"containerLogMaxSize\": \"10Mi\",\n \"containerLogMaxFiles\": 5,\n \"configMapAndSecretChangeDetectionStrategy\": \"Watch\",\n \"enforceNodeAllocatable\": [\n \"pods\"\n ],\n \"volumePluginDir\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n \"logging\": {\n \"format\": \"text\",\n \"flushFrequency\": \"5s\",\n \"verbosity\": 3,\n \"options\": {\n \"json\": {\n \"infoBufferSize\": \"0\"\n }\n }\n },\n \"enableSystemLogHandler\": true,\n \"enableSystemLogQuery\": false,\n \"shutdownGracePeriod\": \"0s\",\n \"shutdownGracePeriodCriticalPods\": \"0s\",\n \"enableProfilingHandler\": true,\n \"enableDebugFlagsHandler\": true,\n \"seccompDefault\": false,\n \"memoryThrottlingFactor\": 0.9,\n \"registerNode\": true,\n \"localStorageCapacityIsolation\": true,\n \"containerRuntimeEndpoint\": \"unix:///var/run/crio/crio.sock\"\n }\n}", "language": "json", "type": "api-response", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_completion_23701"} {"prompt": "Explain this json 
code:\n\n{\n \"kubeletconfig\": {\n \"enableServer\": true,\n \"staticPodPath\": \"/var/run/kubernetes/static-pods\",\n \"syncFrequency\": \"1m0s\",\n \"fileCheckFrequency\": \"20s\",\n \"httpCheckFrequency\": \"20s\",\n \"address\": \"192.168.1.16\",\n \"port\": 10250,\n \"readOnlyPort\": 10255,\n \"tlsCertFile\": \"/var/lib/kubelet/pki/kubelet.crt\",\n \"tlsPrivateKeyFile\": \"/var/lib/kubelet/pki/kubelet.key\",\n \"rotateCertificates\": true,\n \"authentication\": {\n \"x509\": {\n \"clientCAFile\": \"/var/run/kubernetes/client-ca.crt\"\n },\n \"webhook\": {\n \"enabled\": true,\n \"cacheTTL\": \"2m0s\"\n },\n \"anonymous\": {\n \"enabled\": true\n }\n },\n \"authorization\": {\n \"mode\": \"AlwaysAllow\",\n \"webhook\": {\n \"cacheAuthorizedTTL\": \"5m0s\",\n \"cacheUnauthorizedTTL\": \"30s\"\n }\n },\n \"registryPullQPS\": 5,\n \"registryBurst\": 10,\n \"eventRecordQPS\": 50,\n \"eventBurst\": 100,\n \"enableDebuggingHandlers\": true,\n \"healthzPort\": 10248,\n \"healthzBindAddress\": \"127.0.0.1\",\n \"oomScoreAdj\": -999,\n \"clusterDomain\": \"cluster.local\",\n \"clusterDNS\": [\n \"10.0.0.10\"\n ],\n \"streamingConnectionIdleTimeout\": \"4h0m0s\",\n \"nodeStatusUpdateFrequency\": \"10s\",\n \"nodeStatusReportFrequency\": \"5m0s\",\n \"nodeLeaseDurationSeconds\": 40,\n \"imageMinimumGCAge\": \"2m0s\",\n \"imageMaximumGCAge\": \"0s\",\n \"imageGCHighThresholdPercent\": 85,\n \"imageGCLowThresholdPercent\": 80,\n \"volumeStatsAggPeriod\": \"1m0s\",\n \"cgroupsPerQOS\": true,\n \"cgroupDriver\": \"systemd\",\n \"cpuManagerPolicy\": \"none\",\n \"cpuManagerReconcilePeriod\": \"10s\",\n \"memoryManagerPolicy\": \"None\",\n \"topologyManagerPolicy\": \"none\",\n \"topologyManagerScope\": \"container\",\n \"runtimeRequestTimeout\": \"2m0s\",\n \"hairpinMode\": \"promiscuous-bridge\",\n \"maxPods\": 110,\n \"podPidsLimit\": -1,\n \"resolvConf\": \"/run/systemd/resolve/resolv.conf\",\n \"cpuCFSQuota\": true,\n \"cpuCFSQuotaPeriod\": \"100ms\",\n 
\"nodeStatusMaxImages\": 50,\n \"maxOpenFiles\": 1000000,\n \"contentType\": \"application/vnd.kubernetes.protobuf\",\n \"kubeAPIQPS\": 50,\n \"kubeAPIBurst\": 100,\n \"serializeImagePulls\": true,\n \"evictionHard\": {\n \"imagefs.available\": \"15%\",\n \"memory.available\": \"100Mi\",\n \"nodefs.available\": \"10%\",\n \"nodefs.inodesFree\": \"5%\",\n \"imagefs.inodesFree\": \"5%\"\n },\n \"evictionPressureTransitionPeriod\": \"1m0s\",\n \"mergeDefaultEvictionSettings\": false,\n \"enableControllerAttachDetach\": true,\n \"makeIPTablesUtilChains\": true,\n \"iptablesMasqueradeBit\": 14,\n \"iptablesDropBit\": 15,\n \"featureGates\": {\n \"AllAlpha\": false\n },\n \"failSwapOn\": false,\n \"memorySwap\": {},\n \"containerLogMaxSize\": \"10Mi\",\n \"containerLogMaxFiles\": 5,\n \"configMapAndSecretChangeDetectionStrategy\": \"Watch\",\n \"enforceNodeAllocatable\": [\n \"pods\"\n ],\n \"volumePluginDir\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n \"logging\": {\n \"format\": \"text\",\n \"flushFrequency\": \"5s\",\n \"verbosity\": 3,\n \"options\": {\n \"json\": {\n \"infoBufferSize\": \"0\"\n }\n }\n },\n \"enableSystemLogHandler\": true,\n \"enableSystemLogQuery\": false,\n \"shutdownGracePeriod\": \"0s\",\n \"shutdownGracePeriodCriticalPods\": \"0s\",\n \"enableProfilingHandler\": true,\n \"enableDebugFlagsHandler\": true,\n \"seccompDefault\": false,\n \"memoryThrottlingFactor\": 0.9,\n \"registerNode\": true,\n \"localStorageCapacityIsolation\": true,\n \"containerRuntimeEndpoint\": \"unix:///var/run/crio/crio.sock\"\n }\n}", "completion": "This json code Set Kubelet Parameters Via A Configuration File |", "language": "json", "type": "api-response_explanation", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23702"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: 
KubeletConfiguration\naddress: \"192.168.0.8\"\nport: 20250\nserializeImagePulls: false", "completion": "evictionHard:\n memory.available: \"100Mi\"\n nodefs.available: \"10%\"\n nodefs.inodesFree: \"5%\"\n imagefs.available: \"15%\"\n imagefs.inodesFree: \"5%\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_completion_23703"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\naddress: \"192.168.0.8\"\nport: 20250\nserializeImagePulls: false\nevictionHard:\n memory.available: \"100Mi\"\n nodefs.available: \"10%\"\n nodefs.inodesFree: \"5%\"\n imagefs.available: \"15%\"\n imagefs.inodesFree: \"5%\"", "completion": "This yaml code Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23704"} {"prompt": "Explain this bash code:\n\ncurl -X GET http://127.0.0.1:8001/api/v1/nodes//proxy/configz | jq .", "completion": "This bash code Set Kubelet Parameters Via A Configuration File |", "language": "bash", "type": "shell-command_explanation", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23705"} {"prompt": "Complete this json code:\n\n{\n \"kubeletconfig\": {\n \"enableServer\": true,\n \"staticPodPath\": \"/var/run/kubernetes/static-pods\",\n \"syncFrequency\": 
\"1m0s\",\n \"fileCheckFrequency\": \"20s\",\n \"httpCheckFrequency\": \"20s\",\n \"address\": \"192.168.1.16\",\n \"port\": 10250,\n \"readOnlyPort\": 10255,\n \"tlsCertFile\": \"/var/lib/kubelet/pki/kubelet.crt\",\n \"tlsPrivateKeyFile\": \"/var/lib/kubelet/pki/kubelet.key\",\n \"rotateCertificates\": true,\n \"authentication\": {\n \"x509\": {\n \"clientCAFile\": \"/var/run/kubernetes/client-ca.crt\"\n },\n \"webhook\": {\n \"enabled\": true,\n \"cacheTTL\": \"2m0s\"\n },\n \"anonymous\": {\n \"enabled\": true\n }\n },\n \"authorization\": {\n \"mode\": \"AlwaysAllow\",\n \"webhook\": {\n \"cacheAuthorizedTTL\": \"5m0s\",\n \"cacheUnauthorizedTTL\": \"30s\"\n }\n },\n \"registryPullQPS\": 5,\n \"registryBurst\": 10,\n \"eventRecordQPS\": 50,\n \"eventBurst\": 100,\n \"enableDebuggingHandlers\": true,\n \"healthzPort\": 10248,\n \"healthzBindAddress\": \"127.0.0.1\",\n \"oomScoreAdj\": -999,\n \"clusterDomain\": \"cluster.local\",\n \"clusterDNS\": [\n \"10.0.0.10\"\n ],\n \"streamingConnectionIdleTimeout\": \"4h0m0s\",\n \"nodeStatusUpdateFrequency\": \"10s\",\n \"nodeStatusReportFrequency\": \"5m0s\",\n \"nodeLeaseDurationSeconds\": 40,\n \"imageMinimumGCAge\": \"2m0s\",\n \"imageMaximumGCAge\": \"0s\",\n \"imageGCHighThresholdPercent\": 85,\n \"imageGCLowThresholdPercent\": 80,\n \"volumeStatsAggPeriod\": \"1m0s\",\n \"cgroupsPerQOS\": true,\n \"cgroupDriver\": \"systemd\",\n \"cpuManagerPolicy\": \"none\",\n \"cpuManagerReconcilePeriod\": \"10s\",\n \"memoryManagerPolicy\": \"None\",\n \"topologyManagerPolicy\": \"none\",\n \"topologyManagerScope\": \"container\",", "completion": " \"runtimeRequestTimeout\": \"2m0s\",\n \"hairpinMode\": \"promiscuous-bridge\",\n \"maxPods\": 110,\n \"podPidsLimit\": -1,\n \"resolvConf\": \"/run/systemd/resolve/resolv.conf\",\n \"cpuCFSQuota\": true,\n \"cpuCFSQuotaPeriod\": \"100ms\",\n \"nodeStatusMaxImages\": 50,\n \"maxOpenFiles\": 1000000,\n \"contentType\": \"application/vnd.kubernetes.protobuf\",\n \"kubeAPIQPS\": 50,\n 
\"kubeAPIBurst\": 100,\n \"serializeImagePulls\": true,\n \"evictionHard\": {\n \"imagefs.available\": \"15%\",\n \"memory.available\": \"100Mi\",\n \"nodefs.available\": \"10%\",\n \"nodefs.inodesFree\": \"5%\",\n \"imagefs.inodesFree\": \"5%\"\n },\n \"evictionPressureTransitionPeriod\": \"1m0s\",\n \"mergeDefaultEvictionSettings\": false,\n \"enableControllerAttachDetach\": true,\n \"makeIPTablesUtilChains\": true,\n \"iptablesMasqueradeBit\": 14,\n \"iptablesDropBit\": 15,\n \"featureGates\": {\n \"AllAlpha\": false\n },\n \"failSwapOn\": false,\n \"memorySwap\": {},\n \"containerLogMaxSize\": \"10Mi\",\n \"containerLogMaxFiles\": 5,\n \"configMapAndSecretChangeDetectionStrategy\": \"Watch\",\n \"enforceNodeAllocatable\": [\n \"pods\"\n ],\n \"volumePluginDir\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n \"logging\": {\n \"format\": \"text\",\n \"flushFrequency\": \"5s\",\n \"verbosity\": 3,\n \"options\": {\n \"json\": {\n \"infoBufferSize\": \"0\"\n }\n }\n },\n \"enableSystemLogHandler\": true,\n \"enableSystemLogQuery\": false,\n \"shutdownGracePeriod\": \"0s\",\n \"shutdownGracePeriodCriticalPods\": \"0s\",\n \"enableProfilingHandler\": true,\n \"enableDebugFlagsHandler\": true,\n \"seccompDefault\": false,\n \"memoryThrottlingFactor\": 0.9,\n \"registerNode\": true,\n \"localStorageCapacityIsolation\": true,\n \"containerRuntimeEndpoint\": \"unix:///var/run/crio/crio.sock\"\n }\n}", "language": "json", "type": "api-response", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_completion_23706"} {"prompt": "Explain this json code:\n\n{\n \"kubeletconfig\": {\n \"enableServer\": true,\n \"staticPodPath\": \"/var/run/kubernetes/static-pods\",\n \"syncFrequency\": \"1m0s\",\n \"fileCheckFrequency\": \"20s\",\n \"httpCheckFrequency\": \"20s\",\n \"address\": \"192.168.1.16\",\n \"port\": 10250,\n \"readOnlyPort\": 10255,\n 
\"tlsCertFile\": \"/var/lib/kubelet/pki/kubelet.crt\",\n \"tlsPrivateKeyFile\": \"/var/lib/kubelet/pki/kubelet.key\",\n \"rotateCertificates\": true,\n \"authentication\": {\n \"x509\": {\n \"clientCAFile\": \"/var/run/kubernetes/client-ca.crt\"\n },\n \"webhook\": {\n \"enabled\": true,\n \"cacheTTL\": \"2m0s\"\n },\n \"anonymous\": {\n \"enabled\": true\n }\n },\n \"authorization\": {\n \"mode\": \"AlwaysAllow\",\n \"webhook\": {\n \"cacheAuthorizedTTL\": \"5m0s\",\n \"cacheUnauthorizedTTL\": \"30s\"\n }\n },\n \"registryPullQPS\": 5,\n \"registryBurst\": 10,\n \"eventRecordQPS\": 50,\n \"eventBurst\": 100,\n \"enableDebuggingHandlers\": true,\n \"healthzPort\": 10248,\n \"healthzBindAddress\": \"127.0.0.1\",\n \"oomScoreAdj\": -999,\n \"clusterDomain\": \"cluster.local\",\n \"clusterDNS\": [\n \"10.0.0.10\"\n ],\n \"streamingConnectionIdleTimeout\": \"4h0m0s\",\n \"nodeStatusUpdateFrequency\": \"10s\",\n \"nodeStatusReportFrequency\": \"5m0s\",\n \"nodeLeaseDurationSeconds\": 40,\n \"imageMinimumGCAge\": \"2m0s\",\n \"imageMaximumGCAge\": \"0s\",\n \"imageGCHighThresholdPercent\": 85,\n \"imageGCLowThresholdPercent\": 80,\n \"volumeStatsAggPeriod\": \"1m0s\",\n \"cgroupsPerQOS\": true,\n \"cgroupDriver\": \"systemd\",\n \"cpuManagerPolicy\": \"none\",\n \"cpuManagerReconcilePeriod\": \"10s\",\n \"memoryManagerPolicy\": \"None\",\n \"topologyManagerPolicy\": \"none\",\n \"topologyManagerScope\": \"container\",\n \"runtimeRequestTimeout\": \"2m0s\",\n \"hairpinMode\": \"promiscuous-bridge\",\n \"maxPods\": 110,\n \"podPidsLimit\": -1,\n \"resolvConf\": \"/run/systemd/resolve/resolv.conf\",\n \"cpuCFSQuota\": true,\n \"cpuCFSQuotaPeriod\": \"100ms\",\n \"nodeStatusMaxImages\": 50,\n \"maxOpenFiles\": 1000000,\n \"contentType\": \"application/vnd.kubernetes.protobuf\",\n \"kubeAPIQPS\": 50,\n \"kubeAPIBurst\": 100,\n \"serializeImagePulls\": true,\n \"evictionHard\": {\n \"imagefs.available\": \"15%\",\n \"memory.available\": \"100Mi\",\n \"nodefs.available\": 
\"10%\",\n \"nodefs.inodesFree\": \"5%\",\n \"imagefs.inodesFree\": \"5%\"\n },\n \"evictionPressureTransitionPeriod\": \"1m0s\",\n \"mergeDefaultEvictionSettings\": false,\n \"enableControllerAttachDetach\": true,\n \"makeIPTablesUtilChains\": true,\n \"iptablesMasqueradeBit\": 14,\n \"iptablesDropBit\": 15,\n \"featureGates\": {\n \"AllAlpha\": false\n },\n \"failSwapOn\": false,\n \"memorySwap\": {},\n \"containerLogMaxSize\": \"10Mi\",\n \"containerLogMaxFiles\": 5,\n \"configMapAndSecretChangeDetectionStrategy\": \"Watch\",\n \"enforceNodeAllocatable\": [\n \"pods\"\n ],\n \"volumePluginDir\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n \"logging\": {\n \"format\": \"text\",\n \"flushFrequency\": \"5s\",\n \"verbosity\": 3,\n \"options\": {\n \"json\": {\n \"infoBufferSize\": \"0\"\n }\n }\n },\n \"enableSystemLogHandler\": true,\n \"enableSystemLogQuery\": false,\n \"shutdownGracePeriod\": \"0s\",\n \"shutdownGracePeriodCriticalPods\": \"0s\",\n \"enableProfilingHandler\": true,\n \"enableDebugFlagsHandler\": true,\n \"seccompDefault\": false,\n \"memoryThrottlingFactor\": 0.9,\n \"registerNode\": true,\n \"localStorageCapacityIsolation\": true,\n \"containerRuntimeEndpoint\": \"unix:///var/run/crio/crio.sock\"\n }\n}", "completion": "This json code Set Kubelet Parameters Via A Configuration File |", "language": "json", "type": "api-response_explanation", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23707"} {"prompt": "Complete this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\naddress: \"192.168.0.8\"\nport: 20250\nserializeImagePulls: false", "completion": "evictionHard:\n memory.available: \"100Mi\"\n nodefs.available: \"10%\"\n nodefs.inodesFree: \"5%\"\n imagefs.available: \"15%\"\n imagefs.inodesFree: \"5%\"", "language": "yaml", "type": "kubernetes-manifest", 
"context": "Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_completion_23708"} {"prompt": "Explain this yaml code:\n\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\naddress: \"192.168.0.8\"\nport: 20250\nserializeImagePulls: false\nevictionHard:\n memory.available: \"100Mi\"\n nodefs.available: \"10%\"\n nodefs.inodesFree: \"5%\"\n imagefs.available: \"15%\"\n imagefs.inodesFree: \"5%\"", "completion": "This yaml code Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Set Kubelet Parameters Via A Configuration File | In this example, the kubelet is configured with the following settings:", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23709"} {"prompt": "Explain this text code:\n\ncurl -X GET http://127.0.0.1:8001/api/v1/nodes//proxy/configz | jq .", "completion": "This text code Set Kubelet Parameters Via A Configuration File |", "language": "text", "type": "code-example_explanation", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23710"} {"prompt": "Complete this json code:\n\n{\n \"kubeletconfig\": {\n \"enableServer\": true,\n \"staticPodPath\": \"/var/run/kubernetes/static-pods\",\n \"syncFrequency\": \"1m0s\",\n \"fileCheckFrequency\": \"20s\",\n \"httpCheckFrequency\": \"20s\",\n \"address\": \"192.168.1.16\",\n \"port\": 10250,\n \"readOnlyPort\": 10255,\n \"tlsCertFile\": \"/var/lib/kubelet/pki/kubelet.crt\",\n \"tlsPrivateKeyFile\": \"/var/lib/kubelet/pki/kubelet.key\",\n \"rotateCertificates\": true,\n 
\"authentication\": {\n \"x509\": {\n \"clientCAFile\": \"/var/run/kubernetes/client-ca.crt\"\n },\n \"webhook\": {\n \"enabled\": true,\n \"cacheTTL\": \"2m0s\"\n },\n \"anonymous\": {\n \"enabled\": true\n }\n },\n \"authorization\": {\n \"mode\": \"AlwaysAllow\",\n \"webhook\": {\n \"cacheAuthorizedTTL\": \"5m0s\",\n \"cacheUnauthorizedTTL\": \"30s\"\n }\n },\n \"registryPullQPS\": 5,\n \"registryBurst\": 10,\n \"eventRecordQPS\": 50,\n \"eventBurst\": 100,\n \"enableDebuggingHandlers\": true,\n \"healthzPort\": 10248,\n \"healthzBindAddress\": \"127.0.0.1\",\n \"oomScoreAdj\": -999,\n \"clusterDomain\": \"cluster.local\",\n \"clusterDNS\": [\n \"10.0.0.10\"\n ],\n \"streamingConnectionIdleTimeout\": \"4h0m0s\",\n \"nodeStatusUpdateFrequency\": \"10s\",\n \"nodeStatusReportFrequency\": \"5m0s\",\n \"nodeLeaseDurationSeconds\": 40,\n \"imageMinimumGCAge\": \"2m0s\",\n \"imageMaximumGCAge\": \"0s\",\n \"imageGCHighThresholdPercent\": 85,\n \"imageGCLowThresholdPercent\": 80,\n \"volumeStatsAggPeriod\": \"1m0s\",\n \"cgroupsPerQOS\": true,\n \"cgroupDriver\": \"systemd\",\n \"cpuManagerPolicy\": \"none\",\n \"cpuManagerReconcilePeriod\": \"10s\",\n \"memoryManagerPolicy\": \"None\",\n \"topologyManagerPolicy\": \"none\",\n \"topologyManagerScope\": \"container\",", "completion": " \"runtimeRequestTimeout\": \"2m0s\",\n \"hairpinMode\": \"promiscuous-bridge\",\n \"maxPods\": 110,\n \"podPidsLimit\": -1,\n \"resolvConf\": \"/run/systemd/resolve/resolv.conf\",\n \"cpuCFSQuota\": true,\n \"cpuCFSQuotaPeriod\": \"100ms\",\n \"nodeStatusMaxImages\": 50,\n \"maxOpenFiles\": 1000000,\n \"contentType\": \"application/vnd.kubernetes.protobuf\",\n \"kubeAPIQPS\": 50,\n \"kubeAPIBurst\": 100,\n \"serializeImagePulls\": true,\n \"evictionHard\": {\n \"imagefs.available\": \"15%\",\n \"memory.available\": \"100Mi\",\n \"nodefs.available\": \"10%\",\n \"nodefs.inodesFree\": \"5%\",\n \"imagefs.inodesFree\": \"5%\"\n },\n \"evictionPressureTransitionPeriod\": \"1m0s\",\n 
\"mergeDefaultEvictionSettings\": false,\n \"enableControllerAttachDetach\": true,\n \"makeIPTablesUtilChains\": true,\n \"iptablesMasqueradeBit\": 14,\n \"iptablesDropBit\": 15,\n \"featureGates\": {\n \"AllAlpha\": false\n },\n \"failSwapOn\": false,\n \"memorySwap\": {},\n \"containerLogMaxSize\": \"10Mi\",\n \"containerLogMaxFiles\": 5,\n \"configMapAndSecretChangeDetectionStrategy\": \"Watch\",\n \"enforceNodeAllocatable\": [\n \"pods\"\n ],\n \"volumePluginDir\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n \"logging\": {\n \"format\": \"text\",\n \"flushFrequency\": \"5s\",\n \"verbosity\": 3,\n \"options\": {\n \"json\": {\n \"infoBufferSize\": \"0\"\n }\n }\n },\n \"enableSystemLogHandler\": true,\n \"enableSystemLogQuery\": false,\n \"shutdownGracePeriod\": \"0s\",\n \"shutdownGracePeriodCriticalPods\": \"0s\",\n \"enableProfilingHandler\": true,\n \"enableDebugFlagsHandler\": true,\n \"seccompDefault\": false,\n \"memoryThrottlingFactor\": 0.9,\n \"registerNode\": true,\n \"localStorageCapacityIsolation\": true,\n \"containerRuntimeEndpoint\": \"unix:///var/run/crio/crio.sock\"\n }\n}", "language": "json", "type": "api-response", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_completion_23711"} {"prompt": "Explain this json code:\n\n{\n \"kubeletconfig\": {\n \"enableServer\": true,\n \"staticPodPath\": \"/var/run/kubernetes/static-pods\",\n \"syncFrequency\": \"1m0s\",\n \"fileCheckFrequency\": \"20s\",\n \"httpCheckFrequency\": \"20s\",\n \"address\": \"192.168.1.16\",\n \"port\": 10250,\n \"readOnlyPort\": 10255,\n \"tlsCertFile\": \"/var/lib/kubelet/pki/kubelet.crt\",\n \"tlsPrivateKeyFile\": \"/var/lib/kubelet/pki/kubelet.key\",\n \"rotateCertificates\": true,\n \"authentication\": {\n \"x509\": {\n \"clientCAFile\": \"/var/run/kubernetes/client-ca.crt\"\n },\n \"webhook\": {\n \"enabled\": true,\n \"cacheTTL\": 
\"2m0s\"\n },\n \"anonymous\": {\n \"enabled\": true\n }\n },\n \"authorization\": {\n \"mode\": \"AlwaysAllow\",\n \"webhook\": {\n \"cacheAuthorizedTTL\": \"5m0s\",\n \"cacheUnauthorizedTTL\": \"30s\"\n }\n },\n \"registryPullQPS\": 5,\n \"registryBurst\": 10,\n \"eventRecordQPS\": 50,\n \"eventBurst\": 100,\n \"enableDebuggingHandlers\": true,\n \"healthzPort\": 10248,\n \"healthzBindAddress\": \"127.0.0.1\",\n \"oomScoreAdj\": -999,\n \"clusterDomain\": \"cluster.local\",\n \"clusterDNS\": [\n \"10.0.0.10\"\n ],\n \"streamingConnectionIdleTimeout\": \"4h0m0s\",\n \"nodeStatusUpdateFrequency\": \"10s\",\n \"nodeStatusReportFrequency\": \"5m0s\",\n \"nodeLeaseDurationSeconds\": 40,\n \"imageMinimumGCAge\": \"2m0s\",\n \"imageMaximumGCAge\": \"0s\",\n \"imageGCHighThresholdPercent\": 85,\n \"imageGCLowThresholdPercent\": 80,\n \"volumeStatsAggPeriod\": \"1m0s\",\n \"cgroupsPerQOS\": true,\n \"cgroupDriver\": \"systemd\",\n \"cpuManagerPolicy\": \"none\",\n \"cpuManagerReconcilePeriod\": \"10s\",\n \"memoryManagerPolicy\": \"None\",\n \"topologyManagerPolicy\": \"none\",\n \"topologyManagerScope\": \"container\",\n \"runtimeRequestTimeout\": \"2m0s\",\n \"hairpinMode\": \"promiscuous-bridge\",\n \"maxPods\": 110,\n \"podPidsLimit\": -1,\n \"resolvConf\": \"/run/systemd/resolve/resolv.conf\",\n \"cpuCFSQuota\": true,\n \"cpuCFSQuotaPeriod\": \"100ms\",\n \"nodeStatusMaxImages\": 50,\n \"maxOpenFiles\": 1000000,\n \"contentType\": \"application/vnd.kubernetes.protobuf\",\n \"kubeAPIQPS\": 50,\n \"kubeAPIBurst\": 100,\n \"serializeImagePulls\": true,\n \"evictionHard\": {\n \"imagefs.available\": \"15%\",\n \"memory.available\": \"100Mi\",\n \"nodefs.available\": \"10%\",\n \"nodefs.inodesFree\": \"5%\",\n \"imagefs.inodesFree\": \"5%\"\n },\n \"evictionPressureTransitionPeriod\": \"1m0s\",\n \"mergeDefaultEvictionSettings\": false,\n \"enableControllerAttachDetach\": true,\n \"makeIPTablesUtilChains\": true,\n \"iptablesMasqueradeBit\": 14,\n \"iptablesDropBit\": 
15,\n \"featureGates\": {\n \"AllAlpha\": false\n },\n \"failSwapOn\": false,\n \"memorySwap\": {},\n \"containerLogMaxSize\": \"10Mi\",\n \"containerLogMaxFiles\": 5,\n \"configMapAndSecretChangeDetectionStrategy\": \"Watch\",\n \"enforceNodeAllocatable\": [\n \"pods\"\n ],\n \"volumePluginDir\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n \"logging\": {\n \"format\": \"text\",\n \"flushFrequency\": \"5s\",\n \"verbosity\": 3,\n \"options\": {\n \"json\": {\n \"infoBufferSize\": \"0\"\n }\n }\n },\n \"enableSystemLogHandler\": true,\n \"enableSystemLogQuery\": false,\n \"shutdownGracePeriod\": \"0s\",\n \"shutdownGracePeriodCriticalPods\": \"0s\",\n \"enableProfilingHandler\": true,\n \"enableDebugFlagsHandler\": true,\n \"seccompDefault\": false,\n \"memoryThrottlingFactor\": 0.9,\n \"registerNode\": true,\n \"localStorageCapacityIsolation\": true,\n \"containerRuntimeEndpoint\": \"unix:///var/run/crio/crio.sock\"\n }\n}", "completion": "This json code Set Kubelet Parameters Via A Configuration File |", "language": "json", "type": "api-response_explanation", "context": "Set Kubelet Parameters Via A Configuration File |", "source": "https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/", "id": "k8s_00764_code_explain_23712"} {"prompt": "Explain this bash code:\n\nkubectl get resourceclaims -A\nkubectl get resourceclaimtemplates -A", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23713"} {"prompt": "Explain this bash code:\n\ndocker pull registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0", "completion": "This bash code Install Drivers and Allocate Devices with DRA | For this tutorial, you will 
install the critical example resource driver components individually with kubectl.", "language": "bash", "type": "shell-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | For this tutorial, you will install the critical example resource driver components individually with kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23714"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com", "completion": "spec:\n selectors:\n - cel: \n expression: \"device.driver == 'gpu.example.com'\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23715"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel: \n expression: \"device.driver == 'gpu.example.com'\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23716"} {"prompt": "Explain this bash code:\n\nkubectl apply 
--server-side -f https://k8s.io/examples/dra/driver-install/deviceclass.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23717"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: dra-example-driver-service-account", "completion": " namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23718"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: dra-example-driver-service-account\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "language": "yaml", "type": "service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23719"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f 
https://k8s.io/examples/dra/driver-install/serviceaccount.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23720"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceclaims\"]", "completion": " verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23721"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceclaims\"]\n verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": 
"https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23722"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/clusterrole.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23723"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: dra-example-driver-role-binding\nsubjects:\n- kind: ServiceAccount", "completion": " name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23724"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: dra-example-driver-role-binding\nsubjects:\n- kind: ServiceAccount\n name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. 
The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "language": "yaml", "type": "service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23725"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/clusterrolebinding.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23726"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. 
The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23727"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23728"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/priorityclass.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. 
The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23729"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: liveness\n # failureThreshold: 3", "completion": " # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n # Simulated number of devices the example driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n hostPath:\n path: 
\"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23730"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: liveness\n # failureThreshold: 3\n # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n # Simulated number of devices the example 
driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n hostPath:\n path: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23731"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/daemonset.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": 
"https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23732"} {"prompt": "Explain this bash code:\n\nkubectl get pod -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23733"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndra-example-driver-kubeletplugin-4sk2x 1/1 Running 0 13s\ndra-example-driver-kubeletplugin-cttr2 1/1 Running 0 13s", "completion": "This text code Install Drivers and Allocate Devices with DRA | The initial responsibility of each node's local DRA driver is to update the cluster with what devices are available to Pods on that node, by publishing its metadata to the ResourceSlices API. You can", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The initial responsibility of each node's local DRA driver is to update the cluster with what devices are available to Pods on that node, by publishing its metadata to the ResourceSlices API. You can", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23734"} {"prompt": "Explain this text code:\n\nNAME NODE DRIVER POOL AGE\nkind-worker-gpu.example.com-k69gd kind-worker gpu.example.com kind-worker 19s\nkind-worker2-gpu.example.com-qdgpn kind-worker2 gpu.example.com kind-worker2 19s", "completion": "This text code Install Drivers and Allocate Devices with DRA | At this point, you have successfully installed the example DRA driver, and confirmed its initial configuration. 
You're now ready to use DRA to schedule Pods.", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | At this point, you have successfully installed the example DRA driver, and confirmed its initial configuration. You're now ready to use DRA to schedule Pods.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23735"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:\n devices:", "completion": " requests:\n - name: some-gpu\n exactly:\n deviceClassName: gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23736"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:\n devices:\n requests:\n - name: some-gpu\n exactly:\n deviceClassName: gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. 
The local name for that claim, gpu, is then used in the spec.contai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23737"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/example/resourceclaim.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. 
The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23738"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:", "completion": " - name: ctr0\n image: ubuntu:24.04\n command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "language": "yaml", "type": "pod-definition", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23739"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:\n - name: ctr0\n image: ubuntu:24.04\n command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "language": "yaml", "type": "pod-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23740"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f https://k8s.io/examples/dra/driver-install/example/pod.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Confirm 
the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23741"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npod0 1/1 Running 0 9s", "completion": "This text code Install Drivers and Allocate Devices with DRA | After you create the Pod, the cluster tries to schedule that Pod to a node where Kubernetes can satisfy the ResourceClaim. In this tutorial, the DRA driver is deployed on all nodes, and is advertising", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | After you create the Pod, the cluster tries to schedule that Pod to a node where Kubernetes can satisfy the ResourceClaim. In this tutorial, the DRA driver is deployed on all nodes, and is advertising", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23742"} {"prompt": "Explain this bash code:\n\nkubectl logs pod0 -c ctr0 -n dra-tutorial | grep -E \"GPU_DEVICE_[0-9]+=\" | grep -v \"RESOURCE_CLAIM\"", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23743"} {"prompt": "Explain this text code:\n\nNAME STATE AGE\nsome-gpu allocated,reserved 34s", "completion": "This text code Install Drivers and Allocate Devices with DRA | In this output, the STATE column shows that the ResourceClaim is allocated and reserved.", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | In this output, the STATE column shows that the ResourceClaim is allocated and reserved.", "source": 
"https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23744"} {"prompt": "Explain this bash code:\n\nkubectl get resourceclaim some-gpu -n dra-tutorial -o yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this: 1apiVersion: resource.k8s.io/v1 2kind: ResourceClaim 3metadata: 4 creationTimestamp: \"2025-08-20T18:17:31Z\" 5 finalizers: 6 - resource.kubernetes.io/delete", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this: 1apiVersion: resource.k8s.io/v1 2kind: ResourceClaim 3metadata: 4 creationTimestamp: \"2025-08-20T18:17:31Z\" 5 finalizers: 6 - resource.kubernetes.io/delete", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23745"} {"prompt": "Complete this yaml code:\n\n1apiVersion: resource.k8s.io/v1\n 2kind: ResourceClaim\n 3metadata:\n 4 creationTimestamp: \"2025-08-20T18:17:31Z\"\n 5 finalizers:\n 6 - resource.kubernetes.io/delete-protection\n 7 name: some-gpu\n 8 namespace: dra-tutorial\n 9 resourceVersion: \"2326\"\n10 uid: d3e48dbf-40da-47c3-a7b9-f7d54d1051c3\n11spec:\n12 devices:\n13 requests:\n14 - exactly:\n15 allocationMode: ExactCount\n16 count: 1\n17 deviceClassName: gpu.example.com\n18 selectors:\n19 - cel:\n20 expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi'))", "completion": "21 >= 0\n22 name: some-gpu\n23status:\n24 allocation:\n25 devices:\n26 results:\n27 - device: gpu-0\n28 driver: gpu.example.com\n29 pool: kind-worker\n30 request: some-gpu\n31 nodeSelector:\n32 nodeSelectorTerms:\n33 - matchFields:\n34 - key: metadata.name\n35 operator: In\n36 values:\n37 - kind-worker\n38 reservedFor:\n39 - name: pod0\n40 resource: pods\n41 uid: c4dadf20-392a-474d-a47b-ab82080c8bd7", "language": "yaml", "type": "kubernetes-manifest", "context": "Install 
Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23746"} {"prompt": "Explain this yaml code:\n\n1apiVersion: resource.k8s.io/v1\n 2kind: ResourceClaim\n 3metadata:\n 4 creationTimestamp: \"2025-08-20T18:17:31Z\"\n 5 finalizers:\n 6 - resource.kubernetes.io/delete-protection\n 7 name: some-gpu\n 8 namespace: dra-tutorial\n 9 resourceVersion: \"2326\"\n10 uid: d3e48dbf-40da-47c3-a7b9-f7d54d1051c3\n11spec:\n12 devices:\n13 requests:\n14 - exactly:\n15 allocationMode: ExactCount\n16 count: 1\n17 deviceClassName: gpu.example.com\n18 selectors:\n19 - cel:\n20 expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi'))\n21 >= 0\n22 name: some-gpu\n23status:\n24 allocation:\n25 devices:\n26 results:\n27 - device: gpu-0\n28 driver: gpu.example.com\n29 pool: kind-worker\n30 request: some-gpu\n31 nodeSelector:\n32 nodeSelectorTerms:\n33 - matchFields:\n34 - key: metadata.name\n35 operator: In\n36 values:\n37 - kind-worker\n38 reservedFor:\n39 - name: pod0\n40 resource: pods\n41 uid: c4dadf20-392a-474d-a47b-ab82080c8bd7", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23747"} {"prompt": "Explain this bash code:\n\nkubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", 
"language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23748"} {"prompt": "Explain this text code:\n\nI0820 18:17:44.131324 1 driver.go:106] PrepareResourceClaims is called: number of claims: 1\nI0820 18:17:44.135056 1 driver.go:133] Returning newly prepared devices for claim 'd3e48dbf-40da-47c3-a7b9-f7d54d1051c3': [{[some-gpu] kind-worker gpu-0 [k8s.gpu.example.com/gpu=common k8s.gpu.example.com/gpu=d3e48dbf-40da-47c3-a7b9-f7d54d1051c3-gpu-0]}]", "completion": "This text code Install Drivers and Allocate Devices with DRA | You have now successfully deployed a Pod that claims devices using DRA, verified that the Pod was scheduled to an appropriate node, and saw that the associated DRA API kinds were updated with the all", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | You have now successfully deployed a Pod that claims devices using DRA, verified that the Pod was scheduled to an appropriate node, and saw that the associated DRA API kinds were updated with the all", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23749"} {"prompt": "Explain this bash code:\n\nkubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23750"} {"prompt": "Explain this text code:\n\nI0820 18:22:15.629376 1 driver.go:138] UnprepareResourceClaims is 
called: number of claims: 1", "completion": "This text code Install Drivers and Allocate Devices with DRA | You have now deleted a Pod that had a claim, and observed that the driver took action to unprepare the underlying hardware resource and update the DRA APIs to reflect that the resource is available ag", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | You have now deleted a Pod that had a claim, and observed that the driver took action to unprepare the underlying hardware resource and update the DRA APIs to reflect that the resource is available ag", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23751"} {"prompt": "Complete this bash code:\n\nkubectl delete namespace dra-tutorial\nkubectl delete deviceclass gpu.example.com", "completion": "kubectl delete clusterrole dra-example-driver-role\nkubectl delete clusterrolebinding dra-example-driver-role-binding\nkubectl delete priorityclass dra-driver-high-priority", "language": "bash", "type": "kubectl-command", "context": "Install Drivers and Allocate Devices with DRA |", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23752"} {"prompt": "Explain this bash code:\n\nkubectl delete namespace dra-tutorial\nkubectl delete deviceclass gpu.example.com\nkubectl delete clusterrole dra-example-driver-role\nkubectl delete clusterrolebinding dra-example-driver-role-binding\nkubectl delete priorityclass dra-driver-high-priority", "completion": "This bash code Install Drivers and Allocate Devices with DRA |", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA |", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23753"} {"prompt": "Explain this shell code:\n\nkubectl get resourceclaims 
-A\nkubectl get resourceclaimtemplates -A", "completion": "This shell code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23754"} {"prompt": "Explain this shell code:\n\ndocker pull registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0", "completion": "This shell code Install Drivers and Allocate Devices with DRA | For this tutorial, you will install the critical example resource driver components individually with kubectl.", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | For this tutorial, you will install the critical example resource driver components individually with kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23755"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com", "completion": "spec:\n selectors:\n - cel: \n expression: \"device.driver == 'gpu.example.com'\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23756"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel: \n expression: \"device.driver == 'gpu.example.com'\"", "completion": "This yaml code Install Drivers and Allocate Devices 
with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23757"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/deviceclass.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23758"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: dra-example-driver-service-account", "completion": " namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23759"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: 
ServiceAccount\nmetadata:\n name: dra-example-driver-service-account\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "language": "yaml", "type": "service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23760"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/serviceaccount.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23761"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceclaims\"]", "completion": " verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23762"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: 
[\"resource.k8s.io\"]\n resources: [\"resourceclaims\"]\n verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23763"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/clusterrole.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23764"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: dra-example-driver-role-binding\nsubjects:\n- kind: ServiceAccount", "completion": " name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. 
Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23765"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: dra-example-driver-role-binding\nsubjects:\n- kind: ServiceAccount\n name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "language": "yaml", "type": "service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23766"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/clusterrolebinding.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. 
Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23767"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23768"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. 
The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23769"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/priorityclass.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23770"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: 
liveness\n # failureThreshold: 3", "completion": " # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n # Simulated number of devices the example driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n hostPath:\n path: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23771"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - 
name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: liveness\n # failureThreshold: 3\n # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n # Simulated number of devices the example driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n hostPath:\n path: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": 
"https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23772"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/daemonset.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23773"} {"prompt": "Explain this shell code:\n\nkubectl get pod -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This shell code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23774"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ndra-example-driver-kubeletplugin-4sk2x 1/1 Running 0 13s\ndra-example-driver-kubeletplugin-cttr2 1/1 Running 0 13s", "completion": "This text code Install Drivers and Allocate Devices with DRA | The initial responsibility of each node's local DRA driver is to update the cluster with what devices are available to Pods on that node, by publishing its metadata to the ResourceSlices API. 
You can", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The initial responsibility of each node's local DRA driver is to update the cluster with what devices are available to Pods on that node, by publishing its metadata to the ResourceSlices API. You can", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23775"} {"prompt": "Explain this text code:\n\nNAME NODE DRIVER POOL AGE\nkind-worker-gpu.example.com-k69gd kind-worker gpu.example.com kind-worker 19s\nkind-worker2-gpu.example.com-qdgpn kind-worker2 gpu.example.com kind-worker2 19s", "completion": "This text code Install Drivers and Allocate Devices with DRA | At this point, you have successfully installed the example DRA driver, and confirmed its initial configuration. You're now ready to use DRA to schedule Pods.", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | At this point, you have successfully installed the example DRA driver, and confirmed its initial configuration. You're now ready to use DRA to schedule Pods.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23776"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:\n devices:", "completion": " requests:\n - name: some-gpu\n exactly:\n deviceClassName: gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. 
The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23777"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:\n devices:\n requests:\n - name: some-gpu\n exactly:\n deviceClassName: gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23778"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/example/resourceclaim.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. 
The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23779"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:", "completion": " - name: ctr0\n image: ubuntu:24.04\n command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "language": "yaml", "type": "pod-definition", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23780"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:\n - name: ctr0\n image: ubuntu:24.04\n command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "language": "yaml", "type": "pod-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23781"} {"prompt": "Explain this shell code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/example/pod.yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | Confirm 
the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23782"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\npod0 1/1 Running 0 9s", "completion": "This text code Install Drivers and Allocate Devices with DRA | After you create the Pod, the cluster tries to schedule that Pod to a node where Kubernetes can satisfy the ResourceClaim. In this tutorial, the DRA driver is deployed on all nodes, and is advertising", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | After you create the Pod, the cluster tries to schedule that Pod to a node where Kubernetes can satisfy the ResourceClaim. In this tutorial, the DRA driver is deployed on all nodes, and is advertising", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23783"} {"prompt": "Explain this shell code:\n\nkubectl logs pod0 -c ctr0 -n dra-tutorial | grep -E \"GPU_DEVICE_[0-9]+=\" | grep -v \"RESOURCE_CLAIM\"", "completion": "This shell code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23784"} {"prompt": "Explain this text code:\n\nNAME STATE AGE\nsome-gpu allocated,reserved 34s", "completion": "This text code Install Drivers and Allocate Devices with DRA | In this output, the STATE column shows that the ResourceClaim is allocated and reserved.", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | In this output, the STATE column shows that the ResourceClaim is allocated and reserved.", "source": 
"https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23785"} {"prompt": "Explain this shell code:\n\nkubectl get resourceclaim some-gpu -n dra-tutorial -o yaml", "completion": "This shell code Install Drivers and Allocate Devices with DRA | The output is similar to this: 1apiVersion: resource.k8s.io/v1 2kind: ResourceClaim 3metadata: 4 creationTimestamp: \"2025-08-20T18:17:31Z\" 5 finalizers: 6 - resource.kubernetes.io/delete", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this: 1apiVersion: resource.k8s.io/v1 2kind: ResourceClaim 3metadata: 4 creationTimestamp: \"2025-08-20T18:17:31Z\" 5 finalizers: 6 - resource.kubernetes.io/delete", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23786"} {"prompt": "Complete this yaml code:\n\n1apiVersion: resource.k8s.io/v1\n 2kind: ResourceClaim\n 3metadata:\n 4 creationTimestamp: \"2025-08-20T18:17:31Z\"\n 5 finalizers:\n 6 - resource.kubernetes.io/delete-protection\n 7 name: some-gpu\n 8 namespace: dra-tutorial\n 9 resourceVersion: \"2326\"\n10 uid: d3e48dbf-40da-47c3-a7b9-f7d54d1051c3\n11spec:\n12 devices:\n13 requests:\n14 - exactly:\n15 allocationMode: ExactCount\n16 count: 1\n17 deviceClassName: gpu.example.com\n18 selectors:\n19 - cel:\n20 expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi'))", "completion": "21 >= 0\n22 name: some-gpu\n23status:\n24 allocation:\n25 devices:\n26 results:\n27 - device: gpu-0\n28 driver: gpu.example.com\n29 pool: kind-worker\n30 request: some-gpu\n31 nodeSelector:\n32 nodeSelectorTerms:\n33 - matchFields:\n34 - key: metadata.name\n35 operator: In\n36 values:\n37 - kind-worker\n38 reservedFor:\n39 - name: pod0\n40 resource: pods\n41 uid: c4dadf20-392a-474d-a47b-ab82080c8bd7", "language": "yaml", "type": "kubernetes-manifest", "context": "Install 
Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23787"} {"prompt": "Explain this yaml code:\n\n1apiVersion: resource.k8s.io/v1\n 2kind: ResourceClaim\n 3metadata:\n 4 creationTimestamp: \"2025-08-20T18:17:31Z\"\n 5 finalizers:\n 6 - resource.kubernetes.io/delete-protection\n 7 name: some-gpu\n 8 namespace: dra-tutorial\n 9 resourceVersion: \"2326\"\n10 uid: d3e48dbf-40da-47c3-a7b9-f7d54d1051c3\n11spec:\n12 devices:\n13 requests:\n14 - exactly:\n15 allocationMode: ExactCount\n16 count: 1\n17 deviceClassName: gpu.example.com\n18 selectors:\n19 - cel:\n20 expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi'))\n21 >= 0\n22 name: some-gpu\n23status:\n24 allocation:\n25 devices:\n26 results:\n27 - device: gpu-0\n28 driver: gpu.example.com\n29 pool: kind-worker\n30 request: some-gpu\n31 nodeSelector:\n32 nodeSelectorTerms:\n33 - matchFields:\n34 - key: metadata.name\n35 operator: In\n36 values:\n37 - kind-worker\n38 reservedFor:\n39 - name: pod0\n40 resource: pods\n41 uid: c4dadf20-392a-474d-a47b-ab82080c8bd7", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23788"} {"prompt": "Explain this shell code:\n\nkubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This shell code Install Drivers and Allocate Devices with DRA | The output is similar to 
this:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23789"} {"prompt": "Explain this text code:\n\nI0820 18:17:44.131324 1 driver.go:106] PrepareResourceClaims is called: number of claims: 1\nI0820 18:17:44.135056 1 driver.go:133] Returning newly prepared devices for claim 'd3e48dbf-40da-47c3-a7b9-f7d54d1051c3': [{[some-gpu] kind-worker gpu-0 [k8s.gpu.example.com/gpu=common k8s.gpu.example.com/gpu=d3e48dbf-40da-47c3-a7b9-f7d54d1051c3-gpu-0]}]", "completion": "This text code Install Drivers and Allocate Devices with DRA | You have now successfully deployed a Pod that claims devices using DRA, verified that the Pod was scheduled to an appropriate node, and seen that the associated DRA API kinds were updated with the all", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | You have now successfully deployed a Pod that claims devices using DRA, verified that the Pod was scheduled to an appropriate node, and seen that the associated DRA API kinds were updated with the all", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23790"} {"prompt": "Explain this shell code:\n\nkubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This shell code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23791"} {"prompt": "Explain this text code:\n\nI0820 18:22:15.629376 1 driver.go:138] UnprepareResourceClaims
is called: number of claims: 1", "completion": "This text code Install Drivers and Allocate Devices with DRA | You have now deleted a Pod that had a claim, and observed that the driver took action to unprepare the underlying hardware resource and update the DRA APIs to reflect that the resource is available ag", "language": "text", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA | You have now deleted a Pod that had a claim, and observed that the driver took action to unprepare the underlying hardware resource and update the DRA APIs to reflect that the resource is available ag", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23792"} {"prompt": "Complete this shell code:\n\nkubectl delete namespace dra-tutorial\nkubectl delete deviceclass gpu.example.com", "completion": "kubectl delete clusterrole dra-example-driver-role\nkubectl delete clusterrolebinding dra-example-driver-role-binding\nkubectl delete priorityclass dra-driver-high-priority", "language": "shell", "type": "code-example", "context": "Install Drivers and Allocate Devices with DRA |", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23793"} {"prompt": "Explain this shell code:\n\nkubectl delete namespace dra-tutorial\nkubectl delete deviceclass gpu.example.com\nkubectl delete clusterrole dra-example-driver-role\nkubectl delete clusterrolebinding dra-example-driver-role-binding\nkubectl delete priorityclass dra-driver-high-priority", "completion": "This shell code Install Drivers and Allocate Devices with DRA |", "language": "shell", "type": "code-example_explanation", "context": "Install Drivers and Allocate Devices with DRA |", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23794"} {"prompt": "Explain this bash code:\n\nkubectl get resourceclaims 
-A\nkubectl get resourceclaimtemplates -A", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23795"} {"prompt": "Explain this bash code:\n\ndocker pull registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0", "completion": "This bash code Install Drivers and Allocate Devices with DRA | For this tutorial, you will install the critical example resource driver components individually with kubectl.", "language": "bash", "type": "shell-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | For this tutorial, you will install the critical example resource driver components individually with kubectl.", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23796"} {"prompt": "Complete this yaml code:\n\ndra/driver-install/deviceclass.yaml\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:", "completion": " name: gpu.example.com\nspec:\n selectors:\n - cel: \n expression: \"device.driver == 'gpu.example.com'\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23797"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/deviceclass.yaml\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel: \n expression: \"device.driver == 
'gpu.example.com'\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23798"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com", "completion": "spec:\n selectors:\n - cel: \n expression: \"device.driver == 'gpu.example.com'\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23799"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: DeviceClass\nmetadata:\n name: gpu.example.com\nspec:\n selectors:\n - cel: \n expression: \"device.driver == 'gpu.example.com'\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain 
permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23800"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/deviceclass.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ServiceAccount, ClusterRole and ClusterRoleBinding that will be used by the driver to gain permissions to interact with the Kubernetes API on this cluster:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23801"} {"prompt": "Complete this yaml code:\n\ndra/driver-install/serviceaccount.yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:", "completion": " name: dra-example-driver-service-account\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23802"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/serviceaccount.yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: dra-example-driver-service-account\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "language": "yaml", "type": 
"service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23803"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: dra-example-driver-service-account", "completion": " namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23804"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: dra-example-driver-service-account\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\n app.kubernetes.io/instance: dra-example-driver", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "language": "yaml", "type": "service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23805"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/serviceaccount.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRole:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23806"} {"prompt": "Complete this yaml 
code:\n\ndra/driver-install/clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: [\"resource.k8s.io\"]", "completion": " resources: [\"resourceclaims\"]\n verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23807"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/clusterrole.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceclaims\"]\n verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23808"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceclaims\"]", "completion": " verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: 
[\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23809"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: dra-example-driver-role\nrules:\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceclaims\"]\n verbs: [\"get\"]\n- apiGroups: [\"\"]\n resources: [\"nodes\"]\n verbs: [\"get\"]\n- apiGroups: [\"resource.k8s.io\"]\n resources: [\"resourceslices\"]\n verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23810"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/clusterrole.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create the ClusterRoleBinding:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23811"} {"prompt": "Complete this yaml code:\n\ndra/driver-install/clusterrolebinding.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: 
dra-example-driver-role-binding\nsubjects:", "completion": "- kind: ServiceAccount\n name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23812"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/clusterrolebinding.yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: dra-example-driver-role-binding\nsubjects:\n- kind: ServiceAccount\n name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "language": "yaml", "type": "service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims.
Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23813"} {"prompt": "Complete this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: dra-example-driver-role-binding\nsubjects:\n- kind: ServiceAccount", "completion": " name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "language": "yaml", "type": "service-definition", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23814"} {"prompt": "Explain this yaml code:\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: dra-example-driver-role-binding\nsubjects:\n- kind: ServiceAccount\n name: dra-example-driver-service-account\n namespace: dra-tutorial\nroleRef:\n kind: ClusterRole\n name: dra-example-driver-role\n apiGroup: rbac.authorization.k8s.io", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "language": "yaml", "type": "service-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims.
Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23815"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/clusterrolebinding.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Create a PriorityClass for the DRA driver. The PriorityClass prevents preemption of the DRA driver component, which is responsible for important lifecycle operations for Pods with claims. Learn more ab", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23816"} {"prompt": "Complete this yaml code:\n\ndra/driver-install/priorityclass.yaml\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above.
The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23817"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/priorityclass.yaml\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23818"} {"prompt": "Complete this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:", "completion": " name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. 
The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23819"} {"prompt": "Explain this yaml code:\n\napiVersion: scheduling.k8s.io/v1\nkind: PriorityClass\nmetadata:\n name: dra-driver-high-priority\nvalue: 1000000\nglobalDefault: false\ndescription: \"This priority class should be used for DRA driver pods only.\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23820"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/priorityclass.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. The DaemonSet has the permissions that you granted to the ServiceAccount", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Deploy the actual DRA driver as a DaemonSet configured to run the example driver binary with the permissions provisioned above. 
The DaemonSet has the permissions that you granted to the ServiceAccount", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23821"} {"prompt": "Complete this yaml code:\n\ndra/driver-install/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: liveness", "completion": " # failureThreshold: 3\n # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n # Simulated number of devices the example driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n 
hostPath:\n path: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23822"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/daemonset.yaml\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: liveness\n # failureThreshold: 3\n # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: 
metadata.namespace\n # Simulated number of devices the example driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n hostPath:\n path: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23823"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n 
imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: liveness\n # failureThreshold: 3", "completion": " # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n # Simulated number of devices the example driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n hostPath:\n path: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23824"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: dra-example-driver-kubeletplugin\n namespace: dra-tutorial\n labels:\n app.kubernetes.io/name: dra-example-driver\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: dra-example-driver\n updateStrategy:\n 
type: RollingUpdate\n template:\n metadata:\n labels:\n app.kubernetes.io/name: dra-example-driver\n spec:\n priorityClassName: dra-driver-high-priority\n serviceAccountName: dra-example-driver-service-account\n securityContext:\n {}\n containers:\n - name: plugin\n securityContext:\n privileged: true\n image: registry.k8s.io/dra-example-driver/dra-example-driver:v0.2.0\n imagePullPolicy: IfNotPresent\n command: [\"dra-example-kubeletplugin\"]\n resources:\n {}\n # Production drivers should always implement a liveness probe\n # For the tutorial we simply omit it\n # livenessProbe:\n # grpc:\n # port: 51515\n # service: liveness\n # failureThreshold: 3\n # periodSeconds: 10\n env:\n - name: CDI_ROOT\n value: /var/run/cdi\n - name: KUBELET_REGISTRAR_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins_registry\"\n - name: KUBELET_PLUGINS_DIRECTORY_PATH\n value: \"/var/lib/kubelet/plugins\"\n - name: NODE_NAME\n valueFrom:\n fieldRef:\n fieldPath: spec.nodeName\n - name: NAMESPACE\n valueFrom:\n fieldRef:\n fieldPath: metadata.namespace\n # Simulated number of devices the example driver will pretend to have.\n - name: NUM_DEVICES\n value: \"9\"\n - name: HEALTHCHECK_PORT\n value: \"51515\"\n volumeMounts:\n - name: plugins-registry\n mountPath: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n mountPath: \"/var/lib/kubelet/plugins\"\n - name: cdi\n mountPath: /var/run/cdi\n volumes:\n - name: plugins-registry\n hostPath:\n path: \"/var/lib/kubelet/plugins_registry\"\n - name: plugins\n hostPath:\n path: \"/var/lib/kubelet/plugins\"\n - name: cdi\n hostPath:\n path: /var/run/cdi", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate 
Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23825"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/daemonset.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The DaemonSet is configured with the volume mounts necessary to interact with the underlying Container Device Interface (CDI) directory, and to expose its socket to kubelet via the kubelet/plugins dir", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23826"} {"prompt": "Explain this bash code:\n\nkubectl get pod -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23827"} {"prompt": "Complete this yaml code:\n\ndra/driver-install/example/resourceclaim.yaml\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:", "completion": " devices:\n requests:\n - name: some-gpu\n exactly:\n deviceClassName: 
gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23828"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/example/resourceclaim.yaml\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:\n devices:\n requests:\n - name: some-gpu\n exactly:\n deviceClassName: gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. 
The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23829"} {"prompt": "Complete this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:\n devices:", "completion": " requests:\n - name: some-gpu\n exactly:\n deviceClassName: gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23830"} {"prompt": "Explain this yaml code:\n\napiVersion: resource.k8s.io/v1\nkind: ResourceClaim\nmetadata:\n name: some-gpu\n namespace: dra-tutorial\nspec:\n devices:\n requests:\n - name: some-gpu\n exactly:\n deviceClassName: gpu.example.com\n selectors:\n - cel:\n expression: \"device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0\"", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. 
The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23831"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/example/resourceclaim.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Below is the Pod manifest referencing the ResourceClaim you just made, some-gpu, in the spec.resourceClaims.resourceClaimName field. The local name for that claim, gpu, is then used in the spec.contai", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23832"} {"prompt": "Complete this yaml code:\n\ndra/driver-install/example/pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:", "completion": " - name: ctr0\n image: ubuntu:24.04\n command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "language": "yaml", "type": "pod-definition", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23833"} {"prompt": "Explain this yaml code:\n\ndra/driver-install/example/pod.yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:\n - name: ctr0\n image: ubuntu:24.04\n 
command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "language": "yaml", "type": "pod-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23834"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:", "completion": " - name: ctr0\n image: ubuntu:24.04\n command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "language": "yaml", "type": "pod-definition", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23835"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: pod0\n namespace: dra-tutorial\n labels:\n app: pod\nspec:\n containers:\n - name: ctr0\n image: ubuntu:24.04\n command: [\"bash\", \"-c\"]\n args: [\"export; trap 'exit 0' TERM; sleep 9999 & wait\"]\n resources:\n claims:\n - name: gpu\n resourceClaims:\n - name: gpu\n resourceClaimName: some-gpu", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "language": "yaml", "type": "pod-definition_explanation", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": 
"k8s_00765_code_explain_23836"} {"prompt": "Explain this bash code:\n\nkubectl apply --server-side -f http://k8s.io/examples/dra/driver-install/example/pod.yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | Confirm the pod has deployed:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23837"} {"prompt": "Explain this bash code:\n\nkubectl logs pod0 -c ctr0 -n dra-tutorial | grep -E \"GPU_DEVICE_[0-9]+=\" | grep -v \"RESOURCE_CLAIM\"", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23838"} {"prompt": "Explain this bash code:\n\nkubectl get resourceclaim some-gpu -n dra-tutorial -o yaml", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this: 1apiVersion: resource.k8s.io/v1 2kind: ResourceClaim 3metadata: 4 creationTimestamp: \"2025-08-20T18:17:31Z\" 5 finalizers: 6 - resource.kubernetes.io/delete", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this: 1apiVersion: resource.k8s.io/v1 2kind: ResourceClaim 3metadata: 4 creationTimestamp: \"2025-08-20T18:17:31Z\" 5 finalizers: 6 - resource.kubernetes.io/delete", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23839"} {"prompt": "Complete this yaml code:\n\n1apiVersion: resource.k8s.io/v1\n 2kind: ResourceClaim\n 3metadata:\n 4 
creationTimestamp: \"2025-08-20T18:17:31Z\"\n 5 finalizers:\n 6 - resource.kubernetes.io/delete-protection\n 7 name: some-gpu\n 8 namespace: dra-tutorial\n 9 resourceVersion: \"2326\"\n10 uid: d3e48dbf-40da-47c3-a7b9-f7d54d1051c3\n11spec:\n12 devices:\n13 requests:\n14 - exactly:\n15 allocationMode: ExactCount\n16 count: 1\n17 deviceClassName: gpu.example.com\n18 selectors:\n19 - cel:\n20 expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi'))", "completion": "21 >= 0\n22 name: some-gpu\n23status:\n24 allocation:\n25 devices:\n26 results:\n27 - device: gpu-0\n28 driver: gpu.example.com\n29 pool: kind-worker\n30 request: some-gpu\n31 nodeSelector:\n32 nodeSelectorTerms:\n33 - matchFields:\n34 - key: metadata.name\n35 operator: In\n36 values:\n37 - kind-worker\n38 reservedFor:\n39 - name: pod0\n40 resource: pods\n41 uid: c4dadf20-392a-474d-a47b-ab82080c8bd7", "language": "yaml", "type": "kubernetes-manifest", "context": "Install Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23840"} {"prompt": "Explain this yaml code:\n\n1apiVersion: resource.k8s.io/v1\n 2kind: ResourceClaim\n 3metadata:\n 4 creationTimestamp: \"2025-08-20T18:17:31Z\"\n 5 finalizers:\n 6 - resource.kubernetes.io/delete-protection\n 7 name: some-gpu\n 8 namespace: dra-tutorial\n 9 resourceVersion: \"2326\"\n10 uid: d3e48dbf-40da-47c3-a7b9-f7d54d1051c3\n11spec:\n12 devices:\n13 requests:\n14 - exactly:\n15 allocationMode: ExactCount\n16 count: 1\n17 deviceClassName: gpu.example.com\n18 selectors:\n19 - cel:\n20 expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi'))\n21 >= 0\n22 name: some-gpu\n23status:\n24 allocation:\n25 devices:\n26 results:\n27 - device: gpu-0\n28 driver: gpu.example.com\n29 pool: kind-worker\n30 request: some-gpu\n31 
nodeSelector:\n32 nodeSelectorTerms:\n33 - matchFields:\n34 - key: metadata.name\n35 operator: In\n36 values:\n37 - kind-worker\n38 reservedFor:\n39 - name: pod0\n40 resource: pods\n41 uid: c4dadf20-392a-474d-a47b-ab82080c8bd7", "completion": "This yaml code Install Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Install Drivers and Allocate Devices with DRA | To check how the driver handled device allocation, get the logs for the driver DaemonSet Pods:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23841"} {"prompt": "Explain this bash code:\n\nkubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23842"} {"prompt": "Explain this bash code:\n\nkubectl logs -l app.kubernetes.io/name=dra-example-driver -n dra-tutorial", "completion": "This bash code Install Drivers and Allocate Devices with DRA | The output is similar to this:", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA | The output is similar to this:", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23843"} {"prompt": "Complete this bash code:\n\nkubectl delete namespace dra-tutorial\nkubectl delete deviceclass gpu.example.com", "completion": "kubectl delete clusterrole dra-example-driver-role\nkubectl delete clusterrolebinding 
dra-example-driver-role-binding\nkubectl delete priorityclass dra-driver-high-priority", "language": "bash", "type": "kubectl-command", "context": "Install Drivers and Allocate Devices with DRA |", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_completion_23844"} {"prompt": "Explain this bash code:\n\nkubectl delete namespace dra-tutorial\nkubectl delete deviceclass gpu.example.com\nkubectl delete clusterrole dra-example-driver-role\nkubectl delete clusterrolebinding dra-example-driver-role-binding\nkubectl delete priorityclass dra-driver-high-priority", "completion": "This bash code Install Drivers and Allocate Devices with DRA |", "language": "bash", "type": "kubectl-command_explanation", "context": "Install Drivers and Allocate Devices with DRA |", "source": "https://kubernetes.io/docs/tutorials/cluster-management/install-use-dra/", "id": "k8s_00765_code_explain_23845"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:", "completion": " selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n type: LoadBalancer", "language": "yaml", "type": "service-definition", "context": "Create an External Load Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23846"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n type: LoadBalancer", "completion": "This yaml code Create an External Load Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "language": "yaml", "type": "service-definition_explanation", "context": "Create an External Load 
Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23847"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment example --port=8765 --target-port=9376 \\\n --name=example-service --type=LoadBalancer", "completion": "This bash code Create an External Load Balancer | This command creates a new Service using the same selectors as the referenced resource (in the case of the example above, a Deployment named example).", "language": "bash", "type": "kubectl-command_explanation", "context": "Create an External Load Balancer | This command creates a new Service using the same selectors as the referenced resource (in the case of the example above, a Deployment named example).", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23848"} {"prompt": "Complete this text code:\n\nName: example-service\nNamespace: default\nLabels: app=example\nAnnotations: \nSelector: app=example\nType: LoadBalancer\nIP Families: \nIP: 10.3.22.96", "completion": "IPs: 10.3.22.96\nLoadBalancer Ingress: 192.0.2.89\nPort: 8765/TCP\nTargetPort: 9376/TCP\nNodePort: 30593/TCP\nEndpoints: 172.17.0.3:9376\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: ", "language": "text", "type": "code-example", "context": "Create an External Load Balancer | The load balancer's IP address is listed next to LoadBalancer Ingress.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23849"} {"prompt": "Explain this text code:\n\nName: example-service\nNamespace: default\nLabels: app=example\nAnnotations: \nSelector: app=example\nType: LoadBalancer\nIP Families: \nIP: 10.3.22.96\nIPs: 10.3.22.96\nLoadBalancer Ingress: 
192.0.2.89\nPort: 8765/TCP\nTargetPort: 9376/TCP\nNodePort: 30593/TCP\nEndpoints: 172.17.0.3:9376\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: ", "completion": "This text code Create an External Load Balancer | The load balancer's IP address is listed next to LoadBalancer Ingress.", "language": "text", "type": "code-example_explanation", "context": "Create an External Load Balancer | The load balancer's IP address is listed next to LoadBalancer Ingress.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23850"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:", "completion": " app: example\n ports:\n - port: 8765\n targetPort: 9376\n externalTrafficPolicy: Local\n type: LoadBalancer", "language": "yaml", "type": "service-definition", "context": "Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23851"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n externalTrafficPolicy: Local\n type: LoadBalancer", "completion": "This yaml code Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "language": "yaml", "type": "service-definition_explanation", "context": "Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": 
"k8s_00766_code_explain_23852"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:", "completion": " selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n type: LoadBalancer", "language": "yaml", "type": "service-definition", "context": "Create an External Load Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23853"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n type: LoadBalancer", "completion": "This yaml code Create an External Load Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "language": "yaml", "type": "service-definition_explanation", "context": "Create an External Load Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23854"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment example --port=8765 --target-port=9376 \\\n --name=example-service --type=LoadBalancer", "completion": "This bash code Create an External Load Balancer | This command creates a new Service using the same selectors as the referenced resource (in the case of the example above, a Deployment named example).", "language": "bash", "type": "kubectl-command_explanation", "context": "Create an External Load Balancer | This command creates a new Service using the same selectors as the referenced resource (in the case of the example above, a Deployment named example).", "source": 
"https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23855"} {"prompt": "Complete this text code:\n\nName: example-service\nNamespace: default\nLabels: app=example\nAnnotations: \nSelector: app=example\nType: LoadBalancer\nIP Families: \nIP: 10.3.22.96", "completion": "IPs: 10.3.22.96\nLoadBalancer Ingress: 192.0.2.89\nPort: 8765/TCP\nTargetPort: 9376/TCP\nNodePort: 30593/TCP\nEndpoints: 172.17.0.3:9376\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: ", "language": "text", "type": "code-example", "context": "Create an External Load Balancer | The load balancer's IP address is listed next to LoadBalancer Ingress.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23856"} {"prompt": "Explain this text code:\n\nName: example-service\nNamespace: default\nLabels: app=example\nAnnotations: \nSelector: app=example\nType: LoadBalancer\nIP Families: \nIP: 10.3.22.96\nIPs: 10.3.22.96\nLoadBalancer Ingress: 192.0.2.89\nPort: 8765/TCP\nTargetPort: 9376/TCP\nNodePort: 30593/TCP\nEndpoints: 172.17.0.3:9376\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: ", "completion": "This text code Create an External Load Balancer | The load balancer's IP address is listed next to LoadBalancer Ingress.", "language": "text", "type": "code-example_explanation", "context": "Create an External Load Balancer | The load balancer's IP address is listed next to LoadBalancer Ingress.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23857"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:", "completion": " app: example\n ports:\n - port: 8765\n targetPort: 9376\n externalTrafficPolicy: Local\n type: LoadBalancer", "language": "yaml", "type": 
"service-definition", "context": "Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23858"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n externalTrafficPolicy: Local\n type: LoadBalancer", "completion": "This yaml code Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "language": "yaml", "type": "service-definition_explanation", "context": "Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23859"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:", "completion": " selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n type: LoadBalancer", "language": "yaml", "type": "service-definition", "context": "Create an External Load Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23860"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n type: LoadBalancer", "completion": "This yaml code Create an External Load Balancer | You can alternatively create the service with the kubectl 
expose command and its --type=LoadBalancer flag:", "language": "yaml", "type": "service-definition_explanation", "context": "Create an External Load Balancer | You can alternatively create the service with the kubectl expose command and its --type=LoadBalancer flag:", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23861"} {"prompt": "Explain this bash code:\n\nkubectl expose deployment example --port=8765 --target-port=9376 \\\n --name=example-service --type=LoadBalancer", "completion": "This bash code Create an External Load Balancer | This command creates a new Service using the same selectors as the referenced resource (in the case of the example above, a Deployment named example).", "language": "bash", "type": "kubectl-command_explanation", "context": "Create an External Load Balancer | This command creates a new Service using the same selectors as the referenced resource (in the case of the example above, a Deployment named example).", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23862"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:", "completion": " app: example\n ports:\n - port: 8765\n targetPort: 9376\n externalTrafficPolicy: Local\n type: LoadBalancer", "language": "yaml", "type": "service-definition", "context": "Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_completion_23863"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n name: example-service\nspec:\n selector:\n app: example\n ports:\n - port: 8765\n targetPort: 9376\n externalTrafficPolicy: 
Local\n type: LoadBalancer", "completion": "This yaml code Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "language": "yaml", "type": "service-definition_explanation", "context": "Create an External Load Balancer | Load balancing services from some cloud providers do not let you configure different weights for each target.", "source": "https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "id": "k8s_00766_code_explain_23864"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: cassandra\n name: cassandra", "completion": "spec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "language": "yaml", "type": "service-definition", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23865"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: cassandra\n name: cassandra\nspec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23866"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml", "completion": 
"This bash code Example: Deploying Cassandra with a StatefulSet | Get the Cassandra Service.", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Get the Cassandra Service.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23867"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ncassandra ClusterIP None 9042/TCP 45s", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | If you don't see a Service named cassandra, that means creation failed. Read Debug Services for help troubleshooting common issues.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | If you don't see a Service named cassandra, that means creation failed. Read Debug Services for help troubleshooting common issues.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23868"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: cassandra\n image: gcr.io/google-samples/cassandra:v13\n imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - -c\n - nodetool drain\n env:", "completion": " - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 
100M\n - name: CASSANDRA_SEEDS\n value: \"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23869"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: cassandra\n image: gcr.io/google-samples/cassandra:v13\n imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: 
tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - -c\n - nodetool drain\n env:\n - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 100M\n - name: CASSANDRA_SEEDS\n value: \"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "source": 
"https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23870"} {"prompt": "Explain this text code:\n\n# Use this if you are able to apply cassandra-statefulset.yaml unmodified\nkubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | If you need to modify cassandra-statefulset.yaml to suit your cluster, download https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply that manifest, from the folder you saved the modified version into", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | If you need to modify cassandra-statefulset.yaml to suit your cluster, download https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply that manifest, from the folder you saved the modified version into", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23871"} {"prompt": "Explain this text code:\n\n# Use this if you needed to modify cassandra-statefulset.yaml locally\nkubectl apply -f cassandra-statefulset.yaml", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Get the Cassandra StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Get the Cassandra StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23872"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT AGE\ncassandra 3 0 13s", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | The StatefulSet resource deploys Pods sequentially.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | The StatefulSet resource deploys Pods sequentially.", "source":
"https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23873"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncassandra-0 1/1 Running 0 1m\ncassandra-1 0/1 ContainerCreating 0 8s", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns output similar to:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns output similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23874"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\ncassandra-0 1/1 Running 0 10m", "completion": "cassandra-1 1/1 Running 0 9m\ncassandra-2 1/1 Running 0 8m", "language": "text", "type": "code-example", "context": "Example: Deploying Cassandra with a StatefulSet | Run the Cassandra nodetool inside the first Pod, to display the status of the ring.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23875"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncassandra-0 1/1 Running 0 10m\ncassandra-1 1/1 Running 0 9m\ncassandra-2 1/1 Running 0 8m", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Run the Cassandra nodetool inside the first Pod, to display the status of the ring.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Run the Cassandra nodetool inside the first Pod, to display the status of the ring.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23876"} {"prompt": "Complete this 
text code:\n\nDatacenter: DC1-K8Demo\n======================\nStatus=Up/Down\n|/ State=Normal/Leaving/Joining/Moving", "completion": "-- Address Load Tokens Owns (effective) Host ID Rack\nUN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo\nUN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo\nUN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo", "language": "text", "type": "code-example", "context": "Example: Deploying Cassandra with a StatefulSet | Use kubectl edit to modify the size of a Cassandra StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23877"} {"prompt": "Explain this text code:\n\nDatacenter: DC1-K8Demo\n======================\nStatus=Up/Down\n|/ State=Normal/Leaving/Joining/Moving\n-- Address Load Tokens Owns (effective) Host ID Rack\nUN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo\nUN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo\nUN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Use kubectl edit to modify the size of a Cassandra StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Use kubectl edit to modify the size of a Cassandra StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23878"} {"prompt": "Complete this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. 
If an error occurs while saving this file will be\n# reopened with the relevant failures.\n#\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n creationTimestamp: 2016-08-13T18:40:58Z", "completion": " generation: 1\n labels:\n app: cassandra\n name: cassandra\n namespace: default\n resourceVersion: \"323\"\n uid: 7a219483-6185-11e6-a910-42010a8a0fc0\nspec:\n replicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Example: Deploying Cassandra with a StatefulSet | Change the number of replicas to 4, and then save the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23879"} {"prompt": "Explain this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. If an error occurs while saving this file will be\n# reopened with the relevant failures.\n#\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n creationTimestamp: 2016-08-13T18:40:58Z\n generation: 1\n labels:\n app: cassandra\n name: cassandra\n namespace: default\n resourceVersion: \"323\"\n uid: 7a219483-6185-11e6-a910-42010a8a0fc0\nspec:\n replicas: 3", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Change the number of replicas to 4, and then save the manifest.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Change the number of replicas to 4, and then save the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23880"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT AGE\ncassandra 4 4 36m", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. 
This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23881"} {"prompt": "Complete this text code:\n\ngrace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \\\n && kubectl delete statefulset -l app=cassandra \\", "completion": " && echo \"Sleeping ${grace} seconds\" 1>&2 \\\n && sleep $grace \\\n && kubectl delete persistentvolumeclaim -l app=cassandra", "language": "text", "type": "code-example", "context": "Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23882"} {"prompt": "Explain this text code:\n\ngrace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \\\n && kubectl delete statefulset -l app=cassandra \\\n && echo \"Sleeping ${grace} seconds\" 1>&2 \\\n && sleep $grace \\\n && kubectl delete persistentvolumeclaim -l app=cassandra", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23883"} {"prompt": "Complete this yaml 
code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: cassandra\n name: cassandra", "completion": "spec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "language": "yaml", "type": "service-definition", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23884"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: cassandra\n name: cassandra\nspec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23885"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml", "completion": "This shell code Example: Deploying Cassandra with a StatefulSet | Get the Cassandra Service.", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Get the Cassandra Service.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23886"} {"prompt": "Explain this text code:\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ncassandra ClusterIP None 9042/TCP 45s", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | If you don't see a Service named 
cassandra, that means creation failed. Read Debug Services for help troubleshooting common issues.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | If you don't see a Service named cassandra, that means creation failed. Read Debug Services for help troubleshooting common issues.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23887"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: cassandra\n image: gcr.io/google-samples/cassandra:v13\n imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - -c\n - nodetool drain\n env:", "completion": " - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 100M\n - name: CASSANDRA_SEEDS\n value: \"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. 
They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23888"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: cassandra\n image: gcr.io/google-samples/cassandra:v13\n imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - -c\n - nodetool drain\n env:\n - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 100M\n - name: CASSANDRA_SEEDS\n value: 
\"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23889"} {"prompt": "Explain this shell code:\n\n# Use this if you are able to apply cassandra-statefulset.yaml unmodified\nkubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml", "completion": "This shell code Example: Deploying Cassandra with a StatefulSet | If you need to modify cassandra-statefulset.yaml to suit your cluster, download 
https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply that manifest, from the folder you saved the modified version into", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | If you need to modify cassandra-statefulset.yaml to suit your cluster, download https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply that manifest, from the folder you saved the modified version into", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23890"} {"prompt": "Explain this shell code:\n\n# Use this if you needed to modify cassandra-statefulset.yaml locally\nkubectl apply -f cassandra-statefulset.yaml", "completion": "This shell code Example: Deploying Cassandra with a StatefulSet | Get the Cassandra StatefulSet:", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Get the Cassandra StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23891"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT AGE\ncassandra 3 0 13s", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | The StatefulSet resource deploys Pods sequentially.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | The StatefulSet resource deploys Pods sequentially.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23892"} {"prompt": "Explain this shell code:\n\nNAME READY STATUS RESTARTS AGE\ncassandra-0 1/1 Running 0 1m\ncassandra-1 0/1 ContainerCreating 0 8s", "completion": "This shell code Example: Deploying Cassandra with a StatefulSet | It can take several minutes for all three Pods to deploy. 
Once they are deployed, the same command returns output similar to:", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns output similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23893"} {"prompt": "Complete this text code:\n\nNAME READY STATUS RESTARTS AGE\ncassandra-0 1/1 Running 0 10m", "completion": "cassandra-1 1/1 Running 0 9m\ncassandra-2 1/1 Running 0 8m", "language": "text", "type": "code-example", "context": "Example: Deploying Cassandra with a StatefulSet | Run the Cassandra nodetool inside the first Pod, to display the status of the ring.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23894"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncassandra-0 1/1 Running 0 10m\ncassandra-1 1/1 Running 0 9m\ncassandra-2 1/1 Running 0 8m", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Run the Cassandra nodetool inside the first Pod, to display the status of the ring.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Run the Cassandra nodetool inside the first Pod, to display the status of the ring.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23895"} {"prompt": "Complete this text code:\n\nDatacenter: DC1-K8Demo\n======================\nStatus=Up/Down\n|/ State=Normal/Leaving/Joining/Moving", "completion": "-- Address Load Tokens Owns (effective) Host ID Rack\nUN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo\nUN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo\nUN 172.17.0.6 84.74 KiB 32 
67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo", "language": "text", "type": "code-example", "context": "Example: Deploying Cassandra with a StatefulSet | Use kubectl edit to modify the size of a Cassandra StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23896"} {"prompt": "Explain this text code:\n\nDatacenter: DC1-K8Demo\n======================\nStatus=Up/Down\n|/ State=Normal/Leaving/Joining/Moving\n-- Address Load Tokens Owns (effective) Host ID Rack\nUN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo\nUN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo\nUN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Use kubectl edit to modify the size of a Cassandra StatefulSet.", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Use kubectl edit to modify the size of a Cassandra StatefulSet.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23897"} {"prompt": "Complete this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. 
If an error occurs while saving this file will be\n# reopened with the relevant failures.\n#\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n creationTimestamp: 2016-08-13T18:40:58Z", "completion": " generation: 1\n labels:\n app: cassandra\n name: cassandra\n namespace: default\n resourceVersion: \"323\"\n uid: 7a219483-6185-11e6-a910-42010a8a0fc0\nspec:\n replicas: 3", "language": "yaml", "type": "kubernetes-manifest", "context": "Example: Deploying Cassandra with a StatefulSet | Change the number of replicas to 4, and then save the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23898"} {"prompt": "Explain this yaml code:\n\n# Please edit the object below. Lines beginning with a '#' will be ignored,\n# and an empty file will abort the edit. If an error occurs while saving this file will be\n# reopened with the relevant failures.\n#\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n creationTimestamp: 2016-08-13T18:40:58Z\n generation: 1\n labels:\n app: cassandra\n name: cassandra\n namespace: default\n resourceVersion: \"323\"\n uid: 7a219483-6185-11e6-a910-42010a8a0fc0\nspec:\n replicas: 3", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Change the number of replicas to 4, and then save the manifest.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Change the number of replicas to 4, and then save the manifest.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23899"} {"prompt": "Explain this text code:\n\nNAME DESIRED CURRENT AGE\ncassandra 4 4 36m", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. 
This setting is for your safety because your data is more valuable than automatically purging all re", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all re", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23900"} {"prompt": "Complete this shell code:\n\ngrace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \\\n && kubectl delete statefulset -l app=cassandra \\", "completion": " && echo \"Sleeping ${grace} seconds\" 1>&2 \\\n && sleep $grace \\\n && kubectl delete persistentvolumeclaim -l app=cassandra", "language": "shell", "type": "code-example", "context": "Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23901"} {"prompt": "Explain this shell code:\n\ngrace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \\\n && kubectl delete statefulset -l app=cassandra \\\n && echo \"Sleeping ${grace} seconds\" 1>&2 \\\n && sleep $grace \\\n && kubectl delete persistentvolumeclaim -l app=cassandra", "completion": "This shell code Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "language": "shell", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23902"} {"prompt": "Complete this yaml 
code:\n\napplication/cassandra/cassandra-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: cassandra", "completion": " name: cassandra\nspec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "language": "yaml", "type": "service-definition", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23903"} {"prompt": "Explain this yaml code:\n\napplication/cassandra/cassandra-service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: cassandra\n name: cassandra\nspec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23904"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n app: cassandra\n name: cassandra", "completion": "spec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "language": "yaml", "type": "service-definition", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23905"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Service\nmetadata:\n labels:\n 
app: cassandra\n name: cassandra\nspec:\n clusterIP: None\n ports:\n - port: 9042\n selector:\n app: cassandra", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "language": "yaml", "type": "service-definition_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23906"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml", "completion": "This bash code Example: Deploying Cassandra with a StatefulSet | Get the Cassandra Service.", "language": "bash", "type": "kubectl-command_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Get the Cassandra Service.", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23907"} {"prompt": "Complete this yaml code:\n\napplication/cassandra/cassandra-statefulset.yaml\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: cassandra\n image: gcr.io/google-samples/cassandra:v13\n imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - 
-c\n - nodetool drain", "completion": " env:\n - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 100M\n - name: CASSANDRA_SEEDS\n value: \"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23908"} {"prompt": "Explain this yaml code:\n\napplication/cassandra/cassandra-statefulset.yaml\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: 
cassandra\n image: gcr.io/google-samples/cassandra:v13\n imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - -c\n - nodetool drain\n env:\n - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 100M\n - name: CASSANDRA_SEEDS\n value: \"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. 
They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23909"} {"prompt": "Complete this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: cassandra\n image: gcr.io/google-samples/cassandra:v13\n imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - -c\n - 
nodetool drain\n env:", "completion": " - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 100M\n - name: CASSANDRA_SEEDS\n value: \"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "language": "yaml", "type": "kubernetes-manifest", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23910"} {"prompt": "Explain this yaml code:\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: cassandra\n labels:\n app: cassandra\nspec:\n serviceName: cassandra\n replicas: 3\n selector:\n matchLabels:\n app: cassandra\n template:\n metadata:\n labels:\n app: cassandra\n spec:\n terminationGracePeriodSeconds: 500\n containers:\n - name: cassandra\n image: gcr.io/google-samples/cassandra:v13\n 
imagePullPolicy: Always\n ports:\n - containerPort: 7000\n name: intra-node\n - containerPort: 7001\n name: tls-intra-node\n - containerPort: 7199\n name: jmx\n - containerPort: 9042\n name: cql\n resources:\n limits:\n cpu: \"500m\"\n memory: 1Gi\n requests:\n cpu: \"500m\"\n memory: 1Gi\n securityContext:\n capabilities:\n add:\n - IPC_LOCK\n lifecycle:\n preStop:\n exec:\n command: \n - /bin/sh\n - -c\n - nodetool drain\n env:\n - name: MAX_HEAP_SIZE\n value: 512M\n - name: HEAP_NEWSIZE\n value: 100M\n - name: CASSANDRA_SEEDS\n value: \"cassandra-0.cassandra.default.svc.cluster.local\"\n - name: CASSANDRA_CLUSTER_NAME\n value: \"K8Demo\"\n - name: CASSANDRA_DC\n value: \"DC1-K8Demo\"\n - name: CASSANDRA_RACK\n value: \"Rack1-K8Demo\"\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n readinessProbe:\n exec:\n command:\n - /bin/bash\n - -c\n - /ready-probe.sh\n initialDelaySeconds: 15\n timeoutSeconds: 5\n # These volume mounts are persistent. They are like inline claims,\n # but not exactly because the names need to match exactly one of\n # the stateful pod volumes.\n volumeMounts:\n - name: cassandra-data\n mountPath: /cassandra_data\n # These are converted to volume claims by the controller\n # and mounted at the paths mentioned above.\n # do not use these in production until ssd GCEPersistentDisk or other ssd pd\n volumeClaimTemplates:\n - metadata:\n name: cassandra-data\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n storageClassName: fast\n resources:\n requests:\n storage: 1Gi\n---\nkind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\n name: fast\nprovisioner: k8s.io/minikube-hostpath\nparameters:\n type: pd-ssd", "completion": "This yaml code Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Create the Cassandra StatefulSet from 
the cassandra-statefulset.yaml file:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23911"} {"prompt": "Explain this text code:\n\n# Use this if you are able to apply cassandra-statefulset.yaml unmodified\nkubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | If you need to modify cassandra-statefulset.yaml to suit your cluster, download https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply that manifest, from the folder y", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | If you need to modify cassandra-statefulset.yaml to suit your cluster, download https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply that manifest, from the folder y", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23912"} {"prompt": "Explain this text code:\n\n# Use this if you needed to modify cassandra-statefulset.yaml locally\nkubectl apply -f cassandra-statefulset.yaml", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Get the Cassandra StatefulSet:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | Get the Cassandra StatefulSet:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23913"} {"prompt": "Explain this text code:\n\nNAME READY STATUS RESTARTS AGE\ncassandra-0 1/1 Running 0 1m\ncassandra-1 0/1 ContainerCreating 0 8s", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | It can take several minutes for all three Pods to deploy. 
Once they are deployed, the same command returns output similar to:", "language": "text", "type": "code-example_explanation", "context": "Example: Deploying Cassandra with a StatefulSet | It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns output similar to:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23914"} {"prompt": "Complete this text code:\n\ngrace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \\\n && kubectl delete statefulset -l app=cassandra \\", "completion": " && echo \"Sleeping ${grace} seconds\" 1>&2 \\\n && sleep $grace \\\n && kubectl delete persistentvolumeclaim -l app=cassandra", "language": "text", "type": "code-example", "context": "Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_completion_23917"} {"prompt": "Explain this text code:\n\ngrace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \\\n && kubectl delete statefulset -l app=cassandra \\\n && echo \"Sleeping ${grace} seconds\" 1>&2 \\\n && sleep $grace \\\n && kubectl delete persistentvolumeclaim -l app=cassandra", "completion": "This text code Example: Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "language": "text", "type": "code-example_explanation", "context": "Example: 
Deploying Cassandra with a StatefulSet | Run the following command to delete the Service you set up for Cassandra:", "source": "https://kubernetes.io/docs/tutorials/stateful-application/cassandra/", "id": "k8s_00767_code_explain_23918"} {"prompt": "Explain this bash code:\n\nkubectl get SampleDB # find configured databases\n\nkubectl edit SampleDB/example-database # manually change some settings", "completion": "This bash code Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "language": "bash", "type": "kubectl-command_explanation", "context": "Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/operator/", "id": "k8s_00769_code_explain_23919"} {"prompt": "Explain this shell code:\n\nkubectl get SampleDB # find configured databases\n\nkubectl edit SampleDB/example-database # manually change some settings", "completion": "This shell code Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "language": "shell", "type": "code-example_explanation", "context": "Operator pattern | …and that's it! The operator will take care of applying the changes as well as keeping the existing service in good shape.", "source": "https://kubernetes.io/docs/concepts/extend-kubernetes/operator/", "id": "k8s_00769_code_explain_23920"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n fsGroup: 2000\n supplementalGroups: [4000]\n volumes:", "completion": " - name: sec-ctx-vol\n emptyDir: {}\n containers:\n - name: sec-ctx-demo\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n volumeMounts:\n - name: sec-ctx-vol\n mountPath: /data/demo\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000. 
The runAsGroup field specifies the primary group ID of 3000 for all pr", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23922"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n fsGroup: 2000\n supplementalGroups: [4000]\n volumes:\n - name: sec-ctx-vol\n emptyDir: {}\n containers:\n - name: sec-ctx-demo\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n volumeMounts:\n - name: sec-ctx-vol\n mountPath: /data/demo\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000. The runAsGroup field specifies the primary group ID of 3000 for all pr", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000. 
The runAsGroup field specifies the primary group ID of 3000 for all pr", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23923"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context.yaml", "completion": "This bash code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23924"} {"prompt": "Complete this text code:\n\nPID USER TIME COMMAND\n 1 1000 0:00 sleep 1h", "completion": " 6 1000 0:00 sh\n...", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | In your shell, navigate to /data, and list the one directory:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23925"} {"prompt": "Explain this text code:\n\nPID USER TIME COMMAND\n 1 1000 0:00 sleep 1h\n 6 1000 0:00 sh\n...", "completion": "This text code Configure a Security Context for a Pod or Container | In your shell, navigate to /data, and list the one directory:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | In your shell, navigate to /data, and list the one directory:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23926"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000", "completion": " runAsGroup: 3000\n supplementalGroups: [4000]\n containers:\n - name: sec-ctx-demo\n 
image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | This Pod security context contains runAsUser, runAsGroup and supplementalGroups. However, you can see that the actual supplementary groups attached to the container process will include group IDs whic", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23927"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n supplementalGroups: [4000]\n containers:\n - name: sec-ctx-demo\n image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | This Pod security context contains runAsUser, runAsGroup and supplementalGroups. However, you can see that the actual supplementary groups attached to the container process will include group IDs whic", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | This Pod security context contains runAsUser, runAsGroup and supplementalGroups. 
However, you can see that the actual supplementary groups attached to the container process will include group IDs whic", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23928"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-5.yaml", "completion": "This bash code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23929"} {"prompt": "Explain this text code:\n\n...\nuser-defined-in-image:x:1000:\ngroup-defined-in-image:x:50000:user-defined-in-image", "completion": "This text code Configure a Security Context for a Pod or Container | Exit your shell:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23930"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000", "completion": " supplementalGroups: [4000]\n supplementalGroupsPolicy: Strict\n containers:\n - name: sec-ctx-demo\n image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | This pod manifest defines supplementalGroupsPolicy=Strict. 
You can see that no group memberships defined in /etc/group are merged to the supplementary groups for container processes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23931"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n supplementalGroups: [4000]\n supplementalGroupsPolicy: Strict\n containers:\n - name: sec-ctx-demo\n image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | This pod manifest defines supplementalGroupsPolicy=Strict. You can see that no group memberships defined in /etc/group are merged to the supplementary groups for container processes.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | This pod manifest defines supplementalGroupsPolicy=Strict. 
You can see that no group memberships defined in /etc/group are merged to the supplementary groups for container processes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23932"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-6.yaml", "completion": "This bash code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23933"} {"prompt": "Complete this text code:\n\n...\nstatus:\n containerStatuses:\n - name: sec-ctx-demo\n user:\n linux:", "completion": " gid: 3000\n supplementalGroups:\n - 3000\n - 4000\n uid: 1000\n...", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | The following container runtimes are known to support fine-grained SupplementalGroups control.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23934"} {"prompt": "Explain this text code:\n\n...\nstatus:\n containerStatuses:\n - name: sec-ctx-demo\n user:\n linux:\n gid: 3000\n supplementalGroups:\n - 3000\n - 4000\n uid: 1000\n...", "completion": "This text code Configure a Security Context for a Pod or Container | The following container runtimes are known to support fine-grained SupplementalGroups control.", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | The following container runtimes are known to support fine-grained SupplementalGroups control.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23935"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Node\n...", "completion": "status:\n features:\n supplementalGroupsPolicy: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | At this alpha release(from v1.31 to v1.32), when a pod with SupplementalGroupsPolicy=Strict are scheduled to a node that does NOT support this feature(i.e. .status.features.supplementalGroupsPolicy=fa", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23936"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Node\n...\nstatus:\n features:\n supplementalGroupsPolicy: true", "completion": "This yaml code Configure a Security Context for a Pod or Container | At this alpha release(from v1.31 to v1.32), when a pod with SupplementalGroupsPolicy=Strict are scheduled to a node that does NOT support this feature(i.e. .status.features.supplementalGroupsPolicy=fa", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Security Context for a Pod or Container | At this alpha release(from v1.31 to v1.32), when a pod with SupplementalGroupsPolicy=Strict are scheduled to a node that does NOT support this feature(i.e. 
.status.features.supplementalGroupsPolicy=fa", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23937"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Event\n...\ntype: Warning\nreason: SupplementalGroupsPolicyNotSupported", "completion": "message: \"SupplementalGroupsPolicy=Strict is not supported in this node\"\ninvolvedObject:\n apiVersion: v1\n kind: Pod\n ...", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. For large vo", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23938"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Event\n...\ntype: Warning\nreason: SupplementalGroupsPolicyNotSupported\nmessage: \"SupplementalGroupsPolicy=Strict is not supported in this node\"\ninvolvedObject:\n apiVersion: v1\n kind: Pod\n ...", "completion": "This yaml code Configure a Security Context for a Pod or Container | By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. For large vo", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. 
For large vo", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23939"} {"prompt": "Complete this text code:\n\nsecurityContext:\n runAsUser: 1000", "completion": " runAsGroup: 3000\n fsGroup: 2000\n fsGroupChangePolicy: \"OnRootMismatch\"", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | If you deploy a Container Storage Interface (CSI) driver which supports the VOLUME_MOUNT_GROUP NodeServiceCapability, the process of setting file ownership and permissions based on the fsGroup specifi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23940"} {"prompt": "Explain this text code:\n\nsecurityContext:\n runAsUser: 1000\n runAsGroup: 3000\n fsGroup: 2000\n fsGroupChangePolicy: \"OnRootMismatch\"", "completion": "This text code Configure a Security Context for a Pod or Container | If you deploy a Container Storage Interface (CSI) driver which supports the VOLUME_MOUNT_GROUP NodeServiceCapability, the process of setting file ownership and permissions based on the fsGroup specifi", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | If you deploy a Container Storage Interface (CSI) driver which supports the VOLUME_MOUNT_GROUP NodeServiceCapability, the process of setting file ownership and permissions based on the fsGroup specifi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23941"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-2\nspec:\n securityContext:", "completion": " runAsUser: 1000\n containers:\n - name: sec-ctx-demo-2\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n runAsUser: 2000\n allowPrivilegeEscalation: false", "language": "yaml", "type": 
"pod-definition", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23942"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-2\nspec:\n securityContext:\n runAsUser: 1000\n containers:\n - name: sec-ctx-demo-2\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n runAsUser: 2000\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23943"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-2.yaml", "completion": "This bash code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23944"} {"prompt": "Complete this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n2000 1 0.0 0.0 4336 764 ? Ss 20:36 0:00 /bin/sh -c node server.js", "completion": "2000 8 0.1 0.5 772124 22604 ? 
Sl 20:36 0:00 node server.js\n...", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23945"} {"prompt": "Explain this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n2000 1 0.0 0.0 4336 764 ? Ss 20:36 0:00 /bin/sh -c node server.js\n2000 8 0.1 0.5 772124 22604 ? Sl 20:36 0:00 node server.js\n...", "completion": "This text code Configure a Security Context for a Pod or Container | Exit your shell:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23946"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-3", "completion": "spec:\n containers:\n - name: sec-ctx-3\n image: gcr.io/google-samples/hello-app:2.0", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23947"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-3\nspec:\n containers:\n - name: sec-ctx-3\n image: gcr.io/google-samples/hello-app:2.0", "completion": "This yaml code Configure a Security Context for a Pod or Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23948"} {"prompt": "Explain this bash code:\n\nkubectl apply -f 
https://k8s.io/examples/pods/security/security-context-3.yaml", "completion": "This bash code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23949"} {"prompt": "Explain this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 0.0 0.0 4336 796 ? Ss 18:17 0:00 /bin/sh -c node server.js\nroot 5 0.1 0.5 772124 22700 ? Sl 18:17 0:00 node server.js", "completion": "This text code Configure a Security Context for a Pod or Container | In your shell, view the status for process 1:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | In your shell, view the status for process 1:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23950"} {"prompt": "Complete this text code:\n\n...\nCapPrm:\t00000000a80425fb", "completion": "CapEff:\t00000000a80425fb\n...", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Make a note of the capabilities bitmap, and then exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23951"} {"prompt": "Explain this text code:\n\n...\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\n...", "completion": "This text code Configure a Security Context for a Pod or Container | Make a note of the capabilities bitmap, and then exit your shell:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Make a note of the capabilities bitmap, and then 
exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23952"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-4\nspec:", "completion": " containers:\n - name: sec-ctx-4\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n capabilities:\n add: [\"NET_ADMIN\", \"SYS_TIME\"]", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23953"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-4\nspec:\n containers:\n - name: sec-ctx-4\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n capabilities:\n add: [\"NET_ADMIN\", \"SYS_TIME\"]", "completion": "This yaml code Configure a Security Context for a Pod or Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23954"} {"prompt": "Explain this bash code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-4.yaml", "completion": "This bash code Configure a Security Context for a Pod or Container | Get a shell into the running Container:", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Security Context for a Pod or Container | Get a shell into the running Container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23955"} {"prompt": "Complete this text code:\n\n...\nCapPrm:\t00000000aa0435fb", "completion": "CapEff:\t00000000aa0435fb\n...", "language": 
"text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Compare the capabilities of the two Containers:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23956"} {"prompt": "Explain this text code:\n\n...\nCapPrm:\t00000000aa0435fb\nCapEff:\t00000000aa0435fb\n...", "completion": "This text code Configure a Security Context for a Pod or Container | Compare the capabilities of the two Containers:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Compare the capabilities of the two Containers:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23957"} {"prompt": "Complete this text code:\n\n...\nsecurityContext:", "completion": " seccompProfile:\n type: RuntimeDefault", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the Seccomp profile to a pre-configured file at /seccomp/my-profiles/profile-allow.json:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23958"} {"prompt": "Explain this text code:\n\n...\nsecurityContext:\n seccompProfile:\n type: RuntimeDefault", "completion": "This text code Configure a Security Context for a Pod or Container | Here is an example that sets the Seccomp profile to a pre-configured file at /seccomp/my-profiles/profile-allow.json:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the Seccomp profile to a pre-configured file at /seccomp/my-profiles/profile-allow.json:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23959"} {"prompt": "Complete this text 
code:\n\n...\nsecurityContext:", "completion": " seccompProfile:\n type: Localhost\n localhostProfile: my-profiles/profile-allow.json", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. The appArmorProfile field is a AppArmorProfile object consisting of typ", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23960"} {"prompt": "Explain this text code:\n\n...\nsecurityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profiles/profile-allow.json", "completion": "This text code Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. The appArmorProfile field is a AppArmorProfile object consisting of typ", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. 
The appArmorProfile field is a AppArmorProfile object consisting of typ", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23961"} {"prompt": "Complete this text code:\n\n...\ncontainers:\n- name: container-1", "completion": " securityContext:\n appArmorProfile:\n type: RuntimeDefault", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the AppArmor profile to a pre-configured profile named k8s-apparmor-example-deny-write:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23962"} {"prompt": "Explain this text code:\n\n...\ncontainers:\n- name: container-1\n securityContext:\n appArmorProfile:\n type: RuntimeDefault", "completion": "This text code Configure a Security Context for a Pod or Container | Here is an example that sets the AppArmor profile to a pre-configured profile named k8s-apparmor-example-deny-write:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the AppArmor profile to a pre-configured profile named k8s-apparmor-example-deny-write:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23963"} {"prompt": "Complete this text code:\n\n...\ncontainers:\n- name: container-1", "completion": " securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | For more details please see, Restrict a Container's Access to Resources with AppArmor.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23964"} {"prompt": "Explain this text 
code:\n\n...\ncontainers:\n- name: container-1\n securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write", "completion": "This text code Configure a Security Context for a Pod or Container | For more details please see, Restrict a Container's Access to Resources with AppArmor.", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | For more details please see, Restrict a Container's Access to Resources with AppArmor.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23965"} {"prompt": "Complete this text code:\n\n...\nsecurityContext:", "completion": " seLinuxOptions:\n level: \"s0:c123,c456\"", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Kubernetes v1.27 introduced an early limited form of this behavior that was only applicable to volumes (and PersistentVolumeClaims) using the ReadWriteOncePod access mode.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23966"} {"prompt": "Explain this text code:\n\n...\nsecurityContext:\n seLinuxOptions:\n level: \"s0:c123,c456\"", "completion": "This text code Configure a Security Context for a Pod or Container | Kubernetes v1.27 introduced an early limited form of this behavior that was only applicable to volumes (and PersistentVolumeClaims) using the ReadWriteOncePod access mode.", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Kubernetes v1.27 introduced an early limited form of this behavior that was only applicable to volumes (and PersistentVolumeClaims) using the ReadWriteOncePod access mode.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23967"} 
{"prompt": "Complete this bash code:\n\nkubectl delete pod security-context-demo\nkubectl delete pod security-context-demo-2", "completion": "kubectl delete pod security-context-demo-3\nkubectl delete pod security-context-demo-4", "language": "bash", "type": "kubectl-command", "context": "Configure a Security Context for a Pod or Container | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23968"} {"prompt": "Explain this bash code:\n\nkubectl delete pod security-context-demo\nkubectl delete pod security-context-demo-2\nkubectl delete pod security-context-demo-3\nkubectl delete pod security-context-demo-4", "completion": "This bash code Configure a Security Context for a Pod or Container | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or proj", "language": "bash", "type": "kubectl-command_explanation", "context": "Configure a Security Context for a Pod or Container | Items on this page refer to third party products or projects that provide functionality required by Kubernetes. 
The Kubernetes project authors aren't responsible for those third-party products or proj", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23969"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n fsGroup: 2000\n supplementalGroups: [4000]\n volumes:", "completion": " - name: sec-ctx-vol\n emptyDir: {}\n containers:\n - name: sec-ctx-demo\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n volumeMounts:\n - name: sec-ctx-vol\n mountPath: /data/demo\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000. The runAsGroup field specifies the primary group ID of 3000 for all pr", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23970"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n fsGroup: 2000\n supplementalGroups: [4000]\n volumes:\n - name: sec-ctx-vol\n emptyDir: {}\n containers:\n - name: sec-ctx-demo\n image: busybox:1.28\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n volumeMounts:\n - name: sec-ctx-vol\n mountPath: /data/demo\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000. 
The runAsGroup field specifies the primary group ID of 3000 for all pr", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000. The runAsGroup field specifies the primary group ID of 3000 for all pr", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23971"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context.yaml", "completion": "This shell code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23972"} {"prompt": "Complete this none code:\n\nPID USER TIME COMMAND\n 1 1000 0:00 sleep 1h", "completion": " 6 1000 0:00 sh\n...", "language": "none", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | In your shell, navigate to /data, and list the one directory:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23973"} {"prompt": "Explain this none code:\n\nPID USER TIME COMMAND\n 1 1000 0:00 sleep 1h\n 6 1000 0:00 sh\n...", "completion": "This none code Configure a Security Context for a Pod or Container | In your shell, navigate to /data, and list the one directory:", "language": "none", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | In your shell, navigate to /data, and list the one directory:", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23974"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000", "completion": " runAsGroup: 3000\n supplementalGroups: [4000]\n containers:\n - name: sec-ctx-demo\n image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | This Pod security context contains runAsUser, runAsGroup and supplementalGroups. However, you can see that the actual supplementary groups attached to the container process will include group IDs whic", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23975"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n supplementalGroups: [4000]\n containers:\n - name: sec-ctx-demo\n image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | This Pod security context contains runAsUser, runAsGroup and supplementalGroups. However, you can see that the actual supplementary groups attached to the container process will include group IDs whic", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | This Pod security context contains runAsUser, runAsGroup and supplementalGroups. 
However, you can see that the actual supplementary groups attached to the container process will include group IDs whic", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23976"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-5.yaml", "completion": "This shell code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23977"} {"prompt": "Explain this none code:\n\n...\nuser-defined-in-image:x:1000:\ngroup-defined-in-image:x:50000:user-defined-in-image", "completion": "This none code Configure a Security Context for a Pod or Container | Exit your shell:", "language": "none", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23978"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000", "completion": " supplementalGroups: [4000]\n supplementalGroupsPolicy: Strict\n containers:\n - name: sec-ctx-demo\n image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | This pod manifest defines supplementalGroupsPolicy=Strict. 
You can see that no group memberships defined in /etc/group are merged to the supplementary groups for container processes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23979"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo\nspec:\n securityContext:\n runAsUser: 1000\n runAsGroup: 3000\n supplementalGroups: [4000]\n supplementalGroupsPolicy: Strict\n containers:\n - name: sec-ctx-demo\n image: registry.k8s.io/e2e-test-images/agnhost:2.45\n command: [ \"sh\", \"-c\", \"sleep 1h\" ]\n securityContext:\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | This pod manifest defines supplementalGroupsPolicy=Strict. You can see that no group memberships defined in /etc/group are merged to the supplementary groups for container processes.", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | This pod manifest defines supplementalGroupsPolicy=Strict. 
You can see that no group memberships defined in /etc/group are merged to the supplementary groups for container processes.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23980"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-6.yaml", "completion": "This shell code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23981"} {"prompt": "Complete this none code:\n\n...\nstatus:\n containerStatuses:\n - name: sec-ctx-demo\n user:\n linux:", "completion": " gid: 3000\n supplementalGroups:\n - 3000\n - 4000\n uid: 1000\n...", "language": "none", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | The following container runtimes are known to support fine-grained SupplementalGroups control.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23982"} {"prompt": "Explain this none code:\n\n...\nstatus:\n containerStatuses:\n - name: sec-ctx-demo\n user:\n linux:\n gid: 3000\n supplementalGroups:\n - 3000\n - 4000\n uid: 1000\n...", "completion": "This none code Configure a Security Context for a Pod or Container | The following container runtimes are known to support fine-grained SupplementalGroups control.", "language": "none", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | The following container runtimes are known to support fine-grained SupplementalGroups control.", "source": 
"https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23983"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Node\n...", "completion": "status:\n features:\n supplementalGroupsPolicy: true", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | At this alpha release(from v1.31 to v1.32), when a pod with SupplementalGroupsPolicy=Strict are scheduled to a node that does NOT support this feature(i.e. .status.features.supplementalGroupsPolicy=fa", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23984"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Node\n...\nstatus:\n features:\n supplementalGroupsPolicy: true", "completion": "This yaml code Configure a Security Context for a Pod or Container | At this alpha release(from v1.31 to v1.32), when a pod with SupplementalGroupsPolicy=Strict are scheduled to a node that does NOT support this feature(i.e. .status.features.supplementalGroupsPolicy=fa", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Security Context for a Pod or Container | At this alpha release(from v1.31 to v1.32), when a pod with SupplementalGroupsPolicy=Strict are scheduled to a node that does NOT support this feature(i.e. 
.status.features.supplementalGroupsPolicy=fa", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23985"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Event\n...\ntype: Warning\nreason: SupplementalGroupsPolicyNotSupported", "completion": "message: \"SupplementalGroupsPolicy=Strict is not supported in this node\"\ninvolvedObject:\n apiVersion: v1\n kind: Pod\n ...", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. For large vo", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23986"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Event\n...\ntype: Warning\nreason: SupplementalGroupsPolicyNotSupported\nmessage: \"SupplementalGroupsPolicy=Strict is not supported in this node\"\ninvolvedObject:\n apiVersion: v1\n kind: Pod\n ...", "completion": "This yaml code Configure a Security Context for a Pod or Container | By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. For large vo", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. 
For large vo", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23987"} {"prompt": "Complete this yaml code:\n\nsecurityContext:\n runAsUser: 1000", "completion": " runAsGroup: 3000\n fsGroup: 2000\n fsGroupChangePolicy: \"OnRootMismatch\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | If you deploy a Container Storage Interface (CSI) driver which supports the VOLUME_MOUNT_GROUP NodeServiceCapability, the process of setting file ownership and permissions based on the fsGroup specifi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23988"} {"prompt": "Explain this yaml code:\n\nsecurityContext:\n runAsUser: 1000\n runAsGroup: 3000\n fsGroup: 2000\n fsGroupChangePolicy: \"OnRootMismatch\"", "completion": "This yaml code Configure a Security Context for a Pod or Container | If you deploy a Container Storage Interface (CSI) driver which supports the VOLUME_MOUNT_GROUP NodeServiceCapability, the process of setting file ownership and permissions based on the fsGroup specifi", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Security Context for a Pod or Container | If you deploy a Container Storage Interface (CSI) driver which supports the VOLUME_MOUNT_GROUP NodeServiceCapability, the process of setting file ownership and permissions based on the fsGroup specifi", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23989"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-2\nspec:\n securityContext:", "completion": " runAsUser: 1000\n containers:\n - name: sec-ctx-demo-2\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n runAsUser: 2000\n allowPrivilegeEscalation: false", "language": 
"yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23990"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-2\nspec:\n securityContext:\n runAsUser: 1000\n containers:\n - name: sec-ctx-demo-2\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n runAsUser: 2000\n allowPrivilegeEscalation: false", "completion": "This yaml code Configure a Security Context for a Pod or Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23991"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-2.yaml", "completion": "This shell code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23992"} {"prompt": "Complete this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n2000 1 0.0 0.0 4336 764 ? Ss 20:36 0:00 /bin/sh -c node server.js", "completion": "2000 8 0.1 0.5 772124 22604 ? 
Sl 20:36 0:00 node server.js\n...", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23993"} {"prompt": "Explain this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n2000 1 0.0 0.0 4336 764 ? Ss 20:36 0:00 /bin/sh -c node server.js\n2000 8 0.1 0.5 772124 22604 ? Sl 20:36 0:00 node server.js\n...", "completion": "This text code Configure a Security Context for a Pod or Container | Exit your shell:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23994"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-3", "completion": "spec:\n containers:\n - name: sec-ctx-3\n image: gcr.io/google-samples/hello-app:2.0", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23995"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-3\nspec:\n containers:\n - name: sec-ctx-3\n image: gcr.io/google-samples/hello-app:2.0", "completion": "This yaml code Configure a Security Context for a Pod or Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23996"} {"prompt": "Explain this shell code:\n\nkubectl apply -f 
https://k8s.io/examples/pods/security/security-context-3.yaml", "completion": "This shell code Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Verify that the Pod's Container is running:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23997"} {"prompt": "Explain this text code:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 0.0 0.0 4336 796 ? Ss 18:17 0:00 /bin/sh -c node server.js\nroot 5 0.1 0.5 772124 22700 ? Sl 18:17 0:00 node server.js", "completion": "This text code Configure a Security Context for a Pod or Container | In your shell, view the status for process 1:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | In your shell, view the status for process 1:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_23998"} {"prompt": "Complete this text code:\n\n...\nCapPrm:\t00000000a80425fb", "completion": "CapEff:\t00000000a80425fb\n...", "language": "text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Make a note of the capabilities bitmap, and then exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_23999"} {"prompt": "Explain this text code:\n\n...\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\n...", "completion": "This text code Configure a Security Context for a Pod or Container | Make a note of the capabilities bitmap, and then exit your shell:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Make a note of the capabilities bitmap, and then 
exit your shell:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24000"} {"prompt": "Complete this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-4\nspec:", "completion": " containers:\n - name: sec-ctx-4\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n capabilities:\n add: [\"NET_ADMIN\", \"SYS_TIME\"]", "language": "yaml", "type": "pod-definition", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_24001"} {"prompt": "Explain this yaml code:\n\napiVersion: v1\nkind: Pod\nmetadata:\n name: security-context-demo-4\nspec:\n containers:\n - name: sec-ctx-4\n image: gcr.io/google-samples/hello-app:2.0\n securityContext:\n capabilities:\n add: [\"NET_ADMIN\", \"SYS_TIME\"]", "completion": "This yaml code Configure a Security Context for a Pod or Container | Create the Pod:", "language": "yaml", "type": "pod-definition_explanation", "context": "Configure a Security Context for a Pod or Container | Create the Pod:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24002"} {"prompt": "Explain this shell code:\n\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-4.yaml", "completion": "This shell code Configure a Security Context for a Pod or Container | Get a shell into the running Container:", "language": "shell", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Get a shell into the running Container:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24003"} {"prompt": "Complete this text code:\n\n...\nCapPrm:\t00000000aa0435fb", "completion": "CapEff:\t00000000aa0435fb\n...", "language": 
"text", "type": "code-example", "context": "Configure a Security Context for a Pod or Container | Compare the capabilities of the two Containers:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_24004"} {"prompt": "Explain this text code:\n\n...\nCapPrm:\t00000000aa0435fb\nCapEff:\t00000000aa0435fb\n...", "completion": "This text code Configure a Security Context for a Pod or Container | Compare the capabilities of the two Containers:", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | Compare the capabilities of the two Containers:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24005"} {"prompt": "Complete this yaml code:\n\n...\nsecurityContext:", "completion": " seccompProfile:\n type: RuntimeDefault", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the Seccomp profile to a pre-configured file at /seccomp/my-profiles/profile-allow.json:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_24006"} {"prompt": "Explain this yaml code:\n\n...\nsecurityContext:\n seccompProfile:\n type: RuntimeDefault", "completion": "This yaml code Configure a Security Context for a Pod or Container | Here is an example that sets the Seccomp profile to a pre-configured file at /seccomp/my-profiles/profile-allow.json:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the Seccomp profile to a pre-configured file at /seccomp/my-profiles/profile-allow.json:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24007"} {"prompt": 
"Explain this text code:\n\n/seccomp/my-profiles/profile-allow.json", "completion": "This text code Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. The appArmorProfile field is a AppArmorProfile object consisting of typ", "language": "text", "type": "code-example_explanation", "context": "Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. The appArmorProfile field is a AppArmorProfile object consisting of typ", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24008"} {"prompt": "Complete this yaml code:\n\n...\nsecurityContext:", "completion": " seccompProfile:\n type: Localhost\n localhostProfile: my-profiles/profile-allow.json", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. The appArmorProfile field is a AppArmorProfile object consisting of typ", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_24009"} {"prompt": "Explain this yaml code:\n\n...\nsecurityContext:\n seccompProfile:\n type: Localhost\n localhostProfile: my-profiles/profile-allow.json", "completion": "This yaml code Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. 
The appArmorProfile field is a AppArmorProfile object consisting of typ", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Security Context for a Pod or Container | To set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. The appArmorProfile field is a AppArmorProfile object consisting of typ", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24010"} {"prompt": "Complete this yaml code:\n\n...\ncontainers:\n- name: container-1", "completion": " securityContext:\n appArmorProfile:\n type: RuntimeDefault", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the AppArmor profile to a pre-configured profile named k8s-apparmor-example-deny-write:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_24011"} {"prompt": "Explain this yaml code:\n\n...\ncontainers:\n- name: container-1\n securityContext:\n appArmorProfile:\n type: RuntimeDefault", "completion": "This yaml code Configure a Security Context for a Pod or Container | Here is an example that sets the AppArmor profile to a pre-configured profile named k8s-apparmor-example-deny-write:", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Security Context for a Pod or Container | Here is an example that sets the AppArmor profile to a pre-configured profile named k8s-apparmor-example-deny-write:", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24012"} {"prompt": "Complete this yaml code:\n\n...\ncontainers:\n- name: container-1", "completion": " securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write", "language": "yaml", 
"type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | For more details please see, Restrict a Container's Access to Resources with AppArmor.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_24013"} {"prompt": "Explain this yaml code:\n\n...\ncontainers:\n- name: container-1\n securityContext:\n appArmorProfile:\n type: Localhost\n localhostProfile: k8s-apparmor-example-deny-write", "completion": "This yaml code Configure a Security Context for a Pod or Container | For more details please see, Restrict a Container's Access to Resources with AppArmor.", "language": "yaml", "type": "kubernetes-manifest_explanation", "context": "Configure a Security Context for a Pod or Container | For more details please see, Restrict a Container's Access to Resources with AppArmor.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24014"} {"prompt": "Complete this yaml code:\n\n...\nsecurityContext:", "completion": " seLinuxOptions:\n level: \"s0:c123,c456\"", "language": "yaml", "type": "kubernetes-manifest", "context": "Configure a Security Context for a Pod or Container | Kubernetes v1.27 introduced an early limited form of this behavior that was only applicable to volumes (and PersistentVolumeClaims) using the ReadWriteOncePod access mode.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_completion_24015"} {"prompt": "Explain this yaml code:\n\n...\nsecurityContext:\n seLinuxOptions:\n level: \"s0:c123,c456\"", "completion": "This yaml code Configure a Security Context for a Pod or Container | Kubernetes v1.27 introduced an early limited form of this behavior that was only applicable to volumes (and PersistentVolumeClaims) using the ReadWriteOncePod access mode.", "language": "yaml", "type": "kubernetes-manifest_explanation", 
"context": "Configure a Security Context for a Pod or Container | Kubernetes v1.27 introduced an early limited form of this behavior that was only applicable to volumes (and PersistentVolumeClaims) using the ReadWriteOncePod access mode.", "source": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/", "id": "k8s_00770_code_explain_24016"} {"prompt": "Explain this text code:\n\nconflicting SELinux labels of volume :